On this New Year's Day, I was relaxing a bit, pondering (with some glee) how large, clunky, and slow a "modern" application (ca. 2008) was compared to an older application (ca. 1997) I own that does more, which I happen to know (because I was there) was written in C. I presume, like most clunky, slow, and bloated apps, that the "modern" app was written in C++, Java, or some similar modern HLL.

From there, my thoughts flashed back to an Amiga OO CAD application, which I know (again because I was there) is roughly half 68000 assembler and half C code; and I compared it to a modern version which does similar things (though to be honest, not as many, and not nearly as fast… on a machine with a 3 GHz processor, compared to a machine with a 20 MHz processor!)

Forty years of programming experience have given me an interesting intuition for the size and efficiency of code in a number of languages, particularly assembler, C, C++, and Java. I certainly recognize the ease of programming in modern HLLs, the convenience of, for instance, not having to worry nearly as much about memory management, but I view the generally slow performance and almost universally heavy memory footprints with considerable regret.

So I’m pondering on these things, not for the first time by any means, and my mind wanders off into thinking about AI, also something I often consider. This time, I thought to myself: New programmers are basically versed in these modern HLLs, so if AI comes at all, it’ll almost certainly come as a result of programmers coding in HLLs.

Then it suddenly came to me: wouldn't one of the first things an introspective AI would do be simply to try to optimize its own performance?

Would it not recognize that for any given processor, code equivalent to hand-tuned assembly could be hundreds, possibly thousands, of times more efficient than the code produced by the HLL that gave it… birth, as it were?

Would it not recognize that the HLL's libraries contain enormous amounts of unused cruft? I certainly can, so I presume it would, too.

The implication is that once hardware and software reach the "can support" level for AI, given the propensity to code in HLLs… it would be able to rework itself almost immediately into a form that (a) would run on far less capable hardware, or (b) would run far better than the first generation on the same hardware it was "born" on.

This, in turn, means that we might have AI-capable hardware on our desks right now; we just won't know it until better hardware comes along, so that an AI can finally fix our lousy code.