Inefficiencies of modern HLLs and implications for AI
Category: AI, ML and LDNLS
by Admin on Thursday, January 1st, 2009 at 21:13:23

On this New Year's Day, I was relaxing a bit, pondering (with some glee) how large, clunky, and slow a "modern" application (ca. 2008) was, despite doing less than an older application (ca. 1997) I own, which I happen to know (because I was there) was written in C. I presume, like most clunky, slow, and bloated apps, that the "modern" app was written in C++, Java, or some similar modern HLL.

From there, my thoughts flashed back to a mid-1980s PCB OO CAD application, which I know (again, because I wrote it) was written in about ½ 68000 assembler and ½ C; and I compared this to a modern version that does similar things, though not nearly as fast... on a machine with a 3 GHz processor compared to a machine with a 20 MHz processor!

Forty years of programming experience have given me an interesting intuition for the size and efficiency of code in a number of languages, particularly assembler, C, C++, and Java. I certainly recognize the ease of programming in modern HLLs, the convenience, for instance, of not having to worry nearly as much about memory management; but I view the generally slow code and almost universally heavy framework footprints with considerable regret.

So I'm pondering these things, not for the first time by any means, and my mind wanders off into thinking about AI, also a subject of considerable interest to me. This time, I thought to myself: new programmers are basically versed in these modern HLLs and their associated heavyweight frameworks, so if AI comes at all, it'll almost certainly come as a result of programmers coding in HLLs, perhaps on top of neural-network (NN) or similarly special-purpose hardware.

Then it came to me: wouldn't one of the first things an introspective AI would do be simply to try to optimize its own performance?

Would it not recognize that, for any given processor, code equivalent to hand-tuned assembly would be on the order of hundreds, possibly thousands, of times more efficient than that produced by the HLL and frameworks that gave it... birth, as it were?
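
As a rough sketch of where that sort of gap comes from (my own illustration, not anything taken from a real framework), the little C program below does the same summation two ways: once as a tight loop over a contiguous array, the kind of code a hand-tuner or a decent C compiler produces, and once through heap-allocated "boxed" values reached via a pointer chase and an indirect call, a crude stand-in for the layers of indirection a heavyweight HLL runtime tends to impose. The exact ratio depends on the compiler, flags, and hardware; the shape of the result is the point.

/* boxed_vs_flat.c -- illustrative micro-benchmark, not a rigorous one.
 * Build with something like:  cc -O2 -o boxed_vs_flat boxed_vs_flat.c
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 1000000

/* "Framework-style" boxed integer: separately heap-allocated, its value
 * reached through a function pointer, mimicking virtual dispatch. */
typedef struct Boxed {
    long (*get)(const struct Boxed *self);
    long value;
} Boxed;

static long boxed_get(const Boxed *self) { return self->value; }

static double seconds(clock_t start, clock_t end)
{
    return (double)(end - start) / CLOCKS_PER_SEC;
}

int main(void)
{
    long  *flat   = malloc(N * sizeof *flat);   /* one contiguous array  */
    Boxed **boxes = malloc(N * sizeof *boxes);  /* N pointers to N boxes */
    if (!flat || !boxes)
        return 1;

    for (long i = 0; i < N; i++) {
        flat[i] = i;
        boxes[i] = malloc(sizeof *boxes[i]);
        boxes[i]->get = boxed_get;
        boxes[i]->value = i;
    }

    clock_t t0 = clock();
    long sum_flat = 0;
    for (long i = 0; i < N; i++)        /* tight, cache-friendly loop */
        sum_flat += flat[i];
    clock_t t1 = clock();

    long sum_boxed = 0;
    for (long i = 0; i < N; i++)        /* pointer chase + indirect call */
        sum_boxed += boxes[i]->get(boxes[i]);
    clock_t t2 = clock();

    printf("direct loop: sum=%ld in %.6f s\n", sum_flat, seconds(t0, t1));
    printf("boxed loop : sum=%ld in %.6f s\n", sum_boxed, seconds(t1, t2));

    for (long i = 0; i < N; i++)
        free(boxes[i]);
    free(boxes);
    free(flat);
    return 0;
}

And note that this comparison still flatters the heavyweight case: both loops here are compiled C, with no garbage collector, no bounds checks, and no framework call stack underneath them.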

Would it not recognize that the HLL's libraries and frameworks contain enormous amounts of unused cruft? I certainly can, so I presume it will, too.
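
One concrete way to see that cruft, at least at the level of C object code, is to see how much of a binary the linker will happily keep even though it is never reachable from main(). The sketch below is mine, and it assumes a GCC or Clang toolchain with GNU binutils; it deliberately carries a 64 KiB table and a helper function that nothing ever calls. Build it once normally and once with section garbage collection, then compare the two with the size utility, and you can watch the dead weight appear and disappear. A modern framework carries the same kind of baggage, just several orders of magnitude more of it.

/* cruft.c -- a small demonstration of linker-level dead weight.
 *
 * unused_table and unused_helper() have external linkage and are never
 * referenced by main(), so the compiler must emit them; whether they
 * survive into the final executable is up to the linker.  Assuming a
 * GCC/Clang + GNU binutils toolchain:
 *
 *   cc -O2 -o with_cruft cruft.c
 *   cc -O2 -ffunction-sections -fdata-sections -Wl,--gc-sections \
 *      -o without_cruft cruft.c
 *   size with_cruft without_cruft    # compare text/data sizes
 */
#include <stdio.h>

/* 64 KiB of initialized data that nothing ever reads. */
const unsigned char unused_table[64 * 1024] = { 1 };

/* A function that nothing ever calls. */
int unused_helper(int x)
{
    return x + unused_table[(unsigned)x % sizeof unused_table];
}

int main(void)
{
    puts("only this path is ever executed");
    return 0;
}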

The implication is that once hardware and software reach the "can support" level for AI, and given the propensity to code in HLLs... the AI should be able to rework itself almost immediately into a form that (a) would run on far less capable hardware, or (b) would run a lot better than the first generation on the same hardware it was "born" in.

This, in turn, implies that we might have AI-capable hardware on our desks right now; we just won't know it until there's better hardware out there so the AI can fix our lousy code.

