
We can (if we are honest) observe that progress, and the potential it unleashes, is often not all that closely linked with what’s commercially available or common around the time of the fundamental invention. In the first decade after lasers were invented, for instance, there was no significant commercial application. When the integrated circuit was invented, it wasn’t much to look at, and functionally speaking it remained, for decades, outright pitiful compared to the ICs of today. We’re still developing a full understanding of how neurons do what they do. In laser parlance, in 2017 we are yet pre-laser, and in my opinion, anyone who tries to tell us that lasers, figuratively speaking, can’t do X at this point should be considered, at most, a hand-waver in the grips of a fit of profound hubris.

With regard to the subject at hand – intelligence and consciousness resulting from information processing – nature has, fortunately enough, provided numerous models at various levels. So we know it can be done in at least one way: with neural-like systems. Sure, it’s obviously not easy. Brains use very small, very complicated, and very difficult-to-understand computing elements.

But achieving a manufactured intelligence is also obviously highly interesting and, to many, highly desirable. Assuming only that our technological progress doesn’t actually halt due to some unrelated factor (war, asteroid, runaway climate, alien invasion, etc.), there are many reasons, all supporting one another very well, to assume that we will “get there from here” – not the least of which is that there is likely to be a great deal of economic leverage in such technology.

At present there are no known physics-related reasons to presume that we won’t get there eventually. If one is (or several are) discovered – for instance, should it be determined at some point in the future that brains use some heretofore unknown physics mechanism(s) to do what they do – then we may quite suddenly be on different ground in terms of ultimate practicality. But there isn’t even a hint of this as yet. It definitely appears to be chemistry, electricity, and topology all the way down as far as brains go. That stuff, we can do. Larger and clumsier and perhaps even slower… perhaps even only as emulation… yet we can do it. We just don’t know exactly what to do. Yet.

It seems to me that betting against AI is betting against human nature. That’s not a bet I’d presume to make.