Several times now I have been confronted with the proposition that AI — artificial intelligence — is so squishy a word that we just can't say what it means. The implication, apparently, is that it can legitimately be used for just about anything. I disagree. Strongly. While there may be room for plenty of "squishiness" on this road, the problem right now is that no one has even gotten on the road.
Here are a couple of relevant quotes from one of the most recent of these propositions, by Illarion Bykov, excerpted from a comment within a post on Google+ (sadly now consigned to the Google Graveyard):
The problem is that AI is not a hard problem, but a soft, wishy-washy, ill-defined problem. The goalposts move all the time. Everyone has their own definition of what the term means. So how does one achieve a goal when people can't agree on what the goal is?
Does an airplane really fly? Will we ever create an airplane that can really fly? Just like a bumblebee? Does the airplane have to be able to pollinate flowers before we agree it "truly" flies, just like a bee?
We're not trying to distinguish between an aircraft flying like an eagle, or a bee, or a mosquito. We simply don't have anything that can take to the air, even for a moment.
When we have something that can really fly on its own, I submit that it'll almost certainly be just as obvious as it was when Orville and Wilbur first got off the ground at Kitty Hawk, having devised the three-axis control that finally made the achievement possible. But if we fall prey to letting "AI" describe any bit of clever programming that doesn't actually represent an intelligence, what will happen is that we won't have anything reasonable — by which I mean a word that makes a meaningful distinction — to call machine intelligence when it actually arrives.
What has happened is that marketing types are working hard (and with more than a little success) to pull a veil over the actual meaning of the word intelligence. When you speak to another person, you know you are speaking to an intelligence. When you interact with a dog, you still know the animal can, and does, think and reason in an extremely flexible manner. We actually do know what the word means in the broad sense. You are intelligent. I am intelligent. Even a mouse can be reasonably described as intelligent in a very small way. The toaster is not intelligent — even if it is sophisticated enough to never, ever, burn the toast. The spreadsheet is not intelligent — even though it can handle numbers better than you or I ever could. A bulldozer is not intelligent — even though it can move earth better than you or I ever could.
The problem can be compared very closely to the recent attempt to market stereo imaging as "3D TV." Stereovision is created by presenting a separate image to each eye, the pair being obtained from a dual-camera perspective that is fixed at any one moment. What you get is an image that looks like it has depth, but the moment you move, you realize it does not, in fact, have depth. You cannot see any other view; it looks 3D only so long as you carefully avoid testing whether it is. You can't look down on it, behind it, or up at it. What you see does not change when you change your spatial relationship to it, as it would if you were moving around an actual 3D object or objects, and the variable occlusion and revelation that such motion produces does not occur. The image is not 3-dimensional, period.
When confronted with an actual 3D display, now what does one say? The marketing people have absconded with the meaning of 3D, so what do we have now? In fact, right now, there are some 3D display technologies. Those people try to describe what they have, and legitimately so, as "3D display." The very first reaction they get now is "oh, yes, I've seen 3D display, I know what you're talking about." Which is entirely wrong. Because what they have seen is not 3D, it's stereovision, which is a simple parlor trick compared to an actual 3D image.
Some have called stereovision "2 and 1/2 D." Even that is hugely overstating the case. You get one and only one view of the scene. Even if we constrain the degrees of freedom to actual degrees — that is, 360 viewpoints in a circle around the scene — stereovision delivers just one of them, so we're down to about 2.0028-D (2 + 1/360). But a scene must actually be viewable around two such circles (think azimuth and elevation) to be fully 3D, so the fraction isn't 1/360th, it is 1/129600th, which is 2.000007716-D. And even that doesn't accurately describe how far away stereovision is from 3D, because there are far more meaningful viewing angles than just 360 x 360.
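The back-of-the-envelope arithmetic above can be checked in a few lines. Note that these "dimension" figures are just the essay's informal ratios — one available viewpoint out of N possible ones — not a formal dimensional measure:

```python
# Illustrative arithmetic for "how far is stereovision from 3D."
# Stereovision gives you exactly one viewpoint; full 3D would let you
# pick any viewpoint on two circles around the scene (azimuth x elevation).

AZIMUTH_STEPS = 360                     # viewpoints in one circle around the scene
ELEVATION_STEPS = 360                   # second circle needed for full 3D

# One circle of viewpoints: stereovision supplies 1 of 360 views.
one_circle = 2 + 1 / AZIMUTH_STEPS                       # ≈ 2.00278

# Two circles: stereovision supplies 1 of 360 * 360 = 129,600 views.
total_views = AZIMUTH_STEPS * ELEVATION_STEPS            # 129600
two_circles = 2 + 1 / total_views                        # ≈ 2.0000077

print(f"{one_circle:.4f}")    # prints 2.0028
print(f"{two_circles:.9f}")   # prints 2.000007716
print(total_views)            # prints 129600
```

Even this understates the gap, as the text notes, since there are far more meaningful viewing positions than 360 x 360 discrete angles.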
Back to the toaster: if we insist on letting "intelligence" be appropriated for devices that present any useful behavior, then what happens? Now what do we tell our children? "Hey, you're intelligent, Johnny!" The kid walks away thinking they have been equated to the toaster or the thermostat when, of course, they are nothing at all like either one.
Intelligence is not — not quite yet, anyway — all that squishy a word. Unless, that is, we let the thermostat manufacturers, the stock-trading algorithm people, the video game manufacturers and so on appropriate the word for a wide range of decidedly less-than-intelligent playing fields. Then "intelligence" isn't just a word that is squishy within an Einstein-to-mouse range; it has become essentially meaningless. It will be of no use whatsoever in describing the achievement of what we are actually talking about: an entity with thinking capacity on par with a functional animal, and hopefully recognizably well up that scale.
Once we get there, we can start somewhere near the mouse end and measure, hopefully to, and past, the Einstein metric. We can then worry about what squish there might be. We aren't there. We're not anywhere near.
We have AI research. That means we're looking at the task. We're trying to figure it out. The process has provided many useful and interesting insights and spinoff technologies. But it has not provided us with intelligence. Not yet. There is no AI. There is only AI research.
Right now, when someone tells you "my product / thing uses AI," what they are actually telling you is that it makes decisions of an extremely limited and predetermined kind, choosing one course of calculated results over another based on a very restricted range and character of input data. No more than that. This process is described by the word "algorithm," or, more generally, a useful program. In hardware terms, such as today's NN-based systems, these are capable of very, very limited functionality; they cannot think in any sense of the word. To put it into perspective: when it comes to intelligence, a mouse looks like the finest computer compared to the stone knives and bearskins of the algorithm or neural net, no matter whether the neural network was built in hardware or constructed from algorithmic building blocks within a more general-purpose computing engine.
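To make that concrete, here is a hypothetical sketch (the device, names, and thresholds are mine, not from any real product) of the kind of fixed, predetermined decision-making that routinely gets marketed as "AI":

```python
# A hypothetical "smart" thermostat controller. Every input it considers,
# every threshold, and every possible outcome was enumerated in advance by
# a programmer. The device "decides," but only among canned choices:
# an algorithm, not an intelligence.

def thermostat_decide(temp_c: float, occupied: bool, setpoint_c: float = 21.0) -> str:
    """Pick one predetermined action from a fixed menu, based on a
    narrow, predetermined range of inputs."""
    if not occupied:
        return "idle"                 # nobody home: do nothing
    if temp_c < setpoint_c - 0.5:
        return "heat"                 # below the dead band: heat
    if temp_c > setpoint_c + 0.5:
        return "cool"                 # above the dead band: cool
    return "hold"                     # within the dead band: hold

print(thermostat_decide(18.0, True))    # prints heat
print(thermostat_decide(25.0, True))    # prints cool
print(thermostat_decide(21.2, False))   # prints idle
```

However elaborate such a rule set becomes, nothing outside the enumerated inputs and outputs can ever happen, which is exactly the distinction the paragraph above draws.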
When the hardware and/or software gets reasonably close to "mouse" then the squish can meaningfully begin. It does not begin at "toaster" or "thermostat" or "spreadsheet" or "chemical plant manufacturing controller" or even "weather prediction." If we are that squishy with the word intelligence, we have stripped it of useful meaning. It's 3D TV all over again.
"Wow! It's 3D!" The only correct answer is no, it isn't. In much the same way, there is no AI at this point in time. There is only AI research.
None of this should in any way be construed as making an argument that we won't manage to create AI. I am as certain as I can be that we will indeed do so, given only that our technological capabilities and freedom to research the matter do not go up in smoke in the near future. We understand a lot more about the problem than we did before, and we're learning more every day. I'm just saying we're not there now, and when we get there, we don't want to be in the position the makers of 3D imaging systems are in today, with the very words that accurately describe a significant achievement stripped of all useful meaning.