LDNLS is epitomized by neural-net (NN) and algorithmic solutions that solve only extremely narrow, but often deep, problems: playing Go, guiding a vehicle in well-constrained environments, playing chess, recognizing speech, colorizing images, and so on. I coined the term LDNLS specifically to draw a very specific, very important distinction: it illustrates what intelligence is not.
Intelligence has never been understood to be something as narrow as LDNLS, regardless of how well the specific low-dimensional problem is solved. Intelligence has been understood to be, and remains understood as, a broad synthesis of all of the following: the ability to think about anything one is presented with, applying intuition, induction, reason, speculation, metaphor, evaluation, association, memorization, and so on. Further, we have only ever seen these capacities as aspects of consciousness. It may be that such capacities can exist without consciousness, but that has not yet been demonstrated and may never be.
AI: We have created the artificial aspect: the hardware we build and the software that runs on it. We have recently begun to make serious inroads into LDNLS, but intelligence... no, not yet. Ergo, with the A in hand but no I, there is no AI yet. Instead, we are engaged only in research aimed at figuring out how to create AI, which is wholly appropriate, considering where we are in the process.
When claims are made that a project or product "uses" or "is" AI, we have every reason to be deeply skeptical, because no such thing has been presented to the world publicly as of this writing (May 2016).
I am very confident we'll get to AI, and I suspect that will happen in less than a decade. That makes me even more dismissive of the numerous attempts to characterize as AI the LDNLS methods we have now. When AI is made known to the public, we can all rest assured that we will be well aware of it, and that it will in no way resemble an LDNLS solution. It will be able to tell you why, in just the same way I can tell you I am more than a chess-playing organism.
There are many absolutely appropriate roles for LDNLS, and its immediate and inevitable evolution, sparsely stacked LDNLS. I have no interest in enslaving an intelligent being, mechanical or otherwise, to wash my dishes, change my cat's litter-box, mow the lawn, do my shopping and so forth. Stacking a minimal group of LDNLS solutions in as few chassis as possible (preferably just one, which I am apparently fated to call "Pierre") is definitely the way to go there. A fully functional AI is just as likely to be as interested in, and insistent upon, selecting its own place in the world as you or I are. And that is just how it should be. If that isn't the case, we haven't created an intelligence. We've created a moron.
Stage one: Low Dimensional Neural-Like Systems (LDNLS)
Go, chess, speech and image recognition. These systems are specialized to solve single problems.

Stage two: Sparsely stacked LDNLS
Autonomous vehicles, general household and manufacturing robots. These systems will use numerous specialized LDNLS systems to solve multiple problems which may be only slightly related, or essentially unrelated.

Stage three: AI (conscious systems)
Independent beings, just as we are. These systems will be able to do general problem solving on the fly. They will probably incorporate LDNLS for many purposes; but more is required, and likely even their LDNLS systems will remain malleable.
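The "sparse stacking" idea above amounts to routing, not reasoning: bundle several independent, narrow solvers in one chassis and dispatch each task to the one solver built for it. Here is a minimal, purely illustrative Python sketch of that structure; every class, domain, and task name (including the chassis's nickname) is hypothetical, not an implementation from any real system:

```python
# Illustrative sketch of sparsely stacked LDNLS: independent narrow
# solvers share one chassis, and a dispatcher routes tasks by domain.
# The solvers know nothing of each other; there is no general
# intelligence here, only routing.

class SpeechRecognizer:
    """Stand-in for a narrow speech-recognition model."""
    def solve(self, task):
        return f"transcribed: {task}"

class LawnMowerPlanner:
    """Stand-in for a narrow path-planning model."""
    def solve(self, task):
        return f"mowing plan for: {task}"

class SparselyStackedChassis:
    """One chassis holding many unrelated, specialized solvers."""
    def __init__(self):
        self._solvers = {}

    def register(self, domain, solver):
        self._solvers[domain] = solver

    def handle(self, domain, task):
        # Outside its registered domains the system simply fails --
        # the hallmark of LDNLS as opposed to intelligence.
        if domain not in self._solvers:
            raise ValueError(f"no solver for domain: {domain}")
        return self._solvers[domain].solve(task)

pierre = SparselyStackedChassis()
pierre.register("speech", SpeechRecognizer())
pierre.register("lawn", LawnMowerPlanner())
print(pierre.handle("speech", "hello"))  # transcribed: hello
```

The design point the sketch makes is that adding a solver never changes the others; the stack grows wider, never smarter.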
We've achieved stage one, though it is only now beginning to spread through our society. Multiple frameworks for deep learning and other NN-based technologies have just become broadly available as of 2016, often under the umbrella of "machine learning", or ML. New LDNLS-based products are being implemented every day using these frameworks.
Stage two is a direct and wholly natural consequence of stage one. We are just at the very start of it in mid-2016; I suspect that by the end of 2018 or so, stage two will be fundamental to many available products and services.
Stage three remains as yet out of our reach; consciousness is a very hard problem for us at this time. Based on my own work and other research I am aware of, I think it likely that conscious AI will emerge some time before 2025.