Is Deep Learning AI going to result in human intelligence or better anytime soon? – the setup

Does a significant advance like the recent progress in AI presage a massive new, potentially dangerous robotic world?


Elon Musk, Stephen Hawking, Bill Gates, and others have stated that recent advances in AI, specifically around CNNs (Convolutional Neural Nets), also called Deep Learning, have the potential to finally represent real AI.

This is exciting and worrisome if true. I have been interested in this problem from the beginning of my career. When I started doing research at MIT into AI, I had kind of a "depressing" feeling about the science. It seemed to me that the process the brain used to think couldn't be that hard, and that it wouldn't take computer scientists very long, trying lots of possible approaches, to learn the basic operation and process of how people "abstract and learn" and eventually give computers the ability to compete with us humans. Well, this has NOT been easy. Decades later we had made essentially zero progress in figuring out how the brain does this "learning" thing, let alone what consciousness is and how to achieve it. I have a blog post about that sorry state of affairs from the computer science perspective, and about the equally disappointing results of biologists trying to solve the problem from the other side, i.e. to figure out how the brain works. Recent discoveries indicate the brain may be far more complicated than we thought even a few years ago. This is not surprising. My experience in almost every scientific discipline is that the more we look into something in nature, the more we inevitably discover it is a lot more complicated than it first seemed.

Definition of “Smart”

Computer scientists categorize artificial intelligence into three levels. I think we could probably come up with more levels, but let's start with the basic idea. ANI is what we have done for the last 30+ years in artificial intelligence, and it is still what we do with CNNs, DBNs, or any of the new technologies.

ANI (Artificial Narrow Intelligence)

Artificial Narrow Intelligence (ANI) refers to the ability of a computer to learn in a specific discipline. Frequently this is simply a matter of programming in the current knowledge of a particular area. In the past there have been some really impressive examples of this that lead some people to think the programs are intelligent, but of course they don't know much more than they are taught and don't really have the opportunity to learn beyond the area they know.

An example of this is Mathematica, developed in the 1980s, which could solve differential equations that many of the world's best mathematicians couldn't solve by hand. A few years ago, IBM developed Watson, which made a lot of press for beating people at Jeopardy! by finding answers to general questions from enormous bodies of text, much of it drawn from the Internet. This may sound like an easy problem, but figuring out the right answer to a general question is harder than you might think. IBM now pushes Watson to help businesses build more intelligent-looking applications.
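As a small, concrete illustration of this kind of narrow symbolic skill, here is a sketch using SymPy, an open-source computer algebra library chosen purely for illustration (the article itself refers to Mathematica). It solves an ordinary differential equation symbolically, yet it knows nothing outside of mathematics.

```python
# A minimal sketch of narrow symbolic "intelligence": solving an ODE
# symbolically with SymPy (an open-source stand-in for Mathematica).
import sympy as sp

x = sp.symbols("x")
f = sp.Function("f")

# Solve f''(x) + f(x) = sin(x) for f(x).
ode = sp.Eq(f(x).diff(x, 2) + f(x), sp.sin(x))
solution = sp.dsolve(ode, f(x))

# The general solution combines the C1/C2 homogeneous terms with the
# particular solution -x*cos(x)/2.
print(solution)
```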

Most recently, for recognition problems in specific areas like speech, text, objects, and vision, a new generation of technologies, CNNs and Deep Learning, have proven to be the most powerful ways to systematize learning: they learn a specific set of features and abstract the basic features of the problem area across several levels. This has created the excitement that we may be on the cusp of the next level of AI, what is called AGI.
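To make the idea of abstracting features across several levels concrete, here is a minimal sketch of a small convolutional network in PyTorch (a library choice of mine for illustration, not something the article specifies). Each convolution-and-pooling stage can be read as one level of abstraction, from raw pixels toward higher-level features.

```python
# A minimal sketch of a convolutional network with two levels of
# feature abstraction, assuming PyTorch is installed.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # level 1: edge-like features
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # level 2: combinations of edges
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)  # assumes 28x28 inputs

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Example: classify a batch of four 28x28 grayscale images.
logits = TinyCNN()(torch.randn(4, 1, 28, 28))
print(logits.shape)  # torch.Size([4, 10])
```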

AGI (Artificial General Intelligence)

The problem of general intelligence is significant, and nobody thinks we have solved it, but there is some excitement about CNNs becoming the basis of an AGI. The worry is that an AGI would make humans obsolete or even threaten humans in some way. It is assumed that AGIs would not be MORE intelligent than humans, so it is not that humans would become irrelevant, rather that we wouldn't be special anymore.

This series of blog posts is intended to explain CNNs and to see how far they are from AGI.

ASI (Artificial Superintelligence)

Eventually, if we can build an AGI, some assume it would simply be a matter of scaling the technology to produce ASI, superhuman intelligence. This is not obviously possible. We don't know whether another level of intelligence is possible or achievable, or what the barriers to developing it might be, because we haven't gotten past ANI.

Things are simplified by a discovery but rapidly get more complicated

Computer science has had stages of success trying to build ANIs. In the 80s we had initial success, but soon discovered that many problems of learning were very difficult. The field actually went into a funk for ten years because people became convinced it was overhyped, and it was. In the 90s rule-based knowledge systems gained prominence and some advances were made, but the progress was short-lived. Again AI went into a funk, as the field was not able to take the simple ideas of rule-based systems beyond a narrow set of problems. Around 2000, machine learning became the rage. Machine learning is simply using statistical techniques to learn things: the power of mathematics applied to learning. If you fed machine learning systems enough good data labeled with the correct answers, they could do a better job of appearing to learn or recognize things. Machine learning is still in wide use. Around 2010, a handful of researchers worldwide made advances in neural networks, in particular CNNs (Convolutional Neural Networks). They produced neural-network systems that showed much better learning. This is where we are today.
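To illustrate the "feed it labeled data and it learns statistically" idea in the paragraph above, here is a minimal sketch using scikit-learn; the dataset and the choice of model are my own assumptions for illustration, not details from the article.

```python
# A minimal sketch of supervised machine learning: fit a statistical
# model to labeled examples, then check how well it generalizes.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)            # images labeled with the correct digit
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000)      # a simple statistical model
model.fit(X_train, y_train)                    # "learn" from the labeled data
print(model.score(X_test, y_test))             # accuracy on examples it has never seen
```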

I postulate that the history above is typical of numerous areas of science. People come up with a simple idea that seems to give answers and does something really cool that catches people's attention. People like simplicity, and simplicity means the idea can be used to answer some questions better than before. Usually simple modifications to the theory make stunning progress at first. However, we almost always rapidly discover the limitations of the simple idea. Things get more complicated and we see diminishing returns.

Science then goes through a period where we make marginal improvements and wait for the next "simple" idea, one that gives us new insight and new advances and lets us solve the big problems the previous insight couldn't.

Let's take genetics, for instance, and the amazingly simple discovery of DNA: four principal chemicals distributed across a number of chromosomes in a helical fashion. This simple model seemed like it would lead to some simple experiments to understand these DNA strands and how they function. Suffice it to say, the simple idea has become enormously complex. We now know that changes to DNA acquired during a person's lifespan can affect how they function, and some can even be passed on to their children, contrary to a fundamental belief of evolution as postulated by Darwin. We learned that 99% of DNA was "junk," only to discover it wasn't junk but a whole different language from the language of genes. The four principal chemicals became five, and then a sixth was added. Each of these simple ideas breaks down into more and more complex patterns.

This same pattern, in which a simplifying discovery leads to some advances, we think it's all so simple, and then we find it's far more complicated as we look deeper, is found over and over in physics, in chemistry, and in almost every subject I am aware of. So it is not surprising that the more we look at brains and intelligence, the more we learn that they are more complicated than we initially thought. We have a conceit as humans that the world must fundamentally break down into simple concepts we can figure out and eventually understand, but every time we make an advance in understanding one part, we seem to uncover a whole new set of questions that put full understanding farther away than ever.

The real point is that the insights and gains of science tend to expose a whole set of problems and issues we had no idea about before. Every simplification is followed by an incredible array of complications that lead us to more questions than we knew existed when we started. So it is a good thing to make these advances, but there is always an excitement when we discover such a simplification, whether it is string theory in physics or DNA in biology, that makes us believe it will give us the holy grail.

Interestingly, this philosophical problem, that as soon as we look at something through a simplification or organizing mechanism the problem becomes more complex, was discussed in "Zen and the Art of Motorcycle Maintenance," an incredibly insightful philosophy book published in 1974 and still relevant.

I am therefore 100% skeptical that the recent discovery of a new way to operate neural nets is somehow the gateway to achieving the 50-year-old goal of human-level intelligence. Nevertheless, the advance has moved us substantially forward in a field that had been more or less dead for decades. We can suddenly claim much improved recognition rates for voices, faces, and text, along with lots of new discoveries. It's an exciting time. Yet the goal of human-level intelligence remains far, far from where we are.

Will neural nets produce applications that take our jobs away or make decisions to end human life? Maybe, but that is nothing different from what has been going on for centuries.

Neural nets were first imagined in the 1950s and 1960s. The first implementations were tried in the 70s and 80s and produced less than impressive results. For roughly 30 years virtually no progress was made, until around 2010. Some relatively simple advances in neural nets have since produced the best-performing pattern recognition algorithms we have for speech, text, facial, and general visual recognition problems, among others. This has led some to herald neural nets, and in particular a form of neural nets called Convolutional Neural Nets (CNNs), sometimes also called Deep Learning, as a new discovery that will lead to human intelligence soon.

Some of these new recognition machines are capable of defeating humans at specific skills. This has always been the case with computers in general: for specific problems we can design, using our understanding of the problem, an algorithm which, when executed by a high-speed computer, performs better at that specific task than any human. One of the first that was impressive and useful was Mathematica. Developed in the 1980s, Mathematica was able to solve complex algebraic and differential equation problems that even the best mathematicians couldn't solve by hand. IBM's Watson is a recent example of something similar, but as I've pointed out, such things are not examples of general-purpose learning machines. Mathematica will never help you plan a vacation to Mexico, nor will Watson solve mathematical equations. Neither will ever have awareness of the world or be able to go beyond its limited domain. They may be a threat to humans in the sense that they might cost jobs if we develop enough of these special-purpose systems; they could eventually do a lot of the rote work that many humans do. If such systems are put in charge of dangerous things, such as our nuclear arsenal, and they inadvertently make a decision to kill humans, it is not because of evil AI; it is because some human was stupid enough to put a computer with no common sense in charge. Such problems should be blamed on the humans who wrote those algorithms or who put such systems in charge of dangerous elements.

So, the idea that computers and AI could be a physical threat to humans or a risk to jobs is real, but it is within our current ability to control.

For links to more information on these technologies, see:

perceptive-apis-cognitive-technologies-deep-learning-convolutional-neural-networks

Articles in this series on Artificial Intelligence and Deep Learning

is-deeplearning-going-to-result-in-human-intelligence-or-better-anytime-soon-the-setup

Is Deep Learning going to result in human intelligence or better anytime soon? – part 2, CNN and DBN explained

Deep Learning Part 3 – Why CNN and DBN are far away from human intelligence

artificial-intelligence-the-brain-as-quantum-computer-talk-about-disruptive