Breakout MegaTrends that will explode in 2015, Continued: Perceptive APIs, Cognitive Technologies, Deep Learning, Convolutional Neural Networks


This is a continuation of the Disruptive Megatrends for 2015 series.

12. Perceptive APIs / Cognitive Technologies / Deep AI – smart technologies become more and more important as differentiators

I strongly believe that rapid adoption of Deep Learning technologies, along with broader adoption of various AI sub-disciplines, will show dramatic growth in 2015.  This is driven by companies' need to extract greater intelligence from BigData and the need to put intelligence into social applications and into applications in general.

Many people are not aware of the significant changes that have happened in Artificial Intelligence in the last few years.  Several areas of AI have made great strides, and combined with hardware advances we are seeing a third wave of AI; this time may be the one that finally sticks and leads to mass adoption.  AI was my original field of study at MIT, and I have a lot of thoughts on conventional AI approaches, which have generally failed and which I was skeptical of from the beginning.  In the last couple of years, however, we have seen the emergence of true AI, or what is being called Deep Learning or Deep AI.

Deep Learning involves the use of Convolutional Neural Networks (CNNs), a synthetic form of “brain” built from virtual neurons arranged in layers.  Each layer of a convolutional neural network transforms the output of the previous layer, either amplifying features or selecting among them.  How many layers to use, which layers follow which, how they are connected, and how to configure them is largely a matter of experience.  The number of layers determines how deep the learning is, and if you feed the output of the network back into its input you have potentially unlimited depth of learning.  It’s like designing your own brain.  The more you need the neural net to learn, the deeper the layers and the more neurons you must use, and the processing required for all those neurons grows so quickly that there is great interest in GPUs.  There are patterns that work in different scenarios.  You initially feed a lot of data into the CNN and it learns; after the training period you feed in new data and the system reports on or acts on that data in whatever way you have trained it to.
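To make the layer idea concrete, here is a minimal sketch of a small CNN.  The article doesn't prescribe a library, so Keras is my assumption, as are the layer sizes, the 28x28 grayscale input, and the placeholder data; treat it as an illustration of the train-then-predict cycle rather than a recipe.

```python
# A minimal CNN sketch (Keras is an assumed choice; sizes are illustrative).
import numpy as np
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),               # e.g. small grayscale images
    layers.Conv2D(32, (3, 3), activation="relu"),  # layer that amplifies features
    layers.MaxPooling2D((2, 2)),                   # layer that selects/summarizes
    layers.Conv2D(64, (3, 3), activation="relu"),  # deeper layer, richer features
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),        # final 10-way classification
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Training phase: feed in labeled data so the network learns.
x_train = np.random.rand(100, 28, 28, 1)           # placeholder data for the sketch
y_train = np.random.randint(0, 10, size=100)
model.fit(x_train, y_train, epochs=1, verbose=0)

# After training, feed in new data and act on the network's output.
predictions = model.predict(np.random.rand(5, 28, 28, 1), verbose=0)
```

The point of the sketch is the structure: stacked convolution and pooling layers doing the amplify/select work, with training and prediction as separate phases.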

Neural networks date back decades (the training techniques that made them practical were worked out in the 1980s) but didn’t seem to work that well at first.  Neural nets made only modest advances over the next 30 years, but around 2010 things changed.  A principal advance was the adoption of LSTM (long short-term memory) units, a recurrent architecture which, combined with deeper networks and more data, led to several impressive achievements that suddenly made neural networks better at recognition than any other approach we have seen.  DeepMind (a British company acquired by Google) has, among other things, refined the use of LSTM in its networks, which seems to have greatly improved their ability to recognize and abstract information.  DeepMind demonstrated results interesting enough to get Google’s attention; the company was bought and its technology has been employed in some of Google’s visual and voice recognition projects.
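For a rough picture of what an LSTM layer looks like in code (again, Keras and all of the sizes here are my assumptions; DeepMind's actual systems are far more sophisticated), a small sequence-recognition network might be sketched as:

```python
# A rough sketch of a recurrent network with an LSTM layer for sequence
# recognition.  Vocabulary size, sequence length, and layer sizes are
# illustrative only.
from tensorflow.keras import layers, models

seq_model = models.Sequential([
    layers.Input(shape=(100,)),                 # a sequence of 100 token ids
    layers.Embedding(input_dim=5000, output_dim=64),
    layers.LSTM(64),                            # LSTM units retain context across the sequence
    layers.Dense(1, activation="sigmoid"),      # e.g. a binary recognition decision
])
seq_model.compile(optimizer="adam", loss="binary_crossentropy")
```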

One of the claims of the DeepMind team was that they could feed the raw video of Atari games into their system and it would learn to play Pong and other games from the pixels alone, well enough in some cases to defeat good human players.   That does sound impressive.   Other examples of Deep Learning applied to pattern recognition have shown significant improvements over previous approaches, leading to much higher accuracy in text and image recognition than ever before.
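DeepMind's Atari player combines a convolutional network with reinforcement learning: the system receives only pixels and a score, and learns which actions lead to reward.  The core idea can be sketched with the plain Q-learning update on a toy problem; the five-state "game", the rewards, and all the parameters below are invented for illustration, and the real system uses a CNN in place of the lookup table.

```python
# Toy Q-learning sketch: learn action values from reward alone, the idea
# underlying DeepMind's Atari player (which uses a CNN instead of a table).
# The 5-state chain "game" and its rewards are invented for illustration.
import numpy as np

n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.9             # learning rate and discount factor

rng = np.random.default_rng(0)
for episode in range(500):
    s = 0
    while s != n_states - 1:                          # rightmost state ends the episode
        a = rng.integers(n_actions) if rng.random() < 0.1 else int(Q[s].argmax())
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0    # reward only at the goal
        # Q-learning update: move Q(s, a) toward reward plus discounted future value
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q)   # the learned values favor moving right toward the reward
```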

Elon Musk, who has seen more of DeepMind than any of us, is worried.  He claims that the technology at Google has the potential to become dangerous AI, meaning AI which may gain some form of innate intelligence or perception that could become a threat to us.    Since then, other luminaries such as Bill Gates and even Stephen Hawking have expressed reservations about Deep Learning AI.

Whether or not you share the concerns these people have stated, the basic convolutional neural network technology is available in numerous open source projects and as APIs.   A lot of work and experience is needed to configure the layers and the parameters of each layer.   The amount of processing and memory required can be prodigious, and dedicated GPUs are being developed to run CNNs.   Several projects are underway to determine the limits of CNN learning capabilities.  It is exciting.

The technology is already in use at numerous companies and in many applications.

Given the way advances like this spread through the industry, rapid adoption of Deep Learning technology through open source and APIs is likely, and it will make its way into numerous applications and underlying technologies in 2015.

I want to draw a distinction between the various disciplines of AI that have been around for a while and are already being applied in applications, and Deep Learning.   I believe the machine learning examples below will move to Deep Learning before this year is out.

D-Wave and Quantum Computer Technology is advancing rapidly

(Image: the D-Wave 512-qubit computer)

I don’t believe everyone will be buying a D-Wave anytime soon.   The newest version, coming out in March, will have 1,152 qubits and will represent a dramatic advance in quantum computer technology.   This is now at the point where people should become aware of it and consider what impact it will have.

I discuss quantum computers more deeply in an article on Artificial Intelligence here.

Google is currently using the D-Wave for some recognition tasks.  The prognosis is positive, but they aren’t buying hundreds of these yet.   This is a technology that is rapidly evolving.  As of last year the D-Wave was powerful enough to compete with the best conventional processors built today.  That’s quite an achievement: a company in the business of developing a completely new technology has built a processor that is as fast as, or maybe five times faster than, the current state-of-the-art processors available today, although at $10 million it is a bit pricey compared to what is charged for state-of-the-art silicon.   The real point is that if they have accomplished this, and the qubit count scales up at a Moore’s Law rate every year (which is extremely likely), then the D-Wave will quickly (in under 10 years) be faster than all computers on earth today, at least for solving some set of problems.

The set of problems the D-Wave is good at are those that involve optimization: logistics, recognition problems, and many other problems that don’t look like optimization problems but can be translated into them.   The D-Wave leverages the fuzzy quantum fog to solve such problems in roughly the square root of the time a normal processor would need.  As the number of qubits rises, the size of problem that can be solved grows exponentially, eventually surpassing all existing computing capability for these problems.
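To make "translated into optimization problems" concrete: the D-Wave is typically programmed by expressing a problem as a QUBO (quadratic unconstrained binary optimization), i.e. finding the binary vector x that minimizes x^T Q x.  Below is a tiny classical sketch of that formulation; the Q matrix is invented for illustration, and it is solved by brute-force enumeration, whereas the annealer explores the exponentially large space of bit strings in hardware.

```python
# Tiny QUBO sketch: the problem form a quantum annealer accepts.
# Minimize x^T Q x over binary vectors x.  The 4x4 Q matrix here is invented
# purely for illustration; we solve it by brute force, whereas the annealer
# searches the (exponentially large) space of bit strings in hardware.
import itertools
import numpy as np

Q = np.array([[-1.0,  2.0,  0.0,  0.0],
              [ 0.0, -1.0,  2.0,  0.0],
              [ 0.0,  0.0, -1.0,  2.0],
              [ 0.0,  0.0,  0.0, -1.0]])

best_x, best_energy = None, float("inf")
for bits in itertools.product([0, 1], repeat=4):   # 2^4 candidate solutions
    x = np.array(bits)
    energy = x @ Q @ x
    if energy < best_energy:
        best_x, best_energy = x, energy

print(best_x, best_energy)   # the lowest-energy bit string found
```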

The big wins for the D-Wave and quantum computers are in the areas of security, better recognition of voice, visual, and other big data, and smarter query responses.


CNN / Deep Learning is being applied here:

Facebook is using it in its face recognition software

Google apparently can transcribe the street addresses of houses from pictures

Visual search engines, for instance in Google+

Check-reading machines used by banks and the Treasury

IBM’s Watson

CNNs and AI applied to BigData are a big theme.

Conventional AI is being used with BigData, but I expect over the next year we will see more uses of CNNs.    Here are some articles about companies doing this:

Machine Learning and BigData

CEP extension to run machine learning models written in PMML

Software startup Qstream

BigML machine learning platform

Computers, big data, machine learning

Predictive Analytics Vertical Markets

CNN / Deep Learning resources

Caffe

Atari Games Play

Deep Learning Resources
