Understanding how deep learning works
For an introduction to the differences between machine learning and deep learning, check out the first blog in the Deep Learning Series here.
Deep learning structures neurons in layers to create an “artificial neural network” that can be trained to make intelligent decisions. A simple neural network takes an input and passes it through multiple layers of hidden neurons to find higher-level structures. The network then uses this higher-order information to make predictions about the input (see Figure 1 below).
The notion of making predictions based on higher-order structures is important, since this is one of the key things that differentiates deep learning from classical machine learning methods such as a linear SVM. In deep learning, model predictions are computed from complex, non-linear combinations of the input, which allows the model to make predictions for tasks that are difficult even for humans.
Figure 1: Deep Learning Neural Networks
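To make the idea of layers concrete, here is a minimal sketch of the forward pass of a small network like the one in Figure 1, assuming NumPy. The layer sizes are arbitrary and the weights are random, i.e. this is an untrained network, not a working classifier:

```python
import numpy as np

def relu(x):
    # Non-linearity applied at the hidden layer.
    return np.maximum(0.0, x)

def softmax(z):
    # Turns raw output scores into probabilities that sum to 1.
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)

# Hypothetical layer sizes: 4 input features, one hidden layer of
# 8 neurons, 3 output classes.
W1 = rng.standard_normal((8, 4)) * 0.1
b1 = np.zeros(8)
W2 = rng.standard_normal((3, 8)) * 0.1
b2 = np.zeros(3)

def forward(x):
    hidden = relu(W1 @ x + b1)        # hidden layer finds intermediate structure
    return softmax(W2 @ hidden + b2)  # output layer makes the prediction

probs = forward(np.array([0.5, -1.2, 3.0, 0.1]))
print(probs)  # three class probabilities summing to 1
```

Training would adjust `W1`, `b1`, `W2`, and `b2` so that these output probabilities match the desired answers; the next section covers the ways that can be done.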
Deep learning training
Training is the process by which the neural network learns to model relationships in the data. Depending on the task, there are several ways to do this:
Supervised learning – When the task is to learn to map an input to a limited set of concepts, we can employ supervised learning. The process is called supervised learning because the model learns from data that comes in the form of input-output pairs. For example, if the task is to classify animal pictures, then we need to supply the algorithm with training data that consists of images paired with the name of the animal that appears in each image.
Supervised learning is ideal for problems where there is a set of reference points, or ground truth, such as classification problems where you identify the input as belonging to a class of data, e.g. “cats.” It is also ideal for regression problems where, given a particular x value, you determine the expected value of the y variable, such as predicting the cost of a house based on square footage, location, and proximity to freeways.
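The regression case can be sketched in a few lines, assuming NumPy. The house prices below are made-up toy data, and for simplicity the model is a straight line fit by gradient descent, the same learning loop that, scaled up, trains deep networks:

```python
import numpy as np

# Toy supervised data: square footage -> price (hypothetical numbers).
sqft  = np.array([800., 1200., 1500., 2000., 2500.])
price = np.array([160., 230., 290., 390., 480.])  # in $1000s

# Normalize the input so gradient descent behaves well.
x = (sqft - sqft.mean()) / sqft.std()

w, b = 0.0, 0.0  # parameters to learn
lr = 0.1         # learning rate
for _ in range(500):
    pred = w * x + b
    err = pred - price
    # Gradients of mean squared error with respect to w and b.
    w -= lr * (2 * err @ x) / len(x)
    b -= lr * 2 * err.mean()

print(w, b)  # learned slope and intercept
```

Each step nudges `w` and `b` to reduce the gap between predictions and the known ground-truth prices, which is exactly the "learning from labeled pairs" idea described above.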
Unsupervised learning – Here the task is to learn relationships between data points without any explicit labels. In unsupervised learning, the algorithm will try to organize the data on its own by finding the underlying structures or patterns. In deep learning, unsupervised learning techniques can be useful for cases where we have a lot of data but very little of it is labeled.
In this situation it is often beneficial to first ‘pretrain’ the model in an unsupervised way using all of the data, including the examples that have no labels. A popular unsupervised learning method for deep learning is training an autoencoder.
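As a rough sketch of the autoencoder idea, assuming NumPy: the network below compresses 6-dimensional inputs down to a 2-dimensional code and then tries to reconstruct the original input, with no labels involved. The data is synthetic and the model is kept linear for brevity; real autoencoders use non-linear layers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Unlabeled data: 200 points that secretly live on a 2-D pattern
# embedded in 6 dimensions (synthetic, purely illustrative).
latent = rng.standard_normal((200, 2))
mixing = rng.standard_normal((2, 6))
data = latent @ mixing

# Encoder compresses 6 -> 2; decoder reconstructs 2 -> 6.
W_enc = rng.standard_normal((6, 2)) * 0.1
W_dec = rng.standard_normal((2, 6)) * 0.1

lr = 0.01
for _ in range(2000):
    code = data @ W_enc    # encode: input -> compact code
    recon = code @ W_dec   # decode: code -> reconstruction
    err = recon - data
    # Gradients of the mean squared reconstruction error.
    W_dec -= lr * code.T @ err / len(data)
    W_enc -= lr * data.T @ (err @ W_dec.T) / len(data)

mse = np.mean((data @ W_enc @ W_dec - data) ** 2)
print(mse)  # small relative to the data variance
```

The only training signal is how well the network reconstructs its own input, so the 2-dimensional code is forced to capture the underlying structure of the data, which is what makes a pretrained encoder useful for a later supervised task.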
Reinforcement learning – For tasks where we want to build a model that can take actions based on input from the environment, it is often difficult to collect data that comprehensively represents the space of realistic situations and actions. In this setting, we can apply reinforcement learning, where we train a model by essentially letting it interact with the environment and giving it a “reward” or “penalty” based on the outcome of the actions it takes.
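The reward-driven loop can be sketched with tabular Q-learning, a classic reinforcement learning algorithm, on a made-up toy environment (assuming NumPy; the environment, reward, and hyperparameters here are invented for illustration). The agent lives on a row of 5 states, earns a reward only upon reaching the rightmost state, and must discover that fact by interacting with the environment:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy environment: states 0..4 in a row; start at 0, reward at state 4.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)  # step left, step right

Q = np.zeros((N_STATES, len(ACTIONS)))  # learned value of each action in each state
alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration rate

for episode in range(200):
    state = 0
    while state != GOAL:
        # Explore occasionally; otherwise take the best-known action
        # (breaking ties randomly).
        if rng.random() < epsilon:
            a = int(rng.integers(len(ACTIONS)))
        else:
            a = int(rng.choice(np.flatnonzero(Q[state] == Q[state].max())))
        next_state = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else 0.0  # the "reward" signal
        # Q-learning update: move the value estimate toward the observed
        # reward plus the discounted best future value.
        Q[state, a] += alpha * (reward + gamma * Q[next_state].max() - Q[state, a])
        state = next_state

print(np.argmax(Q, axis=1)[:GOAL])  # greedy action per non-goal state
```

No one ever labels the "correct" action; the reward alone shapes the value table until the greedy policy heads right toward the goal. Deep reinforcement learning replaces the table `Q` with a neural network so the same idea scales to enormous state spaces like Go positions.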
Deep learning has existed since the 1970s, but a lack of computational resources meant only very basic neural networks could be built and trained, and the technology floundered for many years. Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton changed this in 2012 when they demonstrated that it was now possible to train deep neural networks efficiently on a large data set by using GPUs. Their deep learning system reduced the error rate in certain image classification tasks by almost 50 percent, a very significant improvement.
Since then, deep learning has begun to make a meaningful impact in many areas of industry. Today, machines trained via deep learning perform image recognition better than humans in some scenarios, from classifying animals to identifying indicators of cancer in blood and tumors in MRI scans. Deep learning applications have also extended to speech recognition, autonomous vehicles and more.
In November 2012, Microsoft chief research officer Rick Rashid demonstrated a system that transcribed his spoken words into English text and then translated them into Chinese. The kicker was that the Chinese translation was then spoken back in his own voice. In 2017, AlphaGo defeated the best human Go player, a feat many experts had previously thought was years away due to the complexity of the game. Google’s AlphaGo learned the game and trained for its Go match using deep reinforcement learning, playing against itself over and over.
Hinton, Yoshua Bengio and Yann LeCun were awarded the coveted 2018 Turing Award by the Association for Computing Machinery for their achievements and subsequent contributions to the development of artificial intelligence and deep learning.
One application that holds tremendous promise for deep learning is cybersecurity. I will explore deep learning in cybersecurity in the next blog in this series.