Deep Learning Neural Networks

What's all the frenzy around "neural networks"?

I keep seeing news about something awesome that a computer has been "trained" to do with "deep learning". These news spiels refer to something called "Q Learning", "Deep Learning Neural Networks", or "Hierarchical Learning".

Although these sound like big, intimidating terms, the idea behind how they work is pretty simple. In this article, we cover some of the basics of this genre of algorithm.

The Idea of Neural Networks

In biology, creatures with brains have a "neural network", where individual neurons are essentially wired together. A neuron typically takes inputs from other, connected neurons and, based on the strength of those connections (how important each input is), produces an output of its own.

With many different neurons wired together, a brain can produce a useful output (think doing math, for instance).
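The "weighted inputs produce an output" idea above can be sketched as a few lines of Python. This is a minimal illustration, not any particular library's API: the sigmoid squashing function is a common (assumed) choice for turning the weighted sum into a bounded output.

```python
import math

def neuron_output(inputs, weights, bias):
    """Weighted sum of inputs plus a bias, squashed into the 0-1
    range by a sigmoid, like a neuron "firing" more or less strongly."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# Two inputs; the second connection is much "stronger" (weight 2.0),
# so the second input matters far more to the neuron's output.
print(neuron_output([1.0, 1.0], [0.1, 2.0], bias=-1.0))  # about 0.75
```

The weights here play the role of connection strength: crank one up and that input dominates the neuron's decision.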

Awesome, But That's Biology

Artificial neural networks are based on the same principle as the biology they mimic. These programs have layers of "neurons" that are interconnected, as shown in the simple example below.

Neural Network Example

The "input" neurons are essentially the network's view of the world around it; they are fed data about the current situation. If this network were meant to play the game "2048", it would likely have one input neuron for every cell of the game board, each fed the value of the item in its cell.

The "hidden" neurons (which could be multiple layers of neurons) are the network's decision-making machinery, where it interprets the input and (hopefully) understands it, or strategizes based on it.

The "output" neurons are the decision that the network makes. In the same example as before (2048), this could be which direction to move the board.
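Putting the three layers together, a forward pass for the 2048 example might look like the sketch below. The layer sizes (16 inputs for the board cells, 8 hidden neurons, 4 outputs for the four move directions) and the random starting weights are assumptions for illustration, not a tuned design.

```python
import math
import random

random.seed(0)

def layer(inputs, weights, biases):
    """One fully connected layer: each output neuron takes a weighted
    sum of every input neuron, plus a bias, through a sigmoid."""
    return [1 / (1 + math.exp(-(sum(x * w for x, w in zip(inputs, row)) + b)))
            for row, b in zip(weights, biases)]

# 16 input neurons (one per board cell), 8 hidden, 4 outputs (one per move).
n_in, n_hidden, n_out = 16, 8, 4
w1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)]
b1 = [0.0] * n_hidden
w2 = [[random.uniform(-1, 1) for _ in range(n_hidden)] for _ in range(n_out)]
b2 = [0.0] * n_out

board = [random.choice([0, 2, 4]) for _ in range(16)]  # a toy 2048 board
hidden = layer(board, w1, b1)
outputs = layer(hidden, w2, b2)
moves = ["up", "down", "left", "right"]
print(moves[outputs.index(max(outputs))])  # the network's chosen move
```

With random weights the chosen move is effectively random; training is what makes these weights meaningful.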

So how do these get connected?

Although the example image above shows the neurons already connected, a neural network doesn't start out that way. The network needs to be "trained", which is just the process of forming and tuning those connections. One way this can happen is through pretty standard evolution, an approach known as neuroevolution.

This process has two major parts to it. There's the neural network, then there's a scoring and evolution system.

The neural network starts with no connections, and in the first generation, some "mutations" are made. These mutations amount to creating new connections, removing existing ones, or changing how strongly a particular connection affects a neuron.
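The three kinds of mutation can be sketched as one function. This is a toy representation, assuming the network is stored as a dict mapping `(from_neuron, to_neuron)` pairs to connection strengths; the neuron counts (16 and 4) and mutation probabilities are illustrative, not from any real system.

```python
import random

def mutate(connections, rate=0.1, scale=0.5):
    """Return a mutated copy of a network, represented here as a dict
    mapping (from_neuron, to_neuron) -> connection strength."""
    new = dict(connections)
    for key in list(new):
        roll = random.random()
        if roll < rate:                 # remove an existing connection
            del new[key]
        elif roll < 3 * rate:           # nudge its strength up or down
            new[key] += random.gauss(0, scale)
    if random.random() < rate:          # occasionally create a new connection
        new[(random.randrange(16), random.randrange(4))] = random.gauss(0, 1)
    return new

weights = {(0, 0): 1.0, (1, 2): -0.5, (3, 1): 0.8}
print(mutate(weights))
```

Returning a copy rather than mutating in place matters here: the parent network must survive unchanged so the best performers can be mutated again next generation.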

How does this get better than chance?

So what happens in the first generation is that each instance of the neural network (many are tested at once) is fed inputs and makes its decisions. Every instance's output is scored somehow (the scoring works differently depending on the goal), and the best few instances are kept, while the rest are discarded.

At this point, the first generation is done, and we can apply mutations to the best neural networks, and run the process again.

We continue doing this, and after numerous generations (this can mean thousands of generations), the system somehow knows how to interpret its inputs and make decisions much better than chance.
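The whole loop of score, keep the best, mutate, and repeat can be shown on a deliberately tiny stand-in problem. Here a "network" is just a list of three weights, and the fitness function (closeness to a made-up target) replaces the game score a real system would use; every number below is an assumption chosen for the demo.

```python
import random

random.seed(1)

TARGET = [0.5, -1.2, 3.0]  # stands in for "ideal" connection strengths

def fitness(genome):
    """Higher is better. In a real system this would be a game score,
    distance travelled, and so on, not distance to a known answer."""
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome):
    return [g + random.gauss(0, 0.1) for g in genome]

# A population of random "networks" (here, just lists of weights).
population = [[random.uniform(-2, 2) for _ in range(3)] for _ in range(20)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]                      # keep the best few
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(15)]   # refill by mutation

best = max(population, key=fitness)
print(best)  # should land close to TARGET
```

Note that nothing in the loop knows *how* a good genome works; it only keeps whatever scores well, which is exactly the "somehow" the next section talks about.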

"Somehow knows how to interpret its inputs"

This phrase is exactly why deep learning neural networks are so awesome. The programmer knows what the inputs mean and knows what the outputs mean. However, the programmer has no real idea how the hidden layers work, as they had no direct part in making them work.

This is awesome: the programmer made the program, it works pretty well, and they understand how everything talks to everything else, yet they have no idea what any of it means.

An Example

Although I'm sure an example would be nice, it's not here yet. I will be going through an example soon, and will update this post with a link to the follow up article.
