
What is a Neural Network?

To get started, I'll explain a type of artificial neuron called a perceptron.

Perceptrons

Perceptrons were developed in the 1950s and 1960s by the scientist Frank Rosenblatt, inspired by earlier work by Warren McCulloch and Walter Pitts. Today, it's more common to use other models of artificial neurons - here, as in much modern work on neural networks, the main neuron model used is the sigmoid neuron. We'll get to sigmoid neurons shortly. But to understand why sigmoid neurons are defined the way they are, it's worth taking the time to first understand perceptrons.

So how do perceptrons work?

A perceptron takes several binary inputs, x1,x2,…, and produces a single binary output:
[Figure: a perceptron with three inputs x1, x2, x3 and a single output]

In the example shown the perceptron has three inputs, x1, x2, x3. In general it could have more or fewer inputs. Rosenblatt proposed a simple rule to compute the output.

He introduced weights, w1,w2,…, real numbers expressing the importance of the respective inputs to the output.

The neuron's output, 0 or 1, is determined by whether the weighted sum \sum_j w_j x_j is less than or greater than some threshold value.

Just like the weights, the threshold is a real number which is a parameter of the neuron.

To put it in more precise algebraic terms:

\mbox{output} = \begin{cases} 0 & \mbox{if } \sum_j w_j x_j \leq \mbox{threshold} \\ 1 & \mbox{if } \sum_j w_j x_j > \mbox{threshold} \end{cases}

That's all there is to how a perceptron works!
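The rule above can be sketched in a few lines of Python. The weights, threshold, and inputs here are made-up illustrative values, not anything prescribed by the text:

```python
def perceptron(inputs, weights, threshold):
    """Return 1 if the weighted sum of inputs exceeds the threshold, else 0."""
    weighted_sum = sum(w * x for w, x in zip(weights, inputs))
    return 1 if weighted_sum > threshold else 0

# Three binary inputs x1, x2, x3 with example weights and threshold:
print(perceptron([1, 0, 1], weights=[0.6, 0.2, 0.3], threshold=0.5))  # -> 1
print(perceptron([0, 1, 0], weights=[0.6, 0.2, 0.3], threshold=0.5))  # -> 0
```

Raising the threshold makes the neuron harder to activate; increasing a weight makes the corresponding input count for more.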

Obviously, the perceptron isn't a complete model of human decision-making! But what the example illustrates is how a perceptron can weigh up different kinds of evidence in order to make decisions. And it should seem plausible that a complex network of perceptrons could make quite subtle decisions:

[Figure: a multi-layer network of perceptrons]
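To make the idea of a network of perceptrons concrete, here is a hedged sketch of feeding one layer's outputs into the next. All weights and thresholds are illustrative values chosen for the example, not taken from the text:

```python
def perceptron(inputs, weights, threshold):
    """Single perceptron: 1 if the weighted sum exceeds the threshold, else 0."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) > threshold else 0

def layer(inputs, neurons):
    """Apply a layer of perceptrons; neurons is a list of (weights, threshold) pairs."""
    return [perceptron(inputs, w, t) for w, t in neurons]

x = [1, 0, 1]  # three binary inputs

# First layer: two perceptrons, each seeing all three inputs.
hidden = layer(x, [([0.7, 0.1, 0.4], 0.5), ([0.2, 0.9, 0.1], 0.5)])

# Second layer: one perceptron combining the first layer's outputs.
output = layer(hidden, [([0.6, 0.6], 0.5)])
print(hidden, output)  # -> [1, 0] [1]
```

Each later layer makes its decision from the more abstract binary judgements of the layer before it, which is what lets a deeper network weigh up evidence in subtler ways.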

The source of this article is:

http://neuralnetworksanddeeplearning.com/chap1.html

Michael A. Nielsen, "Neural Networks and Deep Learning", Determination Press, 2015
