Human beings and machines may have much in common. Humans have been inventing new machines for as long as anyone can remember, and it is often said that the first machine ever invented was the wheel.
From the dark ages of history to today's computer age, at the core of every machine there is One and Zero.
This is known as the binary system: when you see the letter "A" on your computer screen, there are Ones and Zeroes behind it.
When you use the most complex software or surf to your favourite website, there are Ones and Zeroes there too.
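To make this concrete, here is a tiny Python check of what hides behind the letter "A" (65 is its ASCII/Unicode code, and the `format` call below just asks for its eight binary digits):

```python
# The letter "A" is stored as the number 65, which in binary is 01000001.
letter = "A"
code = ord(letter)          # Unicode/ASCII code point: 65
bits = format(code, "08b")  # eight Ones and Zeroes: "01000001"
print(letter, code, bits)
```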
Spiritual people say that the Universe is made of nothing (zero) and something (one), that the Universe is made mostly of emptiness.
The film A.I. created a myth in people's minds, making them perceive artificial intelligence as some kind of technological "magic". In old sci-fi films we also see machines, those gigantic computers that develop an independent free will and take control over humans. Not such a nice picture, huh?
In this paper I will try to demystify the idea of artificial intelligence by giving simple explanations, with as little mathematics as possible, and putting the simple truth in your hands: all there is behind it is One and Zero.
Back in 1943, McCulloch and Pitts developed a model of artificial neural networks (from now on, ANN) based on their understanding of neurology, and in particular of how neurons work in the human brain: by transmitting electrical impulses through the synapses (connections) between them.
We could say that the neurons in our brain are united through a gigantic number of connections that makes the whole act like an enormous, almost infinite network.
Well, this idea was carried over into software research to create an algorithm, or method, that can learn like the brain does: through connections and the propagation of signals between neurons.
Our brain needs input data, like reading, smelling, or hearing music; the brain then filters it all through electrical impulses and waves.
When one listens to only a few notes, he or she can recognize the melody and tell the song's name before the end of the play.
Here the input is the music notes and the output is the song's name just recognized. Easy!
In the same manner we can design an ANN.
But a single note will not be enough to recognize a whole melody, so the ANN needs more input data to learn from before it is able to give a valid output.
Why does the ANN need layers?
The connections in an ANN are organized in layers, and a layer contains from one to many neurons. For the music problem, the distribution of layers is:
- One input layer containing the information for the ANN to learn, in this case the music notes, where each note is a neuron.
- One to several hidden layers that connect the input information to the output.
- One output layer to give the answers, in this case yes/no: do the music notes correspond to a certain song?
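As a rough sketch, the layer distribution above could be set up in plain Python like this (the sizes 7, 5 and 1 are made-up numbers for the music example, not values from any real network):

```python
import random

random.seed(0)

# Hypothetical sizes: 7 input neurons (one per music note), one
# hidden layer of 5 neurons, and 1 output neuron (yes/no for the song).
layer_sizes = [7, 5, 1]

# Every neuron in a layer is connected to every neuron in the
# previous layer; each connection gets a small random weight.
weights = [
    [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:])
]

print(len(weights))     # 2 weight matrices: input->hidden and hidden->output
print(len(weights[0]))  # 5 hidden neurons, each with 7 incoming weights
```

The weights start out random precisely because the net has not learned anything yet; learning is the process of adjusting them.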
How does the ANN learn?
The ANN learns by iterations, or repetitions, and these iterations are called epochs.
So for each learning epoch the ANN will:
- Feed in the input data.
- Propagate the signal through the layers.
- Give an output.
Well then, if we don't tell the net when to stop, the loop can go on forever. This flow needs to be made more elaborate by setting stopping conditions: some point at which it is certain that the net has learned.
As in the biological model, the neurons transmit the electrical impulses through layers of neurons in the brain until there is a desired output.
The best-known ANN model is called multilayer backpropagation, or the multilayer perceptron, and a perceptron is simply a neuron that learns.
Let's expand the learning model a little more by adding a stopping condition called the minimum desired error (the ANN learns from its errors, just like us! Well, ahem, sometimes..):
1. Feed in the input data.
2. Propagate the signal through the layers, then propagate the error backwards from the last (output) layer to the first hidden layer. This is backpropagation.
3. Calculate the current error.
4. Ask: is the current error smaller than the minimum desired error? Then give the output and EXIT.
5. If the current error is bigger, go back to 1.
This is still a very simple model, as one could ask: what if the current error is never smaller than the minimum desired error? Then we can add a second stopping condition: the maximum number of iterations (epochs) allowed.
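Putting the loop and both stopping conditions together, a minimal sketch might look like this. The single perceptron, the OR-function data and the learning rate are illustrative assumptions, not the full multilayer backpropagation model:

```python
import random

random.seed(1)

# Toy data: teach a single perceptron the logical OR function.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

weights = [random.uniform(-1, 1) for _ in range(2)]
bias = random.uniform(-1, 1)
learning_rate = 0.1
min_desired_error = 0.01   # first stopping condition
max_epochs = 1000          # second stopping condition

for epoch in range(max_epochs):
    current_error = 0.0
    for inputs, target in data:
        # 1-2. Feed in the input data and propagate the signal.
        total = sum(w * x for w, x in zip(weights, inputs)) + bias
        output = 1 if total > 0 else 0
        # 3. Calculate the current error and learn from it.
        error = target - output
        current_error += error ** 2
        for i, x in enumerate(inputs):
            weights[i] += learning_rate * error * x
        bias += learning_rate * error
    # 4. Is the error small enough? Then stop.
    if current_error < min_desired_error:
        break

print(epoch, current_error)
```

Because OR is an easy problem, the minimum-desired-error condition fires long before the epoch limit; the epoch limit is the safety net for problems the net cannot learn.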
In step two (backpropagation) some necessary mathematical calculations are done to find out the current error.
These calculations are based on the connections between the layers. I am not going to go deep into the details of the formulas; I'm just going to give the idea behind them:
My Current Layer's Data = My Previous Layer's Calculations.
And the word Previous is very important here, because it expresses the way the layers are connected to each other.
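The idea that each layer's data comes from the previous layer's calculations can be sketched as a simple forward pass. The sigmoid activation, the 3-2-1 layer sizes and the random weights are assumptions for illustration only:

```python
import math
import random

random.seed(2)

def sigmoid(x):
    """Squash any number into the Zero-to-One range."""
    return 1 / (1 + math.exp(-x))

def forward(layer_input, weight_matrices):
    """Compute each layer's data from the previous layer's output."""
    current = layer_input
    for weights in weight_matrices:
        current = [sigmoid(sum(w * x for w, x in zip(row, current)))
                   for row in weights]
    return current

# A hypothetical 3-2-1 network with random connection weights.
w1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w2 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(1)]
print(forward([1.0, 0.0, 1.0], [w1, w2]))
```

Note how `current` is overwritten at every step: the only thing a layer ever sees is what the previous layer computed.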
Conclusion... and what's really inside a neuron?
So far we've talked about neurons, networks, layers, input and output data, backpropagation and epochs.
All these words are the usual terminology in ANN papers, but this paper is different, and I want to talk about what is inside a neuron.
Inside a neuron there is One or Zero, and once the network has learned, the output solution is given as One (true) or Zero (false). Of course there are ANNs that work with real numbers like 1.5672, but in most cases the input data is scaled close to the Zero-to-One range to make sure the best performance is achieved.
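A minimal sketch of such scaling, assuming simple min-max normalization (one common way to squeeze real numbers into the Zero-to-One range):

```python
def scale_to_unit(values):
    """Min-max scale real-valued inputs into the Zero-to-One range.
    Assumes the values are not all identical."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

print(scale_to_unit([1.5672, 3.0, 10.0]))  # every value now lies between 0 and 1
```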
After these very simple explanations, Artificial Intelligence is in your hands now and you can walk your own way.
For some A.I. programming scripts you can visit:
For the rest of your research you can do some Wiki work.
Written by Maria M. Olivares. (Copyright 2010) The article must be reproduced in full with author name and links retained.