

A Second Look by Miohflet

Yes, you are currently reading the follow-up on networks of neurons and your neural net is making you do it! Last February, we discussed a common type of neural net that may very well emulate our own, human neural nets. This time, I'd like to look at another type of common neural net whose structure is also very close to that of our own and give you some ideas to chew on. What could be a better way to start our follow-up than with our very own recap?

Our last neural net was a small chain of switches, each with its own threshold value. When we applied a voltage to the first "neuron," or switch, it would block any input below its threshold while allowing any voltage at or above that magic value to pass. Link a large number of these neurons together (or even a small number, as in our example) and you have a neural net. As most of you probably remember, a neural net is a form of AI (Artificial Intelligence) based upon the modular structure of the human brain.

The Human Brain Reloaded

In the minds of teachers and students alike, the brain can take on many forms, because its precise, low-level structure is still poorly understood. We don't know for sure that our brain is based on many different threshold values, yet we also don't know whether the new structure I'm about to describe holds any truer. With that in mind, let me delve into this new and more practical structure that we'll call NeuralNetR.

In NeuralNetR, there is no range of voltages among signals; each and every signal is either -50 mV or 0 mV. This setup very closely emulates binary and could, in fact, be considered identical to it. I know what's on your mind now: why is -50 mV used for the on state? Well, I'll rebut that with this: why is 5 V used for most common integrated circuits?

The voltage levels of any sort of logic depend almost entirely upon its implementation. Neurons are switched on and off by chemical reactions that occur along their dendrites. In review, dendrites are the neuron's receptors, its "input pins." As long as the logic understands its own implementation, in theory, everything works flawlessly.

So, how does NeuralNetR actually implement logic if all of its neurons are identical replicas? NeuralNetR is smarter than the average neural net: rather than complicating matters by modifying each threshold value, it modifies its connection pattern. NeuralNetR uses its neurons in much the same way as OR gates.

As we astute electronics hobbyists know, an OR gate will forward an input signal as long as it meets the internal threshold value specified on the gate's data sheet or, in the case of our brain, as long as it meets the neuron's -50 mV threshold.
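To make that concrete, here is a minimal C sketch of a single NeuralNetR-style neuron treated as an OR gate. Only the -50 mV and 0 mV signal levels come from the description above; the function name, the array of dendrite voltages, and the exact comparison are my own illustration.

#include <stdbool.h>
#include <stdio.h>

#define SIGNAL_OFF_MV    0    /* resting signal                     */
#define SIGNAL_ON_MV   -50    /* "on" signal, in millivolts         */

/* A neuron treated as an OR gate: it fires if ANY of its dendrite
   inputs has reached the -50 mV threshold.                          */
bool neuron_fires(const int dendrites_mv[], int count)
{
    for (int i = 0; i < count; i++) {
        if (dendrites_mv[i] <= SIGNAL_ON_MV)   /* -50 mV or lower counts as on */
            return true;
    }
    return false;
}

int main(void)
{
    int dendrites[3] = { SIGNAL_OFF_MV, SIGNAL_ON_MV, SIGNAL_OFF_MV };
    int axon_mv = neuron_fires(dendrites, 3) ? SIGNAL_ON_MV : SIGNAL_OFF_MV;
    printf("Axon output: %d mV\n", axon_mv);   /* one active input is enough */
    return 0;
}

(Since the signal is negative, I've assumed "at or below -50 mV" is the sensible way to read the threshold in software.)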

Using a neural structure like this, your brain converts one or more input signals into one or more output signals of the same magnitude (voltage). This setup allows multiple inputs to trigger one output and one input to trigger multiple outputs (in Figure 1, the neurons on the left are the inputs and those on the right are the outputs).

This proves to be extremely useful for forwarding our body's inputs to appropriate analyzers and even to the appropriate muscles without running a separate neuron for each input. Hence, the name neural network — just as multiple computers run and communicate on a computer network, multiple input and output neurons run and communicate in a neural network. If our nervous signals didn't utilize common paths, we'd all just be a big clump of tangled neurons with no room for any other essential organs in our comparatively small bodies.
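Here is a similarly rough C sketch of that shared-path idea. The three-input, two-output wiring table is invented for illustration rather than copied from Figure 1; the point is simply that the connection pattern, not the neurons themselves, carries the logic.

#include <stdbool.h>
#include <stdio.h>

#define NUM_INPUTS   3
#define NUM_OUTPUTS  2

/* wired[o][i] is true when input neuron i connects to output neuron o.
   Input 0 fans out to both outputs; output 1 collects all three inputs. */
static const bool wired[NUM_OUTPUTS][NUM_INPUTS] = {
    { true, true, false },
    { true, true, true  },
};

int main(void)
{
    bool input_on[NUM_INPUTS] = { true, false, false };   /* only input 0 is firing */

    for (int o = 0; o < NUM_OUTPUTS; o++) {
        bool output_on = false;
        for (int i = 0; i < NUM_INPUTS; i++) {
            if (wired[o][i] && input_on[i])
                output_on = true;          /* OR together every connected input */
        }
        printf("Output neuron %d: %s\n", o, output_on ? "firing" : "quiet");
    }
    return 0;
}

With only input 0 firing, both output neurons go active, which is the one-input-to-many-outputs case; switch the other inputs on and you get the many-inputs-to-one-output case as well.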

Neurons At Your Defense

As you might have guessed already, your neural net's main purpose is to logically forward all incoming signals to a higher authority that contains an even more advanced structure of neural nets. This higher authority is your cerebrum, the consciousness of your brain, and even the best scientists still have no idea how this section of the brain allows a human to think and analyze such intricate data. Even you don't know exactly what makes up your own mind. The closest hobbyists can get to replicating the decision-making ability of a person is through the development of neural nets, many of which are very similar to the two I've just described.

It may come as a relief to hear that not all nervous signals must travel through your brain. In fact, the signal paths of the human nervous system that we're most interested in match the pattern of NeuralNetR almost exactly; you may know them as reflexes. Reflexes are implemented within your body as reflex arcs. A reflex arc is a path through which nervous signals can travel directly from input to output. Reflexes are one of many safety mechanisms in our body; most of the others are more complicated, unrelated to neural nets, and won't be covered here.

Reflexes allow your body to remove itself from danger without the large overhead time your brain needs to decide what should be done. Let's walk through a scenario to show the usefulness and function of reflex arcs: you turn on your stove and, while it's heating up, you reach for a can and accidentally touch your hand to the element. Your hand jumps off of the element before you've consciously registered the heat, and you've just witnessed a reflex in action.

Again, you're thinking, "This is really great and all, but how does it help me build a neural net?" Well, I'm glad you asked, because, after neurons themselves, reflexes are the most essential concept in developing a hardware-based neural net. We have to admit that if we're reading an article in SERVO Magazine about logic gate-based neural nets, we're probably interested in building one for a robotic project. In any type of robot, the primary concern is to act quickly and correctly upon a certain input, a concept strikingly similar to your reflexes.

What Does It All Mean?

Before the age of the microcontroller, most robots were based on simple neural nets, similar in concept to NeuralNetR but even simpler. After the advent of the microcontroller, it became more practical to code software neural nets into microcontrollers using simple IF/THEN commands.

Just as NeuralNetR would output a few small voltages based on an input or two by using some logic chips, microcontrollers would do this with a single chip and two lines of code. As you might expect, the use of static neural nets decreased as dynamic code could be used. Lately, however — out of enthusiasm to create a machine that emulates the human mind as closely as possible — neural nets have posed an interesting challenge and gained greater popularity.
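For comparison, here is roughly what that looks like in C. The sensor and motor routines are stand-ins I've made up so the sketch runs on its own; on real hardware they would be whatever pin-access calls your particular microcontroller's toolchain provides. The heart of it is still just one IF/THEN.

#include <stdbool.h>
#include <stdio.h>

/* Stand-ins for digital I/O so the sketch is self-contained; swap these
   for your microcontroller's own pin-read and pin-write routines.       */
static bool sensor_samples[] = { false, false, true, true, false };
static bool read_sensor(int t)        { return sensor_samples[t]; }
static void drive_motor(int t, bool on)
{
    printf("t=%d  motor %s\n", t, on ? "ON" : "OFF");
}

int main(void)
{
    for (int t = 0; t < 5; t++) {
        if (read_sensor(t))          /* IF the sensor is triggered... */
            drive_motor(t, true);    /* ...THEN run the motor         */
        else
            drive_motor(t, false);
    }
    return 0;
}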

With a few OR gates, we can wire a digital sensor input to a digital motor output, along with many other I/O devices, and have only the appropriate motors run when the appropriate sensors receive an appropriate signal. Although this approach may not be as efficient or practical as a microcontroller, it correctly emulates the reflex arcs of the body's neural net.

One major advantage that hardware neural nets have over their microcontroller counterparts is reaction speed. Just as the reflexes in your neural net respond faster than your brain, your electronic neural net responds faster than a microcontroller. This makes NeuralNetR great for any project that requires lightning-fast reaction time.

Despite their differences, neural nets and microcontrollers don't have to be mutually exclusive. Just as your body combines your cerebrum with your neural net, you can combine a microcontroller with a NeuralNetR. Combinations like this can lead to very interesting results that even more closely mimic the human nervous system.
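As a final, very loose sketch of that combination, assume the bump sensor below feeds both a hardware OR-gate reflex, which reverses the motors instantly with no software involved, and a microcontroller input that handles the slower thinking. Every name and number here is hypothetical; the point is only the division of labor.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical split: the OR-gate reflex layer reverses the motors the
   instant the bump sensor fires, while the microcontroller watches the
   same line and makes the slower, cerebrum-style decisions, here just
   counting bumps and easing off the speed.                              */
static bool bump_samples[] = { false, true, false, true, true, false };

int main(void)
{
    int speed = 100;   /* drive speed, percent */
    int bumps = 0;

    for (int t = 0; t < 6; t++) {
        bool bumped = bump_samples[t];   /* same signal the reflex layer sees */

        if (bumped) {
            bumps++;                     /* the reflex already reversed us */
            if (bumps >= 2 && speed > 50)
                speed -= 25;             /* decide to slow down and be careful */
        }
        printf("t=%d  bumped=%d  speed=%d%%\n", t, bumped, speed);
    }
    return 0;
}

However you split the work, the reflex layer keeps the robot out of trouble while the smarter code takes its time.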
