I've heard of them before, but I've never actually found substantial information on what exactly they are.

My guess is that they're systems that track back events to determine the sequence of events that led up to a certain result, so if they want to replicate the same result they cause that sequence of events to occur again.

Am I right?


I've heard many people either supporting them emphatically (Derek Smart, creator of the neural-net-based Battlecruiser: Millennium) or rejecting them as completely inaccurate (Dr. Wallace, creator of the rule-based ALICE chatbot).

(As far as chatbots go, ALICE seems surprisingly limited. I'm shocked that people claim it is so intelligent.)
Spuzzum wrote:
My guess is that they're systems that track back events to determine the sequence of events that led up to a certain result, so if they want to replicate the same result they cause that sequence of events to occur again.

Not really... I'm not the best person to describe it accurately, but I can try.

The idea with a neural net is to feed it a bunch of data and teach it some things, then try to get it to use what it knows to make decisions. Often you don't tell it anything about what led to a situation -- you just show it, say, 5,000 pictures of situations, and you point to each and say "This one is good, this one is bad", and you hope the net "learns" from that and makes useful associations. Frequently you don't actually know what it's learning; you just hope it's useful.
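To make that concrete, here's a rough Python sketch of that "show it labeled examples and let it adjust itself" loop, using a simple perceptron-style weight update. The feature values and labels are made up, and real networks are fancier than this, but the shape of the process is the same:

```python
# Minimal sketch of "show it labeled examples and let it adjust its weights".
# The features and labels here are invented purely for illustration.

def train(examples, passes=20, learning_rate=0.1):
    """examples: list of (feature_vector, label), where label 1 means "good", 0 means "bad"."""
    n = len(examples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(passes):
        for features, label in examples:
            # Weighted sum plus bias, turned into a 0/1 guess by a threshold.
            guess = 1 if sum(w * x for w, x in zip(weights, features)) + bias > 0 else 0
            error = label - guess
            # Nudge the weights toward the right answer (perceptron rule).
            weights = [w + learning_rate * error * x for w, x in zip(weights, features)]
            bias += learning_rate * error
    return weights, bias

def predict(weights, bias, features):
    return 1 if sum(w * x for w, x in zip(weights, features)) + bias > 0 else 0

# Tiny made-up "pictures": each is just two numbers describing the scene.
training_set = [([0.9, 0.1], 1), ([0.8, 0.3], 1), ([0.2, 0.7], 0), ([0.1, 0.9], 0)]
w, b = train(training_set)
print(predict(w, b, [0.85, 0.2]))  # hopefully 1 ("good")
```

Notice that nothing in there says why an example is good or bad; the weights just end up encoding whatever pattern happens to separate the examples, which is exactly how the tank story below goes wrong.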

For example, one early military application was to get a neural net to recognize images that contained camouflaged tanks, so that a computer could scan through tens of thousands of satellite images and find the ones that had tanks in them. So they showed it dozens of pictures of camouflaged tanks and dozens of pictures without tanks, and told the net that the tank ones were interesting.

And what do you know, it worked! They could then show it both kinds of pictures and it could identify which had tanks...

Until one day it gave completely random results. After analyzing the training input they had given it, the programmers realized what had happened: The test pictures with camouflaged tanks had been taken on a cloudy day, and the pictures without tanks had been taken on a sunny day.

They had accidentally taught the net to choose cloudy day pictures, and it didn't give a damn about the tanks.

Oops.
In response to Deadron
Hah. That's military genius for you. We got all of this neat computer equipment, and it tells you that it likes the cloudy pictures...
In response to Deadron
I think I see... in a way, this too is a finite state machine. It analyses dozens of variables and then picks a result. The only difference is that it can evaluate different variables as it sees fit, or ignore certain other variables.
Am I right?

Well, my very limited layman's knowledge goes something like this:

The name "neural network" comes from the network's use of "neurons". Each neuron is a sensing device that can be assigned a certain threshold of sensitivity, and if its threshold is passed, it will interact with other neurons nearby. A cluster of these neurons is wired up to a number of inputs and outputs. In "training mode", you give the network both an input and an expected output. The input might be a digital picture of Spuzzum, and the output might be the text string "Spuzzum." After the network has been trained on all the relevant data, you switch it from training mode to, uh, non-training mode, and in theory it should be able to make educated guesses about new inputs based on what it has learned.
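For what it's worth, here's a rough Python sketch of that "cluster of threshold neurons wired to inputs and outputs" picture. The weights and thresholds are hand-picked numbers just for illustration; in a real network, the training mode described above is what would set them:

```python
# A tiny hand-wired cluster of threshold neurons: the inputs feed a small
# hidden layer, and the hidden layer feeds one output neuron.
# Every number below is invented for illustration.

def neuron(inputs, weights, threshold):
    # The neuron fires (returns 1) if the weighted sum of its inputs
    # reaches its threshold of sensitivity.
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def run_network(inputs):
    hidden = [
        neuron(inputs, [0.6, 0.4, 0.0], threshold=0.5),
        neuron(inputs, [0.0, 0.3, 0.9], threshold=0.7),
    ]
    # The output neuron listens to the hidden neurons, not to the raw inputs.
    return neuron(hidden, [1.0, 1.0], threshold=1.5)

print(run_network([1.0, 0.2, 0.1]))  # 0 -- only one hidden neuron fires
print(run_network([1.0, 0.9, 0.8]))  # 1 -- both hidden neurons fire
```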

My own, admittedly not-well-informed, opinion of neural networks is much like my opinion of biodiesel: I think it's a great idea with a lot of potential, but I haven't seen any "killer apps" developed using the technology yet.
I will give it a try, Spuzzum.

Neural nets have various inputs. These would be things like sight, hearing, you name it. Actually, they would more than likely have duplicates of each: 2 eyes, 2 ears, as well as inputs for things like leg joint positions (hip, knee, ankle, foot, toes).

Next, each input or stimulus has a multiplier that modifies the incoming data. You could set this to anything initially; it represents the importance of that data.

All like stimuli are added together and compared to a threshold. So all your left-leg readings are added and compared to a number, which acts like a gate. If you equal or exceed this number, the neuron fires.

Most times, this neuron's output is used as data for a second layer of neurons. At the end of the line, you feed back and modify the thresholds.

An example:

A newborn creature is hungry, but cannot hunt or even walk. It has some reserve energy. Right now it has three neurons that it can read accurately: hunger, smell, and remaining energy.

The hunger neuron is beeping, and the creature is a predator, so the multiplier is high. So you have 1 (hungry) times 1.2 (predator value), plus smell 0.15 (no food here) times 1 (average), which comes to 1.35, compared to hunger's threshold of 1.5 (eat). He wants to eat, but no food is here: the neuron does not fire, and the eat neuron does not receive a signal to begin the eating process.

However, like any starving baby, he can do two things: yell and kick his legs. His yell multiplier should be low, though, because babies in the wild are game for others, so this neuron fails to fire as well. This leaves the leg kicking. This fires easily, and feeds back to modify the hunger neuron's threshold. However, the smell rises! Smell is now 0.2, so the baby rewards any neurons that fired by dropping their thresholds, thereby making movement more likely. The baby learns to move its legs in a way that causes hunger to decrease.
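A quick Python sketch of the numbers in that example, plus the reward step at the end (the values come straight from the description above; the amount the threshold drops by is something I made up):

```python
# Sketch of the hungry-newborn example: weighted stimuli are summed and
# compared to a threshold, and a neuron that fires gets its threshold
# lowered as a reward. Values are from the example; the reward size is made up.

def fires(stimuli, weights, threshold):
    # A neuron fires if the weighted sum of its stimuli reaches its threshold.
    return sum(s * w for s, w in zip(stimuli, weights)) >= threshold

hunger, smell = 1.0, 0.15          # hungry, but no food nearby
eat_threshold = 1.5
kick_threshold = 0.5               # kicking fires easily (invented number)

# Eat neuron: 1.0 * 1.2 + 0.15 * 1.0 = 1.35, which is below 1.5, so no firing.
print(fires([hunger, smell], [1.2, 1.0], eat_threshold))   # False

# The kick neuron fires...
if fires([hunger], [1.0], kick_threshold):
    smell = 0.2                    # ...and the smell rises,
    kick_threshold -= 0.1          # so reward the neuron that fired by
                                   # dropping its threshold.

print(kick_threshold)  # 0.4 -- kicking is now even easier to trigger
```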

I am very tired. I hope I make some sense. I will draw out a small neural net and email it to you.
In response to Ernie Dirt
The multipliers on each neuron are the key: these weights determine which range of neurons will become active, and the weights change as learning progresses. Your brain is a neural network. Your thought is a function of the relationships between neurons in your brain. When a connection between two neurons is used, the synaptic junction between the two is reinforced, and surrounding connections may be diminished somewhat.

You can see, then, why the notion of a neural network is good (a direct electrical connection between the process of sensory input and the output; programs can do simple stuff like linking neurons to some basic muscles, and if the signal makes it to them, they contract), but it is often difficult to get really great results that are effective for our purposes. A neural network such as our brain has billions of neurons processing information in parallel (which is how we can deal with information that would normally require a huge serial runtime for searching and comparison). To accurately simulate this would take some hardcore computation speed.

However, this is only true for AI neural networks; they are built into systems that function traditionally. The possibilities for creating a hardware neural network (a series of physical nodes that are connected) are being explored. I read about one guy who is making a neural network with about one million neurons this way (good luck; he's doing each connection by hand). He wants to end up with something that has about the intelligence of a cat.
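A loose Python sketch of that "used connections get reinforced, surrounding ones diminish" idea, with invented connection strengths and adjustment sizes (roughly a Hebbian-style update, nothing rigorous):

```python
# Crude sketch: use a connection and it gets stronger; its neighbours fade.
# The strengths and the boost/decay amounts are invented numbers.

connections = {"A-B": 0.5, "A-C": 0.5, "A-D": 0.5}

def use_connection(name, boost=0.1, decay=0.02):
    for key in connections:
        if key == name:
            connections[key] += boost                              # reinforce the used junction
        else:
            connections[key] = max(0.0, connections[key] - decay)  # neighbours diminish a little

for _ in range(5):
    use_connection("A-B")   # A and B keep firing together

print(connections)  # A-B has grown to 1.0; A-C and A-D have faded to 0.4
```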
If you actually want to construct a simple NN and see code examples of someone else's, I found a tutorial done by a programmer named Matt Buckland at: http://www.gameai.com/buckland.html
I highly suggest you check this tutorial out; the programmer breaks it down into really simple steps to try to give you his own insight, and he has code examples and an executable that he created using a neural net.
Probably the most key thing you need to know about neural networks is that design is everything. They tend to be best for specific applications they're designed for. Typically, they're given a set of training data (similar to real-world data) in which they have input to examine, and a known result--the result could be something simple like yes/no or a choice of categories, for example. The neural network has to learn this training set, which basically means it adjusts weights to try to reproduce the intended output from the input, and then it's hoped that it will be able to use that technique on a broader set of information. Usually this is best in cases where a person isn't even sure themselves what decision process they'd use, so it can't be translated to simple or fuzzy logic.

To this end, there are a few key questions to ask:

  • Is the decision binary, or can it be weighted somehow? Is it enough to say "This is probably X, but there's a small chance it may be Y"?
  • What kinds of data would make a good training set, to make sure the network learns the right rules instead of learning irrelevant stuff or ignoring important information?
  • Are there other ways I can present the information that will help the analysis? (Example: If the optimal solution depends on sin(A) but the network is only given A, a simple weighted network will only approximate this at best; precalculating sin(A) for the input and putting that in as B, an extra input variable, might prove invaluable.)
  • Can the neuron structure be changed in a way that makes it more flexible or more adept at this type of calculation? (Imagine if all neurons with A, B, and C as inputs were capable of also pretending there are hidden inputs AB, AC, and BC which multiply the values; a simple network can do basic addition and that's it, but multiplication could be quite handy. This is similar to what I mentioned above with sin(A).)

    When you think about it, there might be lots of good reasons to add some of these considerations to an AI. Consider a monster in a 3D game, for example, that knows it has a target X and has various obstacles O. The monster should know that its chances of hitting the target depend on its angle to the target (arccos(u.v / sqrt((u.u)*(v.v)))) and the angles between the line of fire and other obstacles. If you didn't know in advance what kinds of calculations would be involved, but knew angles might come into play, then you'd probably want to design neurons that could process hidden trig functions and multiplications, as well as square roots (or generic power functions, anyway).

    Lummox JR
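To illustrate Lummox JR's point about precomputing helpful inputs (like sin(A), or the angle to a target), here's a small Python sketch of my own; the monster, target, and facing values, and the choice of the angle as the extra input, are invented for illustration:

```python
import math

# Sketch of precomputing a derived input: instead of handing the network only
# the raw vector to the target, we also hand it the angle between the
# monster's facing and that vector, so the network doesn't have to learn trig.

def angle_between(u, v):
    # arccos(u.v / sqrt((u.u)(v.v))): angle between two 2D vectors, in radians.
    dot = u[0] * v[0] + u[1] * v[1]
    return math.acos(dot / math.sqrt((u[0]**2 + u[1]**2) * (v[0]**2 + v[1]**2)))

facing = (1.0, 0.0)            # direction the monster is looking
to_target = (1.0, 1.0)         # vector from the monster to the target

inputs = [to_target[0], to_target[1], angle_between(facing, to_target)]
print(inputs)                  # the third value is pi/4: 45 degrees off-center
```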
Is this a neural network?

http://www.20q.org/

Z
In response to Zilal
I would say so! I spent an hour on that thing, and I'm coming back!
In response to Lord of Water
It's a hoot, isn't it?

Z

"A baby carriage does not urinate on its hands."
Artificial Intelligence, Artificial Life, and Neural Networks are all BIG interests of mine.

As to your question on NNs, it's like a tree of nodes.
It has a starting-point variable, then it branches out into nodes, giving the variable multiple paths and choices to make as it branches out. For one example, reinforced learning would be a piece of data that starts out, goes down a random node, and each time it passes through that node, the node's variable increases by 1. The end results are checked and compared. Depending on the desired outcome, the node path with the most activity becomes like a primary decision path. But there are still other nodes that can change the pattern or combine paths.
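Here's a very loose Python toy of that path-counting idea (my own sketch, nothing authoritative): data wanders down a random branch, the branch's counter goes up whenever the run ends in the desired outcome, and the branch with the most activity becomes the primary decision path:

```python
import random

# Toy sketch of reinforced path selection: each branch keeps a counter,
# the counter grows when a run through that branch ends in the desired
# outcome, and the most-used branch becomes the primary decision path.
# The branches and the "desired outcome" rule are invented.

counters = {"left": 0, "right": 0}

def run_once():
    branch = random.choice(list(counters))   # wander down a random node
    desired = (branch == "right")            # pretend "right" leads to the goal
    if desired:
        counters[branch] += 1                # reinforce the node on that path
    return branch

for _ in range(100):
    run_once()

primary = max(counters, key=counters.get)    # the path with the most activity
print(counters, "primary path:", primary)    # "right" should dominate
```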

One of the founding fathers of NNs is Marvin Minsky. You can check up on his work as he is still active in this area.

LJR
In response to Ernie Dirt
Hmmm I found all this very interesting! :)
Ever play Creatures?

LJR

In response to Zilal
Yeah, that's fun :)
In response to Zilal
Zilal wrote:
Is this a neural network?

http://www.20q.org/

Z

Hey! That is very much like the Animal Guessing Demo I released.