I just got the Programming Gems Volume 1 book, which has a bunch of excellent short papers on various elements of game programming, including a batch on AI techniques.

A couple of the articles go into some depth on a concept I've heard of many times but never really learned: Finite State Machines.

However, I didn't get from reading the articles much sense of why taking this approach would be valuable.

Does anyone have any thoughts on this?
Deadron wrote:
I just got the Programming Gems Volume 1 book, which has a bunch of excellent short papers on various elements of game programming, including a batch on AI techniques.

A couple of the articles go into some depth on a concept I've heard of many times but never really learned: Finite State Machines.

However, I didn't get from reading the articles much sense of why taking this approach would be valuable.

Does anyone have any thoughts on this?

5 dimes says that bit on finite state machines was written by Andre LaMothe, who is to game programming books what Dr. Joyce Brothers is to talk show appearances.

I think the problem with finite state machines in terms of AI--I've never found them extremely useful--is that they make discrete decisions. Some people mix this up a bit by using probabilities to determine the next state, as LaMothe himself recommends, but it boils down to the same thing. You're either in a state or you're not.

What comes closer to the human thinking process is that we constantly weigh factors, like our emotional state, or how close so-and-so is getting, or whether we think we can take a fight or should bug out while we still can... Once we make a decision, we commit to it, but only partially; we reserve the right to change our minds if circumstances change. So a more realistic type of AI would be a state machine whose state is determined by those factors, but where the decision can change after a certain time (we can't just change our minds every split second) if the factors that led up to the decision have changed by a certain tolerance.

Think of it like this: Each possible decision is an N-dimensional vector corresponding to the things we think about to make that decision. The dot product of this decision with the factors we consider, divided by the product of their lengths, is the cosine of the angle between them; basic vector math. For each possible decision we can make, there will be a corresponding "angle" to it based on current factors; the highest cosine value should win. If we're not doing anything yet, we just go with the best option. If we're already committed to an option, there should be a threshold a rival option has to beat before we'll even consider switching; once one does, we make a new decision and stick with that.

The math of this is pretty simple:
proc/Decide(list/input,currentchoice)
    var/list/L
    var/mag_input = 0
    var/mag_choice = 0
    var/cosine = 0
    var/maxcosine = -1  // cosine never goes below -1, so some choice always wins
    var/threshold = 0.2 // our fudge factor
    var/bestchoice = currentchoice
    for(var/i = 1, i <= input.len, ++i)
        mag_input += input[i] * input[i]
    if(currentchoice) // already committed: the current choice gets a head start
        L = choices[currentchoice]
        for(var/i = 1, i <= L.len, ++i)
            cosine += input[i] * L[i]
            mag_choice += L[i] * L[i]
        maxcosine = cosine / sqrt(mag_input * mag_choice) + threshold
    for(var/possiblechoice in choices)
        L = choices[possiblechoice]
        cosine = 0
        mag_choice = 0
        for(var/i = 1, i <= L.len, ++i)
            cosine += input[i] * L[i]
            mag_choice += L[i] * L[i]
        cosine /= sqrt(mag_input * mag_choice)
        if(cosine > maxcosine)
            maxcosine = cosine
            bestchoice = possiblechoice
    return bestchoice

Here you have choices, an associative list with items (like "Attack" or "Evade" or "Wander") associated with lists of inputs. Here all the lists are the same size; it doesn't really have to be done that way, but could be handled even better by more associative lists. Whatever creature uses this AI system, they'll stick to a decision until circumstances change enough to make another choice significantly better. The advantage of using the threshold is that if two choices look more or less equally attractive, the creature won't flip-flop between them constantly due to little variations in the input.
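For anyone who wants to experiment with the idea outside BYOND, here's roughly the same thing as a Python sketch. The choices table and its factor weights are invented for illustration (say the three factors are aggression, caution, and target proximity):

```python
import math

# Hypothetical choice table: each option maps to a weight vector over the
# same factors the inputs measure. All vectors are assumed non-zero.
choices = {
    "Attack": [0.9, 0.1, 0.8],
    "Evade":  [0.1, 0.9, 0.7],
    "Wander": [0.2, 0.3, 0.0],
}

def cosine(a, b):
    # cosine of the angle between vectors: dot product over product of lengths
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

def decide(inputs, current_choice=None, threshold=0.2):
    """Pick the choice whose weights best match the inputs, but stick with
    current_choice unless a rival beats its cosine by more than threshold."""
    best_choice = current_choice
    if current_choice is not None:
        # The committed choice gets a head start equal to the threshold.
        max_cosine = cosine(inputs, choices[current_choice]) + threshold
    else:
        max_cosine = -1.0  # cosine never drops below -1, so something wins
    for name, weights in choices.items():
        c = cosine(inputs, weights)
        if c > max_cosine:
            max_cosine = c
            best_choice = name
    return best_choice
```

With these made-up numbers, an uncommitted creature whose inputs lean slightly toward caution picks "Evade", while one already committed to "Attack" ignores the same small lean -- which is exactly the anti-flip-flop behavior described above.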

Finite state machines sound good in theory, but in reality I find I have trouble figuring out how I want to implement them, or how to deal with subtle problems like a wildly changing environment.

You might also want to consider something like this in terms of chosen direction of motion. In an Asteroids-like game I once made, I found that in the AI for computer-controlled ships, a ship would tend to "wobble" between directions because its AI was limited to things like deciding when to turn left or right or to shoot. Things like this would benefit a bit from a threshold system too, or perhaps a limited sense of a flight path. In BYOND this concept doesn't apply directly, but the idea of making a decision and sticking with it is a good one overall.
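The wobble problem can be handled the same way with a dead zone on the heading error. A quick sketch (Python rather than DM, and the degree convention here is just for illustration):

```python
def steer(current_heading, desired_heading, dead_zone=10):
    """Return -1 (turn left), +1 (turn right), or 0 (hold course).
    Headings are in degrees. Errors inside the dead zone are ignored,
    so tiny shifts in the desired heading don't make the ship wobble
    left-right every tick."""
    # Wrap the error into (-180, 180] so we always turn the short way around.
    error = (desired_heading - current_heading + 180) % 360 - 180
    if abs(error) <= dead_zone:
        return 0
    return 1 if error > 0 else -1
```

The dead zone plays the same role as the threshold above: the ship commits to its current course until the situation changes by enough to justify a turn.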

Lummox JR
In response to Lummox JR
But here, Lummox JR, you do not leave enough room for idleness. Humans spend a lot of time idle. They are not constantly making decisions. This would have to be factored logically into the equations, too.
In response to Lord of Water
They're only idle because they choose to be.

The couch potato only stays idle because he chooses not to take the cans to the supermarket.
In response to Foomer
I must concur with Foomer on this. In terms of behavior, you are never really "idle" unless maybe you are unconscious. In all other cases, apparent "idleness" is actually a choice to pursue one course of action over another. For example, even if you are just sitting there, you weighed the benefits of sitting there in the first place against other possible actions. Then you continue to weigh them. Once another alternative becomes more attractive, you do something else (like go to the bathroom) until that option no longer seems optimal.
Notice I said seems optimal. This reflects that the weighing is subjective and dependent on the actor. That is, one man's "idleness" is another's exhausting day.

-James
In response to Jmurph
Jmurph wrote:
I must concur with Foomer on this. In terms of behavior, you are never really "idle" unless maybe you are unconscious. In all other cases, apparent "idleness" is actually a choice to pursue one course of action over another. For example, even if you are just sitting there, you weighed the benefits of sitting there in the first place against other possible actions. Then you continue to weigh them. Once another alternative becomes more attractive, you do something else (like go to the bathroom) until that option no longer seems optimal.
Notice I said seems optimal. This reflects that the weighing is subjective and dependent on the actor. That is, one man's "idleness" is another's exhausting day.

Yeah. Or, for example, running. Some people would run because it makes them feel good, while others would run only based on necessity.

Everything in the universe is a matter of perspective. (*looks around for Jobe suspiciously*)
In response to Lummox JR
Lummox JR wrote:
5 dimes says that bit on finite state machines was written by Andre LaMothe, who is to game programming books what Dr. Joyce Brothers is to talk show appearances.

No, one was written by Steve Rabin and one by Eric Dybsand. LaMothe did provide an article on neural nets, which aside from my plan to read it out of curiosity, I'm not insane enough to touch with a 10-foot pole since I believe in results.


I think the problem with finite state machines in terms of AI--I've never found them extremely useful--is that they make discrete decisions.

I didn't get the impression that the functionality for choosing a decision was interesting to Rabin, who wrote the more general article. In fact he doesn't discuss that at all. What he lists as the main advantages of using an FSM system are:

1. Easily allow communication between game objects
2. Offer a general and readable solution to implementing AI behavior
3. Facilitate keeping debug records of every event

The feature of his proposed approach is that everything that happens in the game goes through a message router (in the form of messages that could be dumped to a log file), and the router sends them to the game objects. It's really a kind of notification system, where any object can register for any notification.

That part is not really FSM... what he seems to like about FSM is that by having discrete states which you enter and leave, with entry and exit functions (constructor and destructor, essentially), it's easy to understand and debug why an object is in the state it's in, and it's easy to add a state.
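A minimal sketch of that structure (this is Python, not code from the book, and the state/event names are invented): named states with explicit enter/exit, plus a log of every transition, which is the debuggability win he describes.

```python
class StateMachine:
    """Named states with explicit enter/exit hooks; every transition is
    appended to a log that can be dumped when debugging."""

    def __init__(self, initial, transitions):
        # transitions maps (state, event) -> next state
        self.transitions = transitions
        self.log = []
        self.state = None
        self._enter(initial)

    def _enter(self, state):
        # The state's "constructor": a real game would run setup code here.
        self.state = state
        self.log.append(f"enter:{state}")

    def _exit(self):
        # The state's "destructor": cleanup code would run here.
        self.log.append(f"exit:{self.state}")

    def handle(self, event):
        key = (self.state, event)
        if key in self.transitions:  # events with no transition are ignored
            self._exit()
            self._enter(self.transitions[key])
        return self.state
```

For example, a creature built as `StateMachine("Wander", {("Wander", "see_enemy"): "Attack", ("Attack", "enemy_dead"): "Wander"})` ends up with a log reading `enter:Wander, exit:Wander, enter:Attack` after spotting an enemy, so you can always say exactly which state it's in and how it got there.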

That's where my curiosity comes in... the paltry bit of AI programming I've done for Living & Dead is already implicitly state-based, along the lines of a tree:

Do I have a valid combat target?

    If no target, see if I can acquire one
        Am I being attacked?
            Make the attacker my target
        Is someone in view of me being attacked?
            Make the attacker my target

    If I have a target, attack it
        Wait for next round

    No target
        Chance of moving randomly

These are states and choices between states, but they are not formally defined as states. The states are mixed up a bit (having a target is one indicator, being dead is another), and there is no formal entry or exit from one state to another.
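Written with an explicit state value instead of ad-hoc checks, the same tree might look like this Python sketch (the state names are invented):

```python
def next_state(has_target, being_attacked, ally_attacked):
    """One round of the decision tree above as an explicit state choice:
    the creature can always name the state it is in."""
    if has_target:
        return "Attacking"   # have a target: attack it, wait for next round
    if being_attacked or ally_attacked:
        return "Acquiring"   # make the attacker my target
    return "Wandering"       # no target: chance of moving randomly
```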

So that's where my question lies: The value of formal states with entry and exit, and the ability to say "This is the state I'm in".
In response to Spuzzum
Spuzzum wrote:
Jmurph wrote:
I must concur with Foomer on this. In terms of behavior, you are never really "idle" unless maybe you are unconscious. In all other cases, apparent "idleness" is actually a choice to pursue one course of action over another. For example, even if you are just sitting there, you weighed the benefits of sitting there in the first place against other possible actions. Then you continue to weigh them. Once another alternative becomes more attractive, you do something else (like go to the bathroom) until that option no longer seems optimal.
Notice I said seems optimal. This reflects that the weighing is subjective and dependent on the actor. That is, one man's "idleness" is another's exhausting day.

Yeah. Or, for example, running. Some people would run because it makes them feel good, while others would run only based on necessity.

Everything in the universe is a matter of perspective. (*looks around for Jobe suspiciously*)


ah... i agree with you there.
In response to jobe
Hmm. Why do you keep replying to old posts?
In response to Nadrew
He's replying to anything that has his name in it that he hasn't replied to yet. I don't disagree with that logic.