I am creating a game about disaster and surviving said disaster, and I want the NPCs to be quite dynamic.

My idea is to make the NPC analyse its surroundings somehow: is it safe? Does it have food? Water? Shelter? If not, it should start by looking for water, then food, then shelter. Or, if in a harsh environment, look for shelter first, then everything else.

Basically, I want NPCs to fight players and other NPCs for resources, and for that I want to use a simplified Marslov's Pyramid.

Anyone got any input on that, or a better alternative?
(Warning: grab a snack and strap yourself in... lol)

Key terms to look into are "state machine" and "fuzzy logic".

A state machine is a form of AI where the bot has a set of defined "states" (i.e., current specific goals) which drive its actions. It then includes a decision-making "brain" that accepts input/triggers, evaluates that input (including its own conditions/stats), and uses that evaluation to place itself into the proper "state".

Your system might perform periodic checks on the current conditions of the bot, and switch it into a different "mode" based on its currently most important, but un-met, need (Am I safe? If not, my goal is to run away. Am I safe, but thirsty? My goal is to find water. Am I safe, not thirsty, but hungry? My goal is to find food. Am I safe, hungry, and thirsty? Thirst takes precedence over hunger, so I need to find water. Etc.)

All actions the bot will take will hinge on its current "state"/"mode"/"goal" ("find water", "find shelter", "run away", "attack", etc.)
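To make that concrete, here's a minimal sketch in Python of the periodic priority check described above. The `decide_state` function and the need thresholds are invented here for illustration; the point is just that the checks run in priority order, so thirst wins over hunger even when hunger is higher.

```python
def decide_state(bot):
    """Return the state matching the bot's most important unmet need."""
    if not bot["safe"]:
        return "run away"      # safety outranks everything else
    if bot["thirst"] > 50:
        return "find water"    # thirst takes precedence over hunger
    if bot["hunger"] > 50:
        return "find food"
    return "idle"              # all needs met

bot = {"safe": True, "thirst": 80, "hunger": 90}
print(decide_state(bot))  # "find water", even though hunger is higher
```

The if-chain's order *is* the priority order, so reordering the checks is all it takes to tune a bot's personality.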

The "brain" of the bot, the decision-making procedure that does the work of classifying which "state" it should be in, can be invoked in various ways.

You can have a continuously running loop that forces the bot to evaluate/reevaluate its current state at a constant, periodic rate. (As long as it is still hungry, and that is its most important need, it will remain in the "find food" state, even though it is "thinking" every few seconds.) This sort of bot is flexible (it could decide one minute that its goal should be to find food, but if something changes along the way, it can change its mind and go for a different goal: did I become thirsty while looking for food? Then forget about the food and go find something to drink), but it can become a resource drain (the bot will be running the decision-making procedure over and over and over).

OR, you can give the bot a one-track mind, and only let it reevaluate its state once it has accomplished the goal of the previous state (the bot will remain in "find food" mode until it actually eats, which will be the trigger to run the decision-making procedure again to see what the next state should be). This is more efficient, but it puts blinders on your AI: they'll lock onto a goal and ignore changes in their situation. You can get around that limitation, though, by adding more triggers that force them to run the decision-making proc (every change to their situation comes with a trigger to their "brain": if their thirst level goes up, call the decision-making proc; if an enemy comes into range, call the decision-making proc; if they spot water or food, run the decision-making proc to see if they should go after it; etc.)
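Here's a rough Python sketch of that trigger-driven approach. The class and event names (`on_thirst_change`, `on_enemy_spotted`) are made up; the idea is just that the decision proc only runs when something actually changes, instead of on a constant think loop.

```python
class Bot:
    def __init__(self):
        self.thirst = 0
        self.state = "idle"

    def decide(self):
        # the "brain": only runs when a trigger fires
        self.state = "find water" if self.thirst > 50 else "idle"

    def on_thirst_change(self, amount):
        self.thirst += amount
        self.decide()              # trigger: thirst level changed

    def on_enemy_spotted(self):
        self.state = "run away"    # trigger: danger overrides the goal

b = Bot()
b.on_thirst_change(60)
print(b.state)  # "find water"
b.on_enemy_spotted()
print(b.state)  # "run away"
```

Each event handler ends by (directly or indirectly) re-running the decision logic, which is exactly the "every change comes with a trigger" pattern described above.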

There's one more limitation on a "state machine" system, though. The traditional type is usually referred to as a "finite" state machine, because it only allows the bot to be in one state at a time. They're either in "find food" mode, or they're in "run away" mode, or they're in "find shelter" mode, etc.

This is where principles of "fuzzy logic" can enhance the process. "Fuzzy logic" is a method of allowing the bot to be in each state based on a percentage/value range. They're not in "find food" mode -OR- "find water" mode, they can be half-in "find food" -AND- half-in "find water" (or 10% in "find food" and 90% "find water", or whatever)

Your decision-making proc, then, would need to decide not only which state(s) they need to be in, but how badly they need to be in those states. It stops being "am I thirsty or hungry?", and becomes "I am thirsty, but I am also hungry; which should get more of my focus?"

Their actions, then, can no longer just be a simple if/or switch based on their single state, but rather they need to be weighted based on how much they want/need something. (I'm thirsty and hungry. Thirst is more important to satisfy than hunger, but I am so hungry that my thirst is still less important to me. If I see water, I may or may not go for it, if I see food, I will definitely go for it. If I see both, I'll pick the food first.)
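A tiny Python sketch of that weighting idea (the multipliers here are arbitrary, just for illustration): each drive gets a score, thirst is weighted more heavily per-point, but a sufficiently extreme hunger can still win.

```python
def drive_weights(thirst, hunger):
    # thirst counts for more per point, but doesn't automatically win
    return {"find water": thirst * 1.5, "find food": hunger * 1.0}

weights = drive_weights(thirst=40, hunger=90)
# weighted thirst is 60, hunger is 90, so food gets the focus
best = max(weights, key=weights.get)
print(best)  # "find food"
```

Because every state has a score rather than an on/off flag, the bot is effectively "in" several states at once, which is the fuzzy-logic upgrade over a finite state machine.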

Hope that didn't ramble on too long, and that it helps!
Do you mean Maslov's Pyramid? Because if not I'm pretty confused.
Whoops yeah Marslovs, i always get it wrong, my English teacher hates me for that :P
Maslow!!!!!!!!! <- DAT!
Geez, is it only me whom it infuriates?

Generally speaking, making an AI based on Maslow's hierarchy of needs would be rather interesting: it would set the priorities of the NPC towards various actions and goals. Of course, the priorities could be changed by modifiers, and their critical levels could and should be set so as to give the NPC different purposes and motivators.

But there are also some questionable aspects, for example different species, which would have different priorities as well as different responses to stimuli. As for certain species you'd have to completely reconstruct the hierarchy.

Sounds like a rather interesting concept to build AI upon.
In response to Taitz
Taitz wrote:
Maslow!!!!!!!!! <- DAT!
Geez, is it only me whom it infuriates?

No, it infuriates me as well...but then again the OP's username is "Bongfarts_Stonersatan" so I'm not incredibly surprised :P

As for certain species you'd have to completely reconstruct the hierarchy.

Not necessarily. If you had weighted priorities like SSGX was talking about, I imagine you could produce very different behavior just by changing a species' initial probabilities. So, you might have a camel that doesn't look for water nearly as often as a bird, or you might have a plant-creature that never eats because it uses photosynthesis (perhaps it would just try to stay out of shadows?).

DOOM had an interesting way of handling monsters changing their targets:

(from http://doom.wikia.com/wiki/Monster_behavior) After a monster awakens from its dormant state and it is hit for the first time, a target countdown timer called "threshold" is activated. The longer a monster walks around with the same target, the lower the threshold gets. As long as the threshold remains positive, the monster will not change its target even if it is hit by another player or a monster. The monster will only choose a new target if the threshold reaches zero and the monster is hit by another monster or player, or if its current target dies. The threshold system does not apply to the arch-vile, which changes its target immediately to the player or monster that hurts it.
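A rough Python sketch of that threshold mechanic, based only on the quoted wiki description (the actual engine's values and tick rates differ; the names here are invented):

```python
class Monster:
    def __init__(self):
        self.target = None
        self.threshold = 0

    def acquire(self, target):
        self.target = target
        self.threshold = 100       # hypothetical starting value

    def tick(self):
        # the longer it keeps the same target, the lower the threshold
        if self.threshold > 0:
            self.threshold -= 1

    def on_hit_by(self, attacker):
        # only retarget once the threshold has drained to zero
        if self.target is None or self.threshold <= 0:
            self.acquire(attacker)

m = Monster()
m.acquire("player1")
m.on_hit_by("player2")
print(m.target)   # still "player1": the threshold protects the target
for _ in range(100):
    m.tick()
m.on_hit_by("player2")
print(m.target)   # now "player2"
```

The arch-vile case from the quote would just be a subclass whose `on_hit_by` skips the threshold check entirely.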

I think that's a pretty cool idea, allowing AI to "give up" on a certain objective or target after some time passes
In response to Magicsofa
Magicsofa wrote:
I think that's a pretty cool idea, allowing AI to "give up" on a certain objective or target after some time passes

Incidentally, this is what the current version of AI in Murder Mansion does.

One problem they had in the past was that sometimes they'd get "stuck". They'd have a goal, and be on their way to it, but something would get them hung up and they'd end up just standing still.

So, as a failsafe in the new and improved version, there are actually two triggers to make them drop their current target (which gives their routines a chance to set a new goal).

One such trigger is semi-randomized and occurs on a mostly regular time period (every 30 or so seconds; I find that generally, that's long enough for them to satisfy their current goal, so if they're still working on something after that long, they should probably abort the mission)

The second is a counter that tracks how long they've been standing still (unless they specifically mean to be standing still), because standing still means that they've gotten stuck (perhaps they've set a target that their pathing can't find a way to reach, or perhaps they've somehow selected an invalid target, or whatever else)

So upon either of these two conditions, they'll drop whatever they're trying to do, and pick something else (it may be the same goal, like getting food, but they'll pick another food source, or try another route)
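The two failsafes could be sketched like this in Python (the names, the ~30-second window, and the stillness limit are stand-ins; Murder Mansion's actual code is BYOND, not Python):

```python
import random

class NPC:
    def __init__(self):
        self.goal = None
        self.goal_timer = 0
        self.still_ticks = 0

    def set_goal(self, goal):
        self.goal = goal
        self.goal_timer = 30 + random.randint(-5, 5)  # ~30s, semi-randomized
        self.still_ticks = 0

    def tick(self, moved):
        self.goal_timer -= 1
        self.still_ticks = 0 if moved else self.still_ticks + 1
        # either trigger drops the goal so the brain can pick a new one
        if self.goal_timer <= 0 or self.still_ticks >= 5:
            self.goal = None

npc = NPC()
npc.set_goal("find food")
for _ in range(5):
    npc.tick(moved=False)   # stuck in place for 5 ticks
print(npc.goal)             # None: the stillness failsafe fired
```

Setting `goal = None` is the "drop whatever they're trying to do" step; the next brain pass is then free to re-pick the same goal with a different target or route.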

They've received several other tweaks that should prevent them from getting stuck in the first place, but if it should ever happen, there's a way out.
Sorry for misspelling Maslow :S

And don't judge me on my name, I was like totally out of this world at the time. Wish I could change it.

Wanna know what would be cool? If creatures had DNA and the pyramid of behavior was inherited through it! The creature with the most efficient behavior would become dominant!
In response to Bongfarts_Stonersatan
Bongfarts_Stonersatan wrote:
Wanna know what would be cool? If creatures had DNA and the pyramid of behavior was inherited through it! The creature with the most efficient behavior would become dominant!

This could be achieved, but it would require a massive undertaking in design and implementation.

In order to make it realistic enough, you'd need to replicate a very large number of traits and define how those traits affect, enhance, or hinder the behaviors/goals/drives of the animals.

At the basic level, this is already how games work. For most games, every mob has a set of variables that govern a limited set of traits (this varies from game to game, but in general, many of them have a similar list: health, speed, strength, defense, etc.) This set of variables is the mob's "DNA", and the levels of each trait will force a very simple form of evolution/survival of the fittest once you pit them against each other.

This basic concept could just be extended (again, given a HUGE effort in design; listing every possible trait that you can think of, then devising the general functions that pit those traits against each other to see which ones give the greatest advantages, etc., and a HUGE effort in programming everything into a game)

So (like most things that would make for killer games), it is entirely possible, just probably beyond the reach of the effort anyone would be willing to put into it.
I think there's a mid-way point that would not take such a huge effort. If you were to program mobs that could reproduce (don't get any funny ideas), then they would of course inherit the "stats" of their parent(s), possibly with some variation. Put -those- mobs on a map together and see what happens.
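That mid-way point could look something like this in Python: the stat dict is the mob's "DNA", and offspring inherit each stat nudged by a little random variation. The stat names and the ±10% mutation range are invented for illustration.

```python
import random

def reproduce(parent_stats, mutation=0.1):
    """Child inherits each stat, varied by up to +/-10%."""
    return {
        stat: max(1, round(value * random.uniform(1 - mutation, 1 + mutation)))
        for stat, value in parent_stats.items()
    }

parent = {"health": 100, "speed": 10, "strength": 20}
child = reproduce(parent)
print(child)  # close to the parent, but with some drift
```

Drop a population of these on a map, let survival decide who gets to call `reproduce`, and the stat distribution will drift toward whatever the environment rewards, which is the simple survival-of-the-fittest effect described above.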

The hardest part would probably be balancing such an environment to create a fun game. The AI would also get very complex if you wanted realistic social behavior.