Machine Learning for FPS game
I have had difficulty creating AI for my multiplayer first-person shooter because there are a lot of behaviors (peeking, flanking, prefiring, etc.) that I haven't been able to code into the AI. I realize that a lot of these behaviors would emerge from an AI that trained against itself in the FPS game (adversarial self-play).
I've watched some tutorials on setting up machine learning in Unity, and I've worked with feed-forward neural networks before. I feel ready to get started.
The BIG issue is how to feed image data into the AI efficiently. For this to work, the current state of the screen would have to be fed into the AI as input every frame. Is this even possible with Unity's machine learning implementation? And if I'm training 10 AI agents simultaneously (simulating a 5v5 match), is that feasible for my computer to run?
Answer by Bunny83 · Dec 16, 2017 at 02:06 PM
No, no, no ^^. Feeding in an image as input makes no sense in a case like this. You do things like that for more general AIs, but those would need to be trained on millions of different levels / level designs, which just isn't practical here. It would also be way too slow: processing image input requires convolutional neural networks, and calculating the kernel convolutions every frame won't work out well. That kind of AI is used for analytical purposes, not for realtime game AI.
In general there are many ways to design the input of a game AI. Since the whole environment is artificial, we already have the whole scene in a much more abstract and easier-to-process form. It usually helps to tell the AI about certain things in the world, either as part of the level design or through some kind of automated level post-processing. Games like Half-Life require info_nodes to be placed by the level designer to specify where an AI can move; those might also be used as movement targets for scripted sequences.
In Unity we have, for example, the NavMesh, which already gives you a whole navigation mesh the AI can move on. For certain game features you usually mark specific objects, for example as usable cover.
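To illustrate the "marked objects" idea: once cover spots are hand-placed (or generated by level post-processing), picking one is just a query over known positions. A minimal sketch in Python, with hypothetical positions (a real Unity bot would do this in C# against scene objects):

```python
import math

def nearest_cover(agent_pos, cover_points):
    """Pick the closest hand-marked cover point to the agent.
    agent_pos and cover_points are (x, y) tuples; all values hypothetical."""
    return min(cover_points, key=lambda c: math.dist(agent_pos, c))

# Example: agent at the origin, three marked cover spots in the level.
spot = nearest_cover((0, 0), [(5, 5), (1, 1), (10, 0)])
print(spot)  # -> (1, 1)
```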
Generally your AI knows everything about the map and the position of every object in the scene. The trick is to selectively ignore information the AI should not have. For example, the AI would do a visibility check (a raycast) to each enemy player; if the check fails, the AI doesn't know about that enemy. If an enemy comes into view, you might decide whether the AI actually recognises the enemy based on several factors. If the line of sight is outside the AI's current field of view, ignore it. If it is in view, you can add some reaction time before the AI actually reacts to that input; the further away an enemy player is (and so the smaller its relative size in the view), the longer you make the reaction time. Things like that can easily be tweaked with AnimationCurves.
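The view-cone check and the distance-scaled reaction time above can be sketched in a few lines. This is a simplified 2D stand-in with made-up parameter values (a real implementation would raycast against level geometry and sample an actual AnimationCurve):

```python
import math

def reaction_time(distance, min_time=0.15, max_time=0.8, max_range=50.0):
    """Scale reaction time with distance: a far-away enemy looks smaller,
    so the bot takes longer to notice it. Linear here; in Unity you would
    evaluate an AnimationCurve instead. All constants are hypothetical."""
    t = min(distance / max_range, 1.0)
    return min_time + (max_time - min_time) * t

def can_see(agent_pos, agent_forward, enemy_pos, fov_deg=90.0):
    """Field-of-view check: the enemy must lie within the agent's view cone.
    Returns (visible, distance). A real bot would also raycast for occlusion."""
    dx, dy = enemy_pos[0] - agent_pos[0], enemy_pos[1] - agent_pos[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return True, 0.0
    # angle between the forward vector and the direction to the enemy
    dot = (agent_forward[0] * dx + agent_forward[1] * dy) / dist
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
    return angle <= fov_deg / 2, dist

visible, dist = can_see((0, 0), (1, 0), (10, 0))
if visible:
    print(f"react after {reaction_time(dist):.2f}s")  # -> react after 0.28s
```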
All those "cleaned" input states could be fed into a neural network to decide what action the AI should perform. However, which abstracted inputs you need depends heavily on the kind of game and what the AI should be capable of doing. Most game AIs aren't very intelligent in a general sense; they're designed for a specific task. If your game requires the AI to reach a checkpoint and perform a certain action there, you can't expect it to learn that by trial and error alone; you know that true evolution took billions of years ^^. The more complex the task, the harder it gets. The game mode also plays an important role. In a deathmatch game an AI should probably hunt down an enemy that has gone out of sight, but if it's about checkpoint capturing, it would be stupid to let the AI chase an enemy, since that's not the goal of the game. Any human team could figure out that "weakness" and have one "bait" player drag the whole enemy team somewhere it's not a threat.
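To make the "cleaned input states" idea concrete: a feed-forward network needs a fixed-size vector, so the abstract game state is flattened into numbered slots, with padding for enemies that aren't visible. A sketch with hypothetical field names and normalization constants:

```python
def build_observation(agent, visible_enemies, max_enemies=5):
    """Flatten the abstract game state into a fixed-size vector that a
    feed-forward network can consume. All keys and scale factors here
    are hypothetical, not an ML-Agents API."""
    obs = [agent["health"] / 100.0, agent["ammo"] / 30.0]
    # Pad/truncate to a fixed number of enemy slots so the input size
    # is constant: [visible_flag, rel_x, rel_y] per slot.
    for i in range(max_enemies):
        if i < len(visible_enemies):
            e = visible_enemies[i]
            obs.extend([1.0, e["rel_x"] / 50.0, e["rel_y"] / 50.0])
        else:
            obs.extend([0.0, 0.0, 0.0])  # "no enemy seen in this slot"
    return obs

# One visible enemy 10 units right, 5 units ahead of the agent.
vec = build_observation({"health": 100, "ammo": 30},
                        [{"rel_x": 10, "rel_y": 5}])
print(len(vec))  # -> 17 (2 agent values + 5 slots * 3 values)
```

Only enemies that passed the visibility/reaction-time filtering should ever appear in `visible_enemies`; that is exactly how the "selectively ignore information" rule reaches the network.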
If you have some free time, you may want to look through the Computerphile AI playlist to get a better feeling for how most AIs work.
That was a great nod to Valve's way of making bots. I actually watched a walkthrough of how their AI works in Counter-Strike, and it does rely on a number of different info_nodes that point to cover, escape routes, etc. They also tessellate the ground plane into shapes rather than construct one mesh like Unity does; I'm not sure which approach is better.
I think you're right that feeding in an image as input would be a huge waste of time, since it ignores the fact that a simplified version of the world is already available to the AI.
I have been programming my bots for my FPS Capture the Flag game manually for some time now. The reason I want to use machine learning is that I would like to let the AI find the best way to play the game on its own, without me having to program it manually; that would save me many headaches.
I was originally thinking of making a bot for a defuse game mode (like Counter-Strike), but there were so many behaviors that I was not able to produce using the NavMesh provided by Unity, such as cutting the pie, taking cover, and other elements of situational awareness.
Here is a walkthrough of cutting the pie that I made in hopes that I would be able to program it:
https://www.figma.com/file/ekWZTrsn5uhAqV3iL7Q9WalZ/FPS-AI
I was not able to program this behavior. This is why I've switched to making a capture the flag game; it's a simpler game mode.
Thank you for all the help and links! I will definitely take your advice to heart.