Having played complex strategy games for many years, one of the things that irks me the most is that hard AI levels often just give the dumb AI cheats to simulate it being smarter. To me, it’s not very satisfying to go against cheating AI. Are any games today leveraging neural networks to supplant or augment hand-written decision-tree-based AI? Are any under development? I know AI can be resource intensive, but it seems that at least turn-based games could employ it.
I don’t know what it’s using specifically under the hood, but in Street Fighter 6 Capcom recently added a new AI opponent you can fight that they say is trained on actual player ranked matches and fights more like a human opponent. You can even have it try to mimic your own playstyle if you’ve played enough.
It can do some odd things and its mimicry isn’t perfect. But it definitely doesn’t feel like the typical high difficulty CPU opponent which uses things like input reading to react faster than a real player ever could.
…it also has been seen teabagging.
I’m not into fighting games, but that’s pretty neat! I hope the industry follows suit if people like how it works in Street Fighter 6.
You can train it in mirror matches, but the V Rivals that you can fight other than your own mirror are an amalgamation of a particular rank. There’s a whole lot of skill variance in Master rank alone, so it might be good for training me against Dhalsim, because hardly anyone plays Dhalsim, so no one knows the matchup, but it won’t help me learn how to beat Punk, specifically.
Yeah, there are some disappointing limitations for sure, but it definitely is interesting, and does at least feel more like a human player than the normal CPU opponents.
…if a somewhat schizophrenic one.
Chess.
For most games, it’s not difficult to make AI that can absolutely destroy humans. But it turns out to be very difficult to make AI that feels like a fun and engaging challenge to a human. Hardest of all is making AI that realistically plays like a human does.
However, it is being worked on and coming along; you can play one here
Chess engines were using neural networks for their AI way before it was cool. Different AI skill levels are usually just the same engine limited to different search depths.
The advantage of a neural AI, in my mind, isn’t that it is better. It is that it is worse in a way that is fun.
Chess/Go? AlphaZero would fit that description. Also think they were tackling StarCraft as well?
Oh that’s really interesting; I hadn’t considered racing games as a genre to benefit from this type of machine learning. I guess I figured there’s not so much to AI there that it’s necessary, at least when we already know the “ideal lap line” for cars to follow, but yeah it gets a lot harder when considering other drivers on the track and a huge array of unique car models with their own handling and performance characteristics.
I played Forza Horizon 4 and the Drivatars are pretty convincing. They make exactly the kind of mistakes on the track that I make and they can be challenging but beatable in a way that’s much more fun than any other racing game I had played before.
The challenge is that AI for a video game (even one fixed game) is very problem specific, and there’s no generalized approach/kit for developing AI for games. So while there’s research showing AI can play games, it’s involved lots of iteration and AI expertise. That’s obviously a large barrier for any video game, and that doesn’t even touch the compute requirements.
There’s also the problem of making AI players fun. Too easy and they’re boring, too hard and they’re frustrating. Expert level AI can perform at expert level, which wouldn’t be fun for the average player. Striking the right difficulty balance isn’t easy or obvious.
I wouldn’t mind an AI using unorthodox strategies, but yeah that’s a good point that fine tuning it to be fun is a big challenge. Speaking of “non-player-like behavior”, I wonder if AI could be used to find multiplayer exploits sooner, though the problem there is you don’t really have much training data besides QA and playtesters before a full release.
Historically, AI has found and used exploits. Before OpenAI was known for ChatGPT, they did a lot of work in reinforcement learning (often deployed in game-like scenarios). One of the more mainstream training strategies (pioneered at OpenAI) played Sonic and would exploit bugs in the game, for example.
The compute used for these strategies is pretty high, though. Even crafting a diamond in Minecraft can require playing for hundreds of millions of steps, and even then the AI might not consistently reach its goal. There’s still interesting work in the space, but sadly LLMs have sucked up a lot of the R&D resources.
Trying to live-train AI against your playstyle is both expensive and unnecessary. Hard bots have never really been too much trouble. We don’t really need to use AI to outpace humans in most games; the exceptions are extremely deep games like chess and Go.
There’s been a lot of use in AI for platformers and stuff like trackmania, but not for competition, simply for speedruns.
yeah I would like to leverage AI for stuff like RPG NPCs. instead of hearing the same filler lines for 200 hours of gameplay, barely reacting to the context of your game you could have a vibrant array of endless dialog that actually keeps up with your game progress (or lack thereof).
That would be a pretty good use. LLMs are still a little slow on most home hardware. Hallucinations could also be a little scary. I wonder if that would affect your ESRB rating, since technically it could say anything…
The fear of hallucinations is so great for a commercial company that when Square Enix tried it on a remake of a detective game of theirs, it became the poster child for how awful LLMs are for video games. It’s one of the worst-rated games on Steam; they nerfed it so hard that it’s like talking to a wall, worse than a normal text parser.
It would certainly be nice to have for the fighting games I play. A few have toyed with the idea of “shadow fighters”, but it never really feels like playing against a person. It might get their habits down, but it doesn’t replicate the adaptation of facing a person and having them change how they play based on how you’re playing. If someone could crack that nut, everyone would have someone on their level to play against at any hour of the day, no matter how obscure the game is.
Hard bots have actually been so much trouble that literally the only way to make them hard at all is to make them cheat by allowing them to operate outside of the ruleset the player is bound by. It’s a humongous issue with every strategy game on the market.
This has been discussed a lot over the decades (with some VERY good articles written by assholes we try to pretend don’t exist).
The gist of it is: AI cheats because the alternative isn’t “fun” and rapidly outpaces humans.
Because in an RTS? After you get a build order down, the big decider is Actions Per Minute (APM). From a build standpoint, it is the idea of triggering the appropriate research the absolute second you have enough minerals. From a combat standpoint, it is rapidly issuing move and attack orders so that you always win the combat triangle. The former isn’t significantly different than just having cheaper research or faster build times. The latter is actively demoralizing in the same way that we all died inside when we first got permission to go online in Starcraft. Except at a level that even the good players realize they ain’t shit.
For grand strategy games (barring real-ish time ones like Stellaris) you basically have two real approaches. The first is the games with research options (… like Stellaris. Look, I have been playing a lot of Stellaris lately). We try not to acknowledge it but RNG has a massive impact on that when you really want to get torpedoes but no options are popping so you are just doing the fastest research choices you can to get a new pool. And the difficulty option there is… a known order.
The other is the very elaborate fixed tech tree. Obviously this gets back to build order. And the reality is… the benefit gained from rapidly updating the hard-mode AI to use the current meta just isn’t worth it. That IS somewhere that an optimizing function can be applied (and… semi-off-the-record, but that has been a thing for over a decade and is why devs aren’t THAT surprised when a “new” meta takes over in a strategy game), but it becomes a question of how much it is worth it.
All that said, we are seeing a lot more effort put into “learning” AI in racing games (Drivatars) and fighting games, because those tend to be cases where even the best AI is still expected to be “human” and we aren’t TOO demoralized when we realize we are in a pub with Daigo. That said… there is a reason that modern SNK bosses tend to have super armor rather than frame-perfect inputs. Because the former is “bullshit” but the latter is just mean.
APM actually does jack shit. You can spam a button fast, hit 400 APM, and still get rolled by someone who does 40. EAPM is where it’s at: effective APM, meaning how many actions you do that actually move you closer to victory, instead of just spamming two buttons on repeat (which is what a lot of Starcraft players do).
There used to be AIs integrated into Starcraft 2 and later, actually playing the game online like a player would. You can put restrictions on the eAPM of these bots. You can force them to make human mistakes, like delaying upgrades. They can get pretty well approximated to human skill. The main issue is that they suck at context: they can’t really “remember” stuff happening. Picked up a dropship and it flew away from my FOV? It’s gone. Oh shit, a dropship came from the exact same spot! Oh good, it flew away, which means it can’t hurt me no more.
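That kind of handicapping is straightforward to sketch. Below is a toy wrapper (all names, rates, and numbers are invented for illustration, not any real SC2 bot API) that caps a bot’s effective actions per minute and injects occasional human-style fumbles:

```python
import random

class HumanizedBot:
    """Hypothetical wrapper that caps a bot's effective APM and
    occasionally fumbles, to approximate human play."""

    def __init__(self, max_eapm=180, mistake_rate=0.05, rng=random):
        self.min_gap = 60.0 / max_eapm  # seconds between effective actions
        self.mistake_rate = mistake_rate
        self.rng = rng
        self.last_action_at = float("-inf")

    def try_act(self, now, action):
        # Enforce the eAPM budget: drop actions issued too quickly.
        if now - self.last_action_at < self.min_gap:
            return None
        # Occasionally fumble (e.g. "forget" to start an upgrade on time).
        if self.rng.random() < self.mistake_rate:
            return None
        self.last_action_at = now
        return action
```

A bot wrapped like this can issue commands as fast as it wants internally, but only `max_eapm` of them per minute actually reach the game, which is the cap the comment above describes.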
There are also tournaments in SC2 for unlimited AIs, where they play the game without any caps. The only thing that matters is who wrote a more efficient bot. Machine learning isn’t really used there; it’s more likely a decision tree. Those do exactly what you are describing. Playing against them as a human is pointless, and anyone who shipped them as a difficulty setting would be instantly fired.
Makes sense. But it seems pedantic to make the distinction between APM and EAPM.
This is reddit. Gotta ignore someone’s post to make a pointless correction that they already addressed but much more aggressively.
The alternative is, Erastil forbid, a conversation.
Didn’t Alien Insurrection use something to learn how you play so the Alien knew to change its tactics?
Yeah, but it’s not like an LLM or neural net thing. The kind of AI used for video games doesn’t need all that to feel smarter/harder.
Making a bot harder is actually easier than making it easier. It’s super straightforward to make one that always wins and is perfect. It’s more involved making a bot that doesn’t always take the best path or the most efficient way of completing the thing. I, personally, could make a Rocket League bot that plays the game better than any human since it’s all just math, and the computer is a calculator. I don’t think I would be able to make one a human player could actually beat though.
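One common way to make a mathematically perfect bot beatable is to deliberately degrade its output. This sketch (the function name, error range, and skill scale are all made up for illustration) adds angular noise to an otherwise perfect shot:

```python
import math, random

def aim_error(perfect_angle, skill, rng=random):
    """Degrade a mathematically perfect shot angle so the bot is beatable.

    skill in [0, 1]: 1.0 reproduces the perfect shot exactly,
    lower values add up to +/-15 degrees of noise (illustrative numbers).
    """
    max_error = math.radians(15) * (1.0 - skill)
    return perfect_angle + rng.uniform(-max_error, max_error)
```

The hard part, as the comment says, isn’t the noise itself but tuning it so the bot misses in ways that look like human misses rather than random jitter.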
L4D has a mechanic like what the OP wants; it’s just not very good (IMO it overcorrects way too hard in both directions). Every time you win or lose, or this happens too much and that happens too little, it keeps track of that and then adjusts things to change it up. If you sprint through one stage without resistance, the next stage will have more infected to deal with. They even gave it a name: the AI Director.
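The core idea behind that kind of director is just a feedback loop on recent player performance. A minimal sketch, with completely made-up thresholds and multipliers (not Valve’s actual tuning):

```python
class Director:
    """Toy 'AI Director': tracks how the last stage went and scales
    the next wave. All thresholds and multipliers are invented."""

    def __init__(self):
        self.intensity = 1.0  # spawn multiplier for the next stage

    def record_stage(self, player_damage_taken, time_taken_s):
        # Breezed through with little damage? Ramp up. Struggled? Back off.
        if player_damage_taken < 20 and time_taken_s < 300:
            self.intensity = min(2.0, self.intensity * 1.25)
        elif player_damage_taken > 60:
            self.intensity = max(0.5, self.intensity * 0.8)

    def spawn_count(self, base):
        return int(base * self.intensity)
```

The “overcorrecting” complaint above maps directly onto those multipliers: make them too aggressive and the game swings between empty streets and hordes.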
The alien in Alien Isolation is like that; but it is better done.
Oh yeah, I read about that! Not really ML, but pretty much what I’d like more games to have.
The most advanced AI I’ve seen is in Hitman WoA, and Zelda: Breath of the Wild.
Both games don’t have “learning” AI. They just have tons of rules that the player can reasonably expect and interact with, that make them seem lifelike. If a guard sees you throw a coin twice in Hitman, he doesn’t get suspicious and investigate - he goes and picks it up just like the first one. Same for reactions to finding guns, briefcases, or your exploding rubber duck.
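That “tons of rules” approach boils down to a fixed stimulus-to-reaction table rather than anything learned. A sketch of the idea (the stimulus names and reactions here are invented, not Hitman’s actual systems):

```python
# Rule-based (non-learning) guard behavior: every stimulus maps to a
# fixed, predictable reaction the player can learn and exploit.
REACTIONS = {
    "coin_tossed": "walk_over_and_pick_up",  # same response every time, by design
    "gun_found": "carry_to_security_room",
    "body_found": "raise_alarm",
}

def guard_react(stimulus):
    # Anything without a rule just falls back to the patrol route.
    return REACTIONS.get(stimulus, "patrol")
```

The predictability is the point: the player builds a mental model of the rules, and the AI feels lifelike precisely because it never surprises you unfairly.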
I’m pretty sure we could have made game AI smarter and/or better than humans for a long time. It’s just not fun to play against; you need AI that you can win against.

What I think should happen, instead of neural networks, is that the AI should gamble a bit more. A good example is EU4, where on hard difficulty the AI will not attack you until it’s sure it can win… which makes it more predictable than the normal AI, because you can reasonably guess whether it will attack you and try to outmaneuver it. Whereas on normal, sometimes it will just attack you if there is a reasonable (or sometimes even unreasonable) chance to win, which makes normal occasionally (very, very rarely) the harder difficulty. Hard is still generally (99.9% of the time) much harder due to AI cheats, but what I said is a thing.

Total War: Warhammer 3 in particular could use that to spice things up. Currently the attacking army will always attack and the defending army will defend, which makes attacking more advantageous, and the army will always wait for reinforcements. They could, for example, make it so that depending on army composition (or even just RNG) the defending army will sometimes attack (say, when it has only melee combatants) so that you don’t have time to deal damage with your mage. Or the opposite: make the attacking army stay still, protect its artillery, and bombard you with cannons if it has lots of artillery. Just some basic strategies so the fights aren’t always so similar at the beginning.
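The EU4 contrast described here fits in a few lines. This is a toy version with invented thresholds (EU4’s real logic is far more involved): the “hard” AI only attacks near-certain wins, which is exactly what makes it predictable, while “normal” gambles in proportion to its estimated odds.

```python
import random

def will_attack(win_chance, difficulty, rng=random):
    """Toy declare-war decision. Thresholds are illustrative only."""
    if difficulty == "hard":
        # Predictable: only attacks when victory is near-certain.
        return win_chance > 0.9
    # Normal: even a long shot is occasionally taken.
    return rng.random() < win_chance
```

Against the hard version you can always stay just under the threshold and be safe; against the normal version you can never fully rule an attack out, which is the unpredictability the comment is asking for.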
Yeah, the easiest thing to implement is omnipotent AI. The code for the AI is executed within the game engine, so you have complete access to any information you want.
You can just query the player position at any point in time, even if there’s a wall between the NPC and the player. It requires extra logic to not use the player position in such a case, or to only use the rough player position after the player made a noise, for example.
Of course, the decision-making is a whole separate story. Even an omnipotent AI won’t know how to use this information, unless you provide it with rules.
I’m guessing, what OP wants is:
- limiting the knowledge of the AI by just feeding it a rendered image like humans see it, and
- somehow train AI on this input, so it figures out such rules on its own.
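The knowledge-gating half of that is the easy part to illustrate. Here is a minimal sketch (the grid-based line-of-sight test and the noise radius are invented for illustration, not any engine’s API): the engine always knows the exact player position, but the NPC only gets it with a clear line of sight, and otherwise only a fuzzed position around the last noise.

```python
import random

def line_of_sight(a, b, blocked, steps=100):
    """Very crude LOS test: sample points along the segment a->b and
    check whether any falls in a blocked grid cell."""
    (ax, ay), (bx, by) = a, b
    for i in range(steps + 1):
        t = i / steps
        cell = (int(ax + (bx - ax) * t), int(ay + (by - ay) * t))
        if cell in blocked:
            return False
    return True

def perceived_player_pos(npc_pos, player_pos, blocked,
                         heard_noise_at=None, rng=random):
    """What the NPC is allowed to know, even though the engine knows all."""
    if line_of_sight(npc_pos, player_pos, blocked):
        return player_pos  # direct sight: exact position
    if heard_noise_at is not None:
        # Only a rough position around the last noise.
        x, y = heard_noise_at
        return (x + rng.uniform(-3, 3), y + rng.uniform(-3, 3))
    return None  # the NPC genuinely doesn't know
```

Training an agent on rendered frames, as the second bullet suggests, is the expensive part; gating the omniscient engine state like this is how most shipped games fake the same effect cheaply.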
ECHO, the third-person action/puzzle game, was a fun concept: it scripts in machine doppelgangers that learn from you (repeating one of the set actions you can do) and reset every cycle.
I don’t think it would work by itself without such limiting.
I always got the impression it wasn’t a learning AI but rather a very limited “Has the player pressed the run button? if YES: AI can use run next cycle”
Yes it is, it’s 100% scripted. And yes, in the environment where you can do like 10 different actions, they start to do their routine adding ones that you used in that cycle before they get reset. In a sense, they act no more natural than monsters from a tabletop game.
But these do make me think that if we’re talking game design with an LLM as an actor, it too should have a very tight set of options around it to learn effectively. The ideal situation is something simplistic, like Google’s dino jumper, where the target is getting as far as it can by recognising a barrier and jumping at the right time.
But when things get not that trivial, like when in CS 1.6 we have a choice to plant a bomb or kill all CTs, it needs a lot of learning to decide which of these two options is statistically right at any moment. And it needs to do this while having a choice of guns, a never-ending branching tree of routes to take, tactics to use, and ways to coexist with its teammates. And with growing complexity it’s hard to make sure that it’s guided right.
Imagine you have thousands of parameters from it playing one year straight, losing and winning. And you need to add weight to the parameters that actually affect its chance to win while it keeps learning. It’s more of a task than writing a believable bot, which is already difficult.
And the way ECHO fakes it… makes it less of a headache. Because if you limit possible options to the point close to Google’s dino, you can establish a firm grasp on teaching the LLM how to behave in a bunch of pre-defined situations.
And if you don’t, it’s probably easier to “fake it” like ECHO or F.E.A.R. does, giving the player an impression of AI when it’s just a complicated script orchestrating the spectacle.
The only issue with current systems is that the “AI” is tweaked to the specific game mechanics. You can easily enough build multiple algorithms for varying play styles and then have it adapt to counter the play style of the player. The problem is that the current way many games are monetized is through expansions, gameplay tweaks, etc., as well as those being necessary when a game mechanic turns out to be really poorly implemented or just unpopular and the mechanics change. If the “AI” isn’t modified at the same time to take advantage of the changes, then it becomes easy to beat. The other issue is that eventually a human can learn all of the play style algorithms and learn to counter them, and then it becomes boring.
Unfortunately, generative “AI” is not a true learning model and thus not truly intelligent in any sense of the word. It requires that it is only “taught” with good information. So if it gets any data that includes even slight mistakes, it can end up making lots of those mistakes repeatedly. And if those mistakes aren’t corrected by a human, it doesn’t understand which things were mistakes and how they contributed to winning or losing. It can’t learn that they were mistakes or to not do them. It doesn’t truly understand how to decide something is wrong on its own, only that things are related and how often it should use those relationships over others. Which means manual training is required, which, due to the sheer volume of information required to train a generative “AI”, is not possible in a complex game where the player has thousands of possible moves that each branch to thousands of possible combinations of moves, etc.
The Rain World Animation Process.
While the title suggests only animation, the AI is tied directly into the animations, so you get a two-for-one deal in this video.

GT Sophy in Gran Turismo.
M Rossi in the old Forza games.
Don’t need to be smart when you’re aggressive and have powerful lawyers (impacts, even ones caused by the AI, were charged against the player).
Have you read about AlphaStar, the DeepMind AI that can whoop the best StarCraft players in the world? https://youtu.be/ZsCnuDgDcPo?feature=shared