EA partners with the company behind Stable Diffusion to make games with AI
-
Sorry, but procedural generation will never give you the same results that a well-tuned small LLM can.
Also, there's no "hoping": LLM context preservation and dynamic memory can easily be fine-tuned, even on micro models.
-
It would be interesting if NPCs could react believably to free-form input from the player, but that seems unlikely to be reliable. I'm not sure how you'd map the text to in-game actions.
Like, you could have the LLM respond with "ok, I'll let you have my horse if you leave in the morning and solve the zombie problem", but it's not trivial to make the game world respond to that. Not even counting it breaking out into "sorry, as an LLM I can't ..."
Yes, it is trivial.
LLMs can already do tool calling, emotion metadata output, and so on. It would take minimal effort for a well-tuned model to also output things like facial expressions, body language, and hand and body movements.
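As a minimal sketch of what that could look like - the schema and field names here are hypothetical, not any particular engine's or model's API:

```python
import json

# Hypothetical structured reply a fine-tuned model could be constrained to emit:
# dialogue plus machine-readable metadata the game engine can act on directly.
raw_model_output = """
{
  "dialogue": "Fine. Take the horse, but be gone by sunrise.",
  "emotion": "reluctant",
  "facial_expression": "frown",
  "gesture": "arms_crossed",
  "actions": [
    {"type": "give_item", "item": "horse"},
    {"type": "set_quest_flag", "flag": "promised_to_solve_zombie_problem"}
  ]
}
"""

reply = json.loads(raw_model_output)
print(reply["dialogue"])
for action in reply["actions"]:
    # Each action is dispatched through the engine's normal scripting layer.
    print("dispatch:", action)
```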
-
I agree that the results will be different, and certainly a very narrowly trained conversational LLM could have some potential if it has proper guardrails. Either way, there's a lot of prep beforehand to make sure the boundaries are very clear. Which would work better is debatable and depends on the application. I've played around with plenty of fine-tuned models, and they will get off track contextually with enough data. LLMs and procedural generation have a lot in common, but the latter is far easier to manage for predictable outputs because of how the probability is used to create them.
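To make the predictability point concrete: procedural generation draws from a seeded pseudo-random stream, so the same seed always reproduces the same output, which is what makes it easy to manage compared to sampled LLM text. A minimal sketch:

```python
import random

def generate_dungeon(seed: int, rooms: int = 5):
    """Deterministically generate room dimensions from a seed."""
    rng = random.Random(seed)  # local RNG: same seed, same stream, every time
    return [(rng.randint(3, 10), rng.randint(3, 10)) for _ in range(rooms)]

# Identical seeds always reproduce identical layouts - no drift, no surprises.
assert generate_dungeon(42) == generate_dungeon(42)
print(generate_dungeon(42))
```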
-
No, that's exactly what this is about. They came right out and said as much. It won't work, but they'll cause a lot of damage in the process of failing.
Which is a different article about a (somewhat) unrelated topic.
Using AI for development is already out there, and you can't put that genie back in the bottle. As an engineer, I'm already using it in my daily work for tons of things - I've built separate agents to:
- read work tickets, collate resources, create a work plan, and do the initial footwork (creating branches, moving tickets to the right states, creating a Notion document with the work plan and resources)
- read relevant changes in UI design documents and plan + execute changes (still needs some manual review but e.g. with Android Jetpack Compose, it makes 90-95% of the needed work and requires minimal touch-up)
- do structural work - boilerplates, etc.
- write unit and integration tests (and I'm already working on a UI test automation agent)
- do code reviews on changes, document them, and write appropriate commit messages
- do PR reviews - I still review them myself but an extra eye is always helpful
Guess what: AI didn't replace me, it just allowed me to focus on actually thinking up solutions instead of doing hours of boilerplate stuff.
AI isn't the enemy in software development; companies who think they can replace engineers with AI are. Middle managers will sooner meet that fate, as they were mostly useless anyway.
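For a rough picture of the kind of footwork the first agent handles, here's a minimal sketch; the ticket format and the plan_work stub stand in for the real LLM call and tracker integration:

```python
import subprocess

def plan_work(ticket_text: str) -> str:
    """Stub for the LLM call that drafts a work plan from the ticket."""
    return f"Work plan draft for: {ticket_text}"

def start_ticket(ticket_id: str, ticket_text: str) -> None:
    plan = plan_work(ticket_text)
    branch = f"feature/{ticket_id.lower()}"
    # The git call is real CLI usage; moving the ticket and creating the
    # Notion document would hang off their respective APIs in the same way.
    subprocess.run(["git", "checkout", "-b", branch], check=True)
    print(f"created branch {branch}\n{plan}")

start_ticket("PROJ-123", "Add dark mode toggle to the settings screen")
```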
-
Who fucking asked for that? WHO??!!
The Saudis?
-
This post did not contain any content.
They should concentrate on Disney ones.
-
You are going to get downvoted, but you're right. AI doesn't need to be used for every part of the entire development process for it to be "made with the help of AI". There are certain parts of the workflow that I'm sure are already being done regularly with AI, for example commenting code.
Mindlessly feeding prompts into ChatGPT for the entirety of the core code or art would be terrible.
-
How do they reduce costs with AI if not by eliminating jobs?
-
BF3 was when I made my vow to stop supporting them for releasing unfinished buggy ass games.
What are these ass games you are talking about? I'm willing to look past them being buggy.
-
I don't think it would be easy to map free-form text to game behavior. Not just simple things like "make the NPC smile", but complex behavior like "this NPC will now go to this location and take this action". That seems like it would be very error-prone at best.
-
I don't think it would be easy to map free-form text to game behavior. Not just simple things like "make the NPC smile", but complex behavior like "this NPC will now go to this location and take this action". That seems like it would be very error-prone at best.
How do you think most game scripting engines work?
Nowadays game engines don't rely on hardcoded behaviour, strictly speaking, but rather are themselves just a scripting environment that executes a specific format of code.
Skyrim is still the perfect example, because it gives you the ability to do literally anything in the world via a scripting language.
Instructing NPCs to behave in a specific way is also done through these scripts. And LLMs - especially coding fine-tuned ones, which could be tied into the execution chain - can easily translate things like
<npc paces around> to specific instructions, so the NPC walks up and down at a specific distance, in a circle, or whatever you want it to do.
You're seriously overestimating the work it takes, even on crappy but modern engines, to get certain things to happen. Especially when it comes to things that are already dynamically scripted, like NPCs.
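A minimal sketch of that translation layer - the move_npc primitive is a made-up stand-in for whatever the engine's scripting API actually exposes:

```python
import math

def move_npc(npc: str, x: float, y: float) -> None:
    """Stand-in for an engine-side movement primitive."""
    print(f"{npc} -> ({x:.1f}, {y:.1f})")

# What "<npc paces around>" could compile down to: concrete waypoints,
# here a small circle around the NPC's current position.
def npc_paces_around(npc: str, cx: float, cy: float,
                     radius: float = 2.0, steps: int = 8) -> None:
    for i in range(steps):
        angle = 2 * math.pi * i / steps
        move_npc(npc, cx + radius * math.cos(angle), cy + radius * math.sin(angle))

npc_paces_around("innkeeper", cx=10.0, cy=4.0)
```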
-
How do they reduce costs with AI if not by eliminating jobs?
By improving the cadence of projects.
A project costs X because of the standard formula: pay per time unit (Y) multiplied by the timeframe in time units (Z).
Simply put, if you have 100 people working on the project, that costs 100Y per hour. If the project takes 6 months (approx. 960 working hours), you multiply the two and get costs of 96,000Y.
Now, the two ways to reduce this are to either reduce the number of employees - with AI you can get rid of maybe 2/3, reducing the expenses to 32,000Y...
Or, since AI speeds up almost every workflow by about 8 to 10 times, you can keep all the people but cut the project time from 6 months down to about 2 months, which doesn't just reduce the expenses by the same 2/3 but also increases potential profits for the same 6-month period by 200%, as instead of one product you're releasing three.
Cutting jobs ain't the only way to reduce costs with AI.
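The arithmetic above, as a quick back-of-the-envelope script (assuming 160 working hours per person per month):

```python
HOURS_PER_MONTH = 160  # assumed working hours per person per month

def project_cost(people: int, months: int) -> int:
    """Cost in units of Y, where Y is one person-hour of pay."""
    return people * months * HOURS_PER_MONTH

baseline     = project_cost(100, 6)  # 96,000Y
fewer_people = project_cost(33, 6)   # ~32,000Y: cut 2/3 of the staff
same_people  = project_cost(100, 2)  # 32,000Y: same team, 3 products per 6 months

print(baseline, fewer_people, same_people)
```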
-
since AI speeds up almost every workflow by about 8 to 10 times
Citation fucking needed.
-
This post did not contain any content.
I can't start boycotting a company that I've been boycotting for well over a decade.
-
I can't start boycotting a company that I've been boycotting for well over a decade.
Double boycott
-
What are these ass games you are talking about? Im willing to look past them being buggy.
I don't know what they're talking about. Amarillo's Butt Slapper isn't published by EA.
-
LLM-generated code is notoriously bad. Like, "call this function that doesn't exist" is common. Maybe a more specialized model would do better, but I don't think it would ever be completely reliable.
But even aside from that, it's not going to be able to map free-form user input to behavior that isn't already defined. If there's nothing written to handle "stand on the table and make a speech" or "climb over that wall", it's not going to be able to make the NPC do that, even if the player is telling them to.
But maybe you're more right than I am. I don't know; I don't do game development. I just find it hard to imagine it won't frequently run into situations where natural-language input demands stuff the engine doesn't know how to do.
-
since AI speeds up almost every workflow by about 8 to 10 times
Citation fucking needed.
My own fucking experience. Which I've already explained in detail above.
-
It's very obvious in this thread that you have hands-on experience and many others do not. 20+ years as a professional SWE here, a majority of it in applied ML/big data/etc. LLMs are really bad at many things, but specifically using them as a natural-language layer over NPC interactions would be relatively easy and honestly seems like a great use case.
-
LLM-generated code is notoriously bad. Like, "call this function that doesn't exist" is common. Maybe a more specialized model would do better, but I don't think it would ever be completely reliable.
But even aside from that, it's not going to be able to map free-form user input to behavior that isn't already defined. If there's nothing written to handle "stand on the table and make a speech" or "climb over that wall", it's not going to be able to make the NPC do that, even if the player is telling them to.
But maybe you're more right than I am. I don't know; I don't do game development. I just find it hard to imagine it won't frequently run into situations where natural-language input demands stuff the engine doesn't know how to do.
Okay, I won't even read past the first paragraph, because you're so incredibly wrong that it hurts.
First-generation LLMs were bad at writing long batches of code; today we're on the fourth (or, by some metrics, fifth) generation.
I've trained LLM agents on massive codebases that resulted in a <0.1% fault ratio on first pass. Besides, tool calling is a thing, but I guess if I started detailing how MCP servers work and how they can be utilised to ensure an LLM agent doesn't make incorrect calls, you'd come up with another 2-3-year-old argument that simply doesn't have a leg to stand on today.
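To illustrate the guardrail being described - validating a model's proposed tool call against a declared schema before anything executes - here's a minimal sketch; the tool names and schema format are hypothetical, not the actual MCP wire format:

```python
import json

# Hypothetical registry of tools exposed to the model: each declares its
# expected parameters so any proposed call can be checked before execution.
TOOLS = {
    "npc_move_to": {"npc_id": str, "x": float, "y": float},
    "npc_play_animation": {"npc_id": str, "animation": str},
}

def validate_call(raw: str) -> dict:
    """Reject calls that name unknown tools or carry malformed arguments."""
    call = json.loads(raw)
    schema = TOOLS.get(call.get("tool"))
    if schema is None:
        raise ValueError(f"unknown tool: {call.get('tool')!r}")
    args = call.get("args", {})
    if set(args) != set(schema):
        raise ValueError(f"wrong argument set: {sorted(args)}")
    for name, expected_type in schema.items():
        if not isinstance(args[name], expected_type):
            raise ValueError(f"{name!r} should be a {expected_type.__name__}")
    return call  # only now is the call handed on to the engine

# A hallucinated call like {"tool": "npc_fly"} never reaches the engine.
ok = validate_call('{"tool": "npc_move_to", "args": {"npc_id": "guard_01", "x": 12.0, "y": 3.5}}')
print(ok)
```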