AI can’t make good video game worlds yet, and it might never be able to

This is The Stepback, a weekly newsletter breaking down one essential story from the tech world. For more news about the video game industry’s pushback against generative AI, follow Jay Peters. The Stepback arrives in our subscribers’ inboxes at 8AM ET. Opt in for The Stepback here.

Long before the generative AI explosion, video game developers made games that could generate their own worlds. Think of titles like Minecraft or even the original 1980 Rogue that is the basis for the term “roguelike”; these games and many others create worlds on the fly with certain rules and parameters. Human developers painstakingly work to make sure the worlds their games can create are engaging to explore and filled with things to do, and at their best, these types of games can be replayable for years because of how the environments and experiences can feel novel every single time you play.
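The "rules and parameters" approach those developers use can be illustrated with a classic roguelike technique: carving a cave out of solid rock with a seeded random walk. This is a minimal, hypothetical sketch (the function name and parameters are my own, not taken from any particular game), but it shows the key property handcrafted procedural systems share: the same seed always reproduces the same world, while different seeds yield effectively endless variety within designer-chosen constraints.

```python
import random

def generate_cave(width, height, floor_target, seed):
    """Carve a cave with a seeded random walk.

    Starts from a solid grid of walls ('#') and walks a digger around,
    opening cells into floor ('.') until floor_target cells are carved.
    The same seed always produces the same map.
    """
    rng = random.Random(seed)
    grid = [["#"] * width for _ in range(height)]
    x, y = width // 2, height // 2  # start digging from the center
    carved = 0
    while carved < floor_target:
        if grid[y][x] == "#":
            grid[y][x] = "."
            carved += 1
        # step in a random cardinal direction, staying inside the border
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x = min(max(x + dx, 1), width - 2)
        y = min(max(y + dy, 1), height - 2)
    return ["".join(row) for row in grid]

cave = generate_cave(40, 12, floor_target=120, seed=2024)
print("\n".join(cave))
```

Real games layer many such systems (room placement, loot tables, enemy spawns) and then hand-tune the parameters for years, which is exactly the human judgment current generative models lack.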

But just as other creative industries are pushing back against an AI slop future, generative AI is coming for video games, too, though it may never catch up with the best of what humans can make now.

Generative AI in video games has become a lightning rod, with gamers getting mad about in-game slop and half of developers thinking that generative AI is bad for the industry.

Big video game companies are jumping into the murky waters of AI anyway. PUBG maker Krafton is turning into an “AI First” game company, EA is partnering with Stability AI for “transformative” game-making tools, and Ubisoft, as part of a major reorganization, has promised “accelerated investments behind player-facing Generative AI.” The CEO of Nexon, which owns the company that made last year’s mega-hit Arc Raiders, put it perhaps the most ominously: “I think it’s important to assume that every game company is now using AI.” (Some indie developers disagree.)

The bigger game companies often pitch their commitments as a way to streamline and assist with game development, which is getting increasingly expensive. But adoption of generative AI tools is a potential threat to jobs in an industry already infamous for waves of layoffs.

Last month, Google launched Project Genie, an “early research prototype” that lets users generate sandbox worlds using text or image prompts that they can explore for 60 seconds. Right now, the tool is only available in the US to people who subscribe to Google’s $249.99-per-month AI Ultra plan.

Project Genie is powered by Google’s Genie 3 AI world model, which the company pitches as a “key stepping stone on the path to AGI” that can enable “AI agents capable of reasoning, problem solving, and real-world actions,” and Google says the model’s potential uses go “well beyond gaming.” But it got a lot of attention in the industry: It was the first real indication of how generative AI tools could be used for video game development, just as tools like DALL-E and OpenAI’s Sora showed what might be possible with AI-generated images and video.

In my testing, Project Genie was barely able to generate even remotely interesting experiences. The “worlds” don’t let users do much except wander around using arrow keys. When the 60 seconds are over, you can’t do anything with what you generated except download a recording of what you did, meaning you can’t plug what you generated into a traditional video game engine.

Sure, Project Genie did let me generate terrible unauthorized Nintendo knockoffs (seemingly based on the online videos Genie 3 is trained on), which raised a lot of familiar concerns about copyright and AI tools. But they weren’t even in the same universe of quality as the worlds in a handcrafted Nintendo game. The worlds were silent, the physics were sloppy, and the environments felt rudimentary.

The day after Project Genie’s announcement, stock prices of some of the biggest video game companies, including Take-Two, Roblox, and Unity, took a dip. That resulted in a little damage control. Take-Two president Karl Slatoff, for example, pushed back strongly on Genie in an earnings call a few days later, arguing that Genie isn’t a threat to traditional games yet. “Genie is not a game engine,” he said, noting that technology like it “certainly doesn’t replace the creative process,” and that, to him, the tool looks more like “procedurally generated interactive video at this point.” (The stock prices ticked back up in the days after.)

Google will almost certainly continue improving its Genie world models and tools to generate interactive experiences. It’s unclear if it will want to improve the experiences as games or if it will instead focus on finding ways for Genie to assist with its aspirational march toward AGI.

However, other leaders of AI companies are already pushing for interactive AI experiences. xAI’s Elon Musk recently claimed that “real-time” and “high-quality” video games that are “customized to the individual” will be available “next year,” and in December, he said that building an “AI gaming studio” is a “major project” for xAI. (As with many of Musk’s claims, take his predictions and timelines with a grain of salt.) Meta’s Mark Zuckerberg, who is now pushing AI as the new social media after the company cut jobs in its metaverse group, envisions a future where people create a game from a prompt and share it with people in their feeds. Even Roblox, a gaming company, is pitching how creators will be able to use AI world models and prompts to generate and change in-game worlds in real time, something that it calls “real-time dreaming.”

But even in the most ambitious view, where AI technology is feasibly able to generate worlds that are as responsive and interesting to explore as a video game that runs locally on a home console, PC, or your smartphone, there’s a lot more that goes into making a video game than just creating a world. The best games have engaging gameplay, include interesting things to do, and feature original art, sound, writing, and characters. And it sometimes takes human developers years to make sure all of the elements work together just right.

AI technology isn’t yet ready to generate games, and anyone who thinks it might be is fooling themselves. But AI-generated video is still bad, and that didn’t stop companies from using it to make a bunch of bad Super Bowl ads, so tech companies are probably still going to put a lot of effort toward games made with generative AI. In an already unstable industry, even the idea that AI tools could rival what humans can make might have massive ramifications down the line.

But the complexity of games is different from AI video, which has improved considerably in a short period of time but has fewer variables to account for. AI game-making tools will almost certainly improve, but the results might never close the gap with what humans can make.

  • In a long X post, Unity CEO Matthew Bromberg argues that world models aren’t a risk, but a “powerful accelerator.”
  • While the video game industry probably shouldn’t feel threatened by AI world models just yet, generative AI tools will continue to be controversial in game development. Even Larian Studios, beloved for games like Baldur’s Gate 3, isn’t immune to backlash.
  • Steam requires that developers disclose when their games use generative AI to generate content, but in a recent change, developers don’t have to disclose if they used “AI powered tools” in their game development environments.
  • Some games, like the text-based Hidden Door and Amazon’s Snoop Dogg game on its Luna cloud gaming service, are embracing generative AI as a core aspect of the game.