AI Speedrunning Tools Are Here and the Community Is Divided


The speedrunning community has a new debate on its hands, and it’s a good one. AI-powered tools that can analyse game code, find optimal routes, and even discover new glitches are now sophisticated enough to produce results that rival months of human research. The question is whether using them counts as legitimate speedrunning.

This isn’t a hypothetical. Several speedrun categories have already been affected, and the community response has been surprisingly thoughtful — more nuanced than the usual internet discourse, at least.

What the tools do

The most notable AI speedrunning tool, developed by a research group that includes an Australian programmer, uses reinforcement learning to play through games millions of times, testing different routes, sequences, and inputs. It’s similar to how DeepMind’s AlphaGo mastered Go — through massive-scale trial and error.
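To make the trial-and-error idea concrete, here is a minimal sketch of how reinforcement learning can stumble onto a sequence break. It is not the actual tool: the ToyLevel environment, its three inputs, and the reward values are invented stand-ins. But the loop is genuine tabular Q-learning, the agent is penalised per frame and rewarded for finishing, and the hidden skip on tile 7 tends to end up in the learned route simply because it is faster.

```python
# A minimal, hypothetical sketch: tabular Q-learning over a toy 1-D "level" in
# which one input, used on one specific tile, skips a whole section. The
# environment, action names, and reward values are invented for illustration;
# the real tools run a similar trial-and-error loop against an instrumented emulator.

import random

class ToyLevel:
    """A 30-tile level; the run ends when the player reaches or passes tile 30."""
    LENGTH = 30
    ACTIONS = ("walk", "jump", "dash")

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):
        if action == "jump":
            self.pos += 2
        elif action == "dash" and self.pos == 7:
            self.pos += 15           # the hidden sequence break: a big skip from tile 7
        else:
            self.pos += 1            # "walk", or "dash" anywhere else
        done = self.pos >= self.LENGTH
        reward = 100 if done else -1  # -1 per frame pushes the agent toward faster runs
        return self.pos, reward, done

def train(episodes=20000, alpha=0.1, gamma=0.99, epsilon=0.15):
    env = ToyLevel()
    q = {}  # (position, action) -> estimated return

    for _ in range(episodes):
        pos, done = env.reset(), False
        while not done:
            # Epsilon-greedy: mostly play the best-known input, occasionally explore.
            if random.random() < epsilon:
                action = random.choice(env.ACTIONS)
            else:
                action = max(env.ACTIONS, key=lambda a: q.get((pos, a), 0.0))
            nxt, reward, done = env.step(action)
            best_next = max(q.get((nxt, a), 0.0) for a in env.ACTIONS)
            target = reward + (0.0 if done else gamma * best_next)
            old = q.get((pos, action), 0.0)
            q[(pos, action)] = old + alpha * (target - old)
            pos = nxt
    return q

def best_route(q):
    """Greedy rollout of the learned values: the route the agent would actually run."""
    env, route = ToyLevel(), []
    pos, done = env.reset(), False
    while not done:
        action = max(env.ACTIONS, key=lambda a: q.get((pos, a), 0.0))
        route.append(action)
        pos, _, done = env.step(action)
    return route

if __name__ == "__main__":
    # With enough episodes, the learned route dashes on tile 7 to take the skip.
    print(best_route(train()))
```

The point of the toy is the shape of the search, not the numbers: nobody tells the agent the skip exists, it just gets rewarded for finishing sooner and keeps whatever inputs achieve that.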

The tool has already found previously unknown sequence breaks in two classic platformers and identified a routing optimisation in a popular modern speedrun title that saves approximately three seconds. Three seconds doesn’t sound like much, but in speedrunning, three seconds can be the difference between a world record and fifth place.

Another set of tools analyses game memory and code to identify potential glitches by finding edge cases in physics engines, collision detection, and state management. Human glitch hunters do this too, but the AI can test millions of scenarios per hour compared to a human’s dozens.
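As a rough illustration of what that throughput buys, here is a toy sweep, hypothetical from top to bottom: a deliberately naive physics step that only checks for a wall at the end of each frame, and a brute-force loop over finely spaced position and velocity pairs that flags every state which tunnels straight through it. The real tools work against actual game memory and far larger state spaces, which is exactly why raw testing speed matters.

```python
# A toy stand-in for automated glitch hunting: sweep a grid of edge-case
# positions and velocities through one step of a tunnelling-prone physics
# routine and flag states that pass clean through the wall. The physics
# function, the wall at x in [100, 101], and the step sizes are all hypothetical.

def physics_step(x, vx, dt=1.0):
    """Naive movement: collision is only checked at the endpoint of the frame."""
    new_x = x + vx * dt
    if 100.0 <= new_x <= 101.0:   # wall occupies x in [100, 101]
        new_x = 99.9              # push-back only fires if we LAND inside the wall
    return new_x

def hunt_clips():
    """Exhaustively test fine-grained (position, velocity) pairs near the wall."""
    clips = []
    x = 95.0
    while x < 100.0:
        vx = 0.0
        while vx < 20.0:
            if physics_step(x, vx) > 101.0:   # ended up on the far side: clipped through
                clips.append((round(x, 2), round(vx, 2)))
            vx += 0.25
        x += 0.25
    return clips

if __name__ == "__main__":
    found = hunt_clips()
    print(f"{len(found)} clip-through states found, e.g. {found[:3]}")
```

A human hunter pokes at a handful of these states by feel; a loop like this checks thousands in milliseconds, and the real tools do it against live game state rather than a toy function.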

The community response

The speedrunning community isn’t monolithic, but the major positions have emerged clearly.

The purists argue that speedrunning has always been about human skill, creativity, and dedication. Finding a new glitch through hundreds of hours of experimentation is part of what makes speedrunning meaningful. An AI that finds the same glitch in ten minutes undermines the achievement.

The pragmatists point out that speedrunners have always used external tools. Route planners, frame data calculators, and disassemblers are all standard parts of the speedrunning toolkit. AI is just the next tool in the progression.

The separatists suggest creating distinct categories. Human-discovered routes and AI-discovered routes could be tracked separately, allowing both approaches to coexist without directly competing.

The Australian speedrunning community — which is more active than most people realise, with strong scenes in Melbourne and Sydney — has generally leaned toward the separatist position. Interestingly, some of the technical work behind these tools uses the same reinforcement learning techniques applied to optimisation problems in other industries. The Oceanic Speedrunning Discord had a long and surprisingly civil discussion about this in December, with the consensus being that AI-discovered routes should be allowed but clearly labelled.

The deeper question

What’s really being debated isn’t the tools themselves. It’s what speedrunning is for.

If speedrunning is purely about completing a game as fast as possible, then the tools don’t matter — only the result does. If a new route exists, it exists regardless of how it was found. The human runner still needs to execute it, which requires the same mechanical skill as any other speedrun.

But if speedrunning is also about the process — the community effort of researching, discovering, and sharing knowledge — then how a route is found matters a lot. Part of what makes speedrunning compelling to watch and participate in is the story behind the record. The runner who spent six months looking for a skip, found it at 3am, and then executed the run that made it matter. AI removes that story.

I think both perspectives are valid, and the community will probably land on a middle ground. AI-discovered routes will be adopted into the main categories once human runners verify and execute them. The tools will be acknowledged as discovery aids, similar to how disassemblers are acknowledged now.

What this means for gaming AI generally

The speedrunning case is a microcosm of a larger question: what role should AI play in activities we value for their human elements?

Speedrunning is, at its core, a human performance. Watching someone execute a perfect run is thrilling because you know a person did it. The training, the failed attempts, the incremental improvements — these are human stories that AI can’t replicate.

But speedrunning is also a knowledge sport. Finding the optimal route requires research, experimentation, and analysis, and AI can grind through that experimentation at a scale no human can match. Refusing to use it won't make it not exist.

The smartest path is probably integration with transparency. Use the tools, credit the tools, and keep the human performance at the centre of what makes speedrunning matter. The runner still has to sit in the chair and hit every input. That’s the part no AI can do for you.