“I bring characters to life with computer brains,” says Starke. He tells Inverse that he fell headlong into a lifelong passion for games, one that led him to his job as an AI scientist at Electronic Arts (EA).
Motion capture is king in Starke’s world, but the technology he is developing could massively change the way video games are made. The more unique moves you try to model, the harder it becomes to pre-program all of their combinations by hand using motion capture.
“We don’t want to go into the motion capture lab and capture exponential variations of what you could be doing with your lower body, like walking or running, while doing other specific actions with your upper body,” Starke tells Inverse.

How it could soon work – In a research paper presented in August at the computer graphics conference SIGGRAPH 2021, Starke describes how machine learning could better synthesize these characters’ movements.
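The “exponential variations” problem Starke describes can be illustrated with a bit of arithmetic. The action names and counts below are hypothetical, chosen only to show why capturing every pairing scales multiplicatively while layering scales additively:

```python
# Illustrative arithmetic (not from the paper): capturing every
# upper-body action paired with every lower-body gait requires one
# clip per combination, while layering needs each part only once.
lower_body = ["walk", "run", "crouch", "strafe", "idle"]     # 5 gaits
upper_body = ["punch", "wave", "carry", "open_door", "aim"]  # 5 actions

captured_combinations = len(lower_body) * len(upper_body)  # 5 * 5 = 25 clips
layered_clips = len(lower_body) + len(upper_body)          # 5 + 5 = 10 clips

print(captured_combinations, layered_clips)
```

Add a sixth upper-body action and the captured count jumps by five clips, while the layered count grows by just one.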
How it works now – To bring a video game’s characters to life, actors dress in skin-tight motion capture suits covered with sensors. They laboriously act out cutscenes and perform perfect roundhouse kicks. It may look like the first step of development, but every punch, friendly gesture, and hug was planned and tagged by game developers much earlier.

It’s tedious work, and it’s becoming less and less feasible: as the fidelity of motion capture technology increases, so do file sizes. Capturing every possible combination of movements would be an impossible task anyway, and it would result in a video game that dwarfs current titles (EA’s FIFA is around 50 GB; Rockstar’s Red Dead Redemption 2 is an epic 150 GB).
For laypeople, neural animation layering essentially means merging two different animations so that the character performs them as a single movement. This lets game developers recombine or modify a character’s movements after the system has been trained with motion capture data. Could AI make most motion capture technology obsolete and consign those skin-tight, sensor-covered suits to the trash can of history? Perhaps not completely, but Starke’s technology could transform how motion-captured data is used – resulting in smaller file sizes but smoother, more natural character movements.
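The layering idea can be sketched in miniature: take the lower body from one animation and the upper body from another, joint by joint. This is a simplified, hypothetical sketch (the joint names and the hard override are placeholders; EA’s actual system blends poses with trained neural networks, not a lookup):

```python
# A minimal sketch of layering: build one pose by taking lower-body
# joints from a locomotion animation and upper-body joints from an
# action animation. Joint names and values are hypothetical.

def layer_poses(locomotion, action, upper_body_joints):
    """Keep locomotion joints, but override upper-body joints with the action."""
    blended = {}
    for joint, rotation in locomotion.items():
        if joint in upper_body_joints:
            blended[joint] = action[joint]  # take the action's pose
        else:
            blended[joint] = rotation       # keep the locomotion pose
    return blended

run_pose   = {"hips": 12.0, "knee_l": 45.0, "spine": 5.0,  "arm_r": 10.0}
punch_pose = {"hips": 0.0,  "knee_l": 0.0,  "spine": 15.0, "arm_r": 90.0}

combined = layer_poses(run_pose, punch_pose, {"spine", "arm_r"})
print(combined)  # legs from the run, torso and arm from the punch
```

In the real system, per-joint blend weights learned from data replace the all-or-nothing override, which is what produces the smooth transitions the article describes.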
Training their neural network on 20 hours of motion capture data taught the system to anticipate different movements – for example, a punch or a shuffle step – and to combine them into smoother animations. You can think of it like a soft-serve ice cream machine or a slot machine, where pulling a lever brings together one of many possible combinations of results (in this case, movements). The actions the team layers together to create a new hybrid animation aren’t random, though the way the AI combines them may be. “[Before,] if you want to add another action – such as being able to perform another action while jumping, open doors, or sit on chairs – you would have to retrain the whole thing,” says Starke. “There [was] no possibility of adding things incrementally.”
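The incremental-addition point can be made concrete with a toy sketch. Here each action is a separate module that plugs into the system later without touching what already exists; the class, function names, and lambda “animations” are all hypothetical stand-ins for the trained networks the paper actually uses:

```python
# Toy illustration of incremental layering: each action is a separate
# module registered independently, so adding one does not require
# rebuilding the others. Real layers would be neural networks.

class AnimationSystem:
    def __init__(self):
        self.layers = {}

    def add_layer(self, name, pose_fn):
        # New actions plug in without retraining existing layers.
        self.layers[name] = pose_fn

    def pose(self, name, t):
        # Evaluate one layer's pose at normalized time t in [0, 1].
        return self.layers[name](t)

system = AnimationSystem()
system.add_layer("jump", lambda t: {"hips_height": 1.0 + 0.5 * t})
# Later, add "open_door" without touching "jump":
system.add_layer("open_door", lambda t: {"arm_r": 90.0 * t})

print(system.pose("jump", 0.5))       # {'hips_height': 1.25}
print(system.pose("open_door", 1.0))  # {'arm_r': 90.0}
```

This is the contrast Starke draws: a monolithic model must be retrained for every new action, while a layered design lets actions accumulate.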
- Headline: EA’s new technology will bring the video games of the future to life like never before