Animaj's research on AI-driven motion in-betweening has been accepted at SIGGRAPH 2026, one of the most selective peer-reviewed venues in computer graphics. The method is already running inside our production pipeline on Pocoyo and Maya the Bee, and is also being released as open source.
We sat down with Anton Raël, Lead Deep Learning Research Scientist at Animaj and co-author of the paper, to talk about what the work actually does, why it matters for animators, and what it took to get here.
SIGGRAPH is one of the most competitive venues in computer graphics. What went through your mind when you found out the paper was accepted?
It felt like a huge achievement. As a researcher, being able to share your work with peers is always rewarding. SIGGRAPH sets an incredibly high bar, so getting in feels like strong peer validation from researchers whose work I’ve been learning from for years. It also validates the direction we’re taking at Animaj, bridging the gap between research in 3D animation and the constraints of real production workflows.
For someone working in animation but not in deep learning, how would you explain what motion in-betweening is, and why it's a hard problem?

When animating a character, animators first create the main poses, called block poses. They then connect these key poses with intermediate poses to form a continuous motion. This phase is called motion in-betweening. It’s a critical step because it encodes the style and identity of the character. This is where animators shape anticipation, ease-in and ease-out, follow-through, timing, and exaggeration. These effects are difficult to reproduce with simple mathematical interpolation alone.
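To make that limitation concrete, here is a minimal Python sketch of the kind of predefined interpolation Anton is describing: a fixed ease-in/ease-out curve applied uniformly to every controller. The pose values below are illustrative, not taken from any production rig.

```python
import numpy as np

def lerp(pose_a: np.ndarray, pose_b: np.ndarray, t: float) -> np.ndarray:
    """Linear interpolation between two key poses (arrays of controller values)."""
    return (1.0 - t) * pose_a + t * pose_b

def ease_in_out(t: float) -> float:
    """Smoothstep easing: slow start, slow end. A typical built-in curve."""
    return t * t * (3.0 - 2.0 * t)

# Two key poses, five in-betweens. The same fixed curve is applied to every
# controller, regardless of what the character is actually doing, which is
# why anticipation or follow-through cannot emerge from this alone.
pose_a = np.array([0.0, 10.0, 0.0])   # e.g. three rig controller values
pose_b = np.array([5.0, 20.0, 90.0])
for i in range(1, 6):
    t = i / 6.0
    print(lerp(pose_a, pose_b, ease_in_out(t)))
```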
What makes your approach different from standard interpolation methods, like those used in Autodesk Maya today?
Standard interpolation methods use predefined mathematical functions to connect block key poses. They work for simple motion, but struggle with more complex or stylized animation. In practice, animators often need to manually add breakdown poses and tweak interpolation parameters for each motion, which is time-consuming. Our method is different because it is data-driven. The model learns from existing animation data for a given character. The core element, the Adaptive Interpolation-Synthesis (AIS) layer, switches between two modes: a synthesis mode that predicts complex breakdown poses, and an interpolation mode that predicts the most appropriate interpolation for the motion and the character.
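The paper defines the AIS layer precisely; purely as an illustration of the two-mode idea described above, here is a conceptual PyTorch sketch in which a learned gate blends an interpolated pose with a synthesized one. The network shapes and the gating mechanism are our own placeholder assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class AdaptiveInterpolationSynthesisSketch(nn.Module):
    """Conceptual sketch (not the paper's architecture): blend an interpolated
    pose with a synthesized pose using a learned per-frame gate."""

    def __init__(self, pose_dim: int, hidden: int = 128):
        super().__init__()
        # Synthesis branch: predicts a full breakdown pose from context.
        self.synth = nn.Sequential(
            nn.Linear(2 * pose_dim + 1, hidden), nn.ReLU(), nn.Linear(hidden, pose_dim)
        )
        # Gate: decides how much to trust synthesis vs. plain interpolation.
        self.gate = nn.Sequential(
            nn.Linear(2 * pose_dim + 1, hidden), nn.ReLU(), nn.Linear(hidden, 1), nn.Sigmoid()
        )

    def forward(self, key_a, key_b, t):
        # t: normalized time of the in-between frame, shape (batch, 1)
        ctx = torch.cat([key_a, key_b, t], dim=-1)
        interpolated = (1.0 - t) * key_a + t * key_b   # interpolation mode
        synthesized = self.synth(ctx)                  # synthesis mode
        g = self.gate(ctx)                             # learned switch in [0, 1]
        return g * synthesized + (1.0 - g) * interpolated

layer = AdaptiveInterpolationSynthesisSketch(pose_dim=3)
in_between = layer(torch.zeros(1, 3), torch.ones(1, 3), torch.tensor([[0.5]]))
```

In this toy formulation, the gate lets the layer fall back to plain interpolation wherever synthesis adds nothing, which echoes the stability point Anton makes below.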
Was there a specific moment where the AIS approach really clicked for you?
Yes. For a while, we were struggling with a synthesis-only approach, directly predicting poses, and it wasn’t working well. The idea came during a discussion over coffee. We realized that most poses are interpolations of surrounding key poses, so instead of predicting everything, we could predict how to interpolate. That shift unlocked the rest of the work and made the system much more stable.

The system is already used in Animaj's production pipeline for YouTube episodes. What does that look like in practice?
Our method is integrated directly into the 3D animation tools animators already use, such as Autodesk Maya or Blender. Animators define their block key poses, then call the model to generate the intermediate motion. The output is expressed as standard rig controller values, so it remains fully editable. The animator stays in control. The model removes a large part of the repetitive work.
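To illustrate the "fully editable output" point, here is a sketch of the integration pattern inside Maya's Python interpreter, where generated values are written back as standard keyframes with `cmds.setKeyframe`. The controller name and the `predict_inbetweens` helper are hypothetical stand-ins, not the actual open-source API.

```python
# Runs inside Maya's Python interpreter. `predict_inbetweens` is a placeholder
# standing in for the model call; it is NOT the released API.
import maya.cmds as cmds

def predict_inbetweens(start_value, end_value, num_frames):
    """Placeholder for the model: returns linear values so the sketch runs."""
    step = (end_value - start_value) / (num_frames + 1)
    return [start_value + step * (i + 1) for i in range(num_frames)]

def apply_inbetweens(plug, start_frame, end_frame):
    """Read the block keys, generate in-betweens, and write them back as
    ordinary keyframes so the animator can still edit everything."""
    start_value = cmds.getAttr(plug, time=start_frame)
    end_value = cmds.getAttr(plug, time=end_frame)
    num_frames = end_frame - start_frame - 1
    for i, value in enumerate(predict_inbetweens(start_value, end_value, num_frames)):
        cmds.setKeyframe(plug, time=start_frame + 1 + i, value=value)

# "arm_ctrl" is a hypothetical rig controller name.
apply_inbetweens("arm_ctrl.rotateZ", start_frame=10, end_frame=20)
```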
What actually changes for an animator using this tool, compared to a traditional workflow?
In a traditional workflow, animators rely on predefined interpolation functions and manually create many intermediate poses. With this tool, they can call a model that proposes interpolations and breakdowns specific to the character. The model has learned motion patterns from existing animation data, so it can generate more relevant in-betweens from the start. This frees up time to focus on creative decisions instead of mechanical adjustments.
You're open-sourcing the method and building a demo. Why was it important to make this work accessible?
We wanted to give the research community the ability to reproduce our results and build on the work. We also built on ideas from others, so it felt natural to contribute back. The animation industry is going through a shift, and tools like this should not stay locked inside one studio. They can help move the whole field forward.
The paper, “Adaptive Interpolation-Synthesis for Motion In-Betweening on Keyframe-Based Animation,” will be presented by Antoine Lhermitte (CTO) and Anton Raël at SIGGRAPH 2026 in Los Angeles (July 19–23) and published in ACM Transactions on Graphics.
The implementation and a preprint are available now on GitHub, Hugging Face, and arXiv.
