SteadyDancer came out in November 2025 from Nanjing University's Multimedia Computing Group. It's an AI model that turns a single photo of a person into a smooth dance video, borrowing the movement from another clip. The main focus is on keeping the person's appearance consistent from the very first frame and making the motion flow naturally.
The model weighs in at around 14 billion parameters. It takes one photo and one driving video as input, plus an optional text prompt, and is open-sourced under the Apache-2.0 license.
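To make that input/output contract concrete, here is a minimal sketch of what a single inference call could look like. The function name, arguments, and defaults are illustrative assumptions, not the project's actual API; check the official repository for the real entry point.

```python
# Hypothetical usage sketch -- names below are guesses, not SteadyDancer's real API.
from pathlib import Path

def animate_photo(
    reference_image: Path,           # the single photo whose identity should be preserved
    driving_video: Path,             # the clip that supplies the dance motion
    prompt: str | None = None,       # optional text prompt, per the model description
    output_path: Path = Path("result.mp4"),
) -> Path:
    """Illustrates the expected contract: one photo + one driving video
    (+ optional prompt) in, one animated video out."""
    # In practice, this is where the released pipeline/checkpoint would be loaded and run.
    raise NotImplementedError("Placeholder; wire up the released checkpoint here.")
```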
It sticks close to the original photo and is built to keep the face, clothes, and body shape consistent through the whole video. That addresses a common failure where the person's look slowly drifts away from the reference as the clip goes on.
It uses a dedicated conditioning setup so the animation follows the driving motion while still preserving the photo's appearance.
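The section above doesn't spell out the exact mechanism, but the general recipe such models follow can be sketched: condition the video generator on two signals at once, an appearance embedding from the reference photo and per-frame motion features from the driving clip. The toy module below is only an illustration of that idea under assumed feature dimensions, not SteadyDancer's actual architecture.

```python
# Generic illustration of dual conditioning (appearance + per-frame motion).
# Dimensions and layer choices are assumptions for the sketch, not the real model.
import torch
import torch.nn as nn

class DualConditioner(nn.Module):
    def __init__(self, appearance_dim: int = 512, motion_dim: int = 128, hidden: int = 512):
        super().__init__()
        self.appearance_proj = nn.Linear(appearance_dim, hidden)  # identity features from the photo
        self.motion_proj = nn.Linear(motion_dim, hidden)          # pose/motion features per frame
        self.fuse = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.SiLU(), nn.Linear(hidden, hidden)
        )

    def forward(self, appearance: torch.Tensor, motion: torch.Tensor) -> torch.Tensor:
        """appearance: (B, appearance_dim) from the reference photo.
        motion: (B, T, motion_dim), one feature vector per driving-video frame.
        Returns (B, T, hidden) conditioning tokens, one per output frame."""
        app = self.appearance_proj(appearance).unsqueeze(1)   # (B, 1, hidden)
        mot = self.motion_proj(motion)                        # (B, T, hidden)
        app = app.expand(-1, mot.shape[1], -1)                # broadcast identity to every frame
        return self.fuse(torch.cat([app, mot], dim=-1))       # fused per-frame condition

# Tiny smoke test with random features
cond = DualConditioner()
tokens = cond(torch.randn(2, 512), torch.randn(2, 16, 128))
print(tokens.shape)  # torch.Size([2, 16, 512])
```

The point of the sketch is just that the identity signal is injected into every frame's conditioning, which is one common way to keep the subject's look from drifting while the motion signal changes frame to frame.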
If you'd like to access this model, you can explore the following possibilities: