DreamActor Move 2.0 video model

Name: DreamActor
Version: 2
Variant: Move
Also Known As: DreamActor M-2
Creator: ByteDance

ByteDance DreamActor Move 2.0, also known as DreamActor M-2, is a new AI model from ByteDance that transfers human motion to bring still images to life. It was announced on January 24, 2026 and builds upon DreamActor-M1. The model reads a person's face, lips, hands, and body movements, then uses them to generate realistic videos.

The technology is designed to animate a static character from a single photo, using a video as a motion guide. That's useful for creating AI characters, but most current tools have two big problems: they rely on external pose models that don't always match up with the way the main system was trained, and they're built around human motion, so they don't work well outside it.

DreamActor M-2 tries to fix both. Instead of bolting on extra pose modules, it feeds the motion signal and the reference image into the model together. That way the original system stays untouched and can draw on what it already knows, and it also becomes easier to plug in and control, with no extra preprocessing steps.

The team has also taken it further: the model can now use full video frames as the driving signal instead of just pose data, trained on a custom-built video pipeline. In tests, the new version achieves top results in realism, control, and flexibility, making it a solid step toward better and more flexible AI video tools.
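The difference between the two conditioning styles described above can be sketched in a few lines. This is a hypothetical illustration, not ByteDance's actual code: the function names, the toy "patch token" representation, and the `pose_estimator`/`adapter` hooks are all assumptions made for clarity.

```python
# Hypothetical sketch of two ways to condition a video model on motion.
# All names and shapes here are illustrative, not DreamActor's real API.

def tokenize(frames, patch=4):
    """Flatten each frame (a 2-D pixel grid) into a flat list of patch tokens."""
    tokens = []
    for idx, frame in enumerate(frames):
        for row in range(0, len(frame), patch):
            for col in range(0, len(frame[0]), patch):
                tokens.append((idx, row, col))
    return tokens

def condition_with_pose_adapter(image, driving_frames, pose_estimator, adapter):
    """Older style: an external pose model plus an adapter bolted onto the
    backbone. The pose model is human-specific, and the adapter may not
    match how the backbone was originally trained."""
    poses = [pose_estimator(f) for f in driving_frames]
    return adapter(tokenize([image]), poses)

def condition_in_context(image, driving_frames):
    """Direct style: reference-image tokens and driving-frame tokens are
    concatenated into one sequence, so an unmodified backbone attends to
    both with no extra modules and no pose estimator."""
    return tokenize([image]) + tokenize(driving_frames)
```

In the direct style, any driving signal that can be tokenized (full video frames, not just human poses) conditions the model the same way, which is what makes the approach more flexible.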

Key Features
No performance evaluations available for this model yet.

DreamActor Move 2.0 Examples

Demo published by CapCut, generated on January 24, 2026.

Where To Find DreamActor Move 2.0

If you'd like to access this model, you can explore the following possibilities: