LTX Video by Lightricks
LTX-Video, developed by Lightricks, is an advanced AI-driven model for generating and editing videos, offered free of charge when installed locally.
Overview
LTX-Video is Lightricks' open-source video model (with public code and model weights), available both on their web-based platform and natively in ComfyUI. It is free if run locally.
The model is open source, but its checkpoints ship under different licenses: some permit commercial use while others do not, so check the license for each checkpoint before using it commercially.
LTXV can generate high-quality video faster than real time: on a strong GPU such as an NVIDIA H100, it can produce five seconds of 24 fps footage in under two seconds.
It uses a coarse-to-fine approach to achieve this speed: it first generates low-resolution frames that capture rough motion, then progressively refines the detail. This keeps generation fast while the output stays sharp.
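The coarse-to-fine idea can be illustrated with a toy sketch. This is purely illustrative (random noise standing in for latents, nearest-neighbour upsampling), not Lightricks' actual pipeline:

```python
import numpy as np

def toy_coarse_to_fine(num_frames=24, base_res=8, passes=3, seed=0):
    """Illustrative coarse-to-fine generation: start with low-res
    'motion' frames, then repeatedly upsample and add finer detail.
    This mimics the layered idea only; it is not the real model."""
    rng = np.random.default_rng(seed)
    # Pass 0: coarse frames capture only rough motion.
    frames = rng.normal(size=(num_frames, base_res, base_res))
    for p in range(1, passes + 1):
        # Upsample 2x by pixel repetition (nearest neighbour).
        frames = frames.repeat(2, axis=1).repeat(2, axis=2)
        # Add progressively smaller detail at each finer scale.
        frames = frames + rng.normal(scale=1.0 / 2 ** p, size=frames.shape)
    return frames

video = toy_coarse_to_fine()
print(video.shape)  # (24, 64, 64): resolution grows 8 -> 16 -> 32 -> 64
```

Most of the compute in such a scheme is spent at the cheapest (lowest) resolution, which is why layered refinement can be so much faster than generating every frame at full resolution from the start.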
You can start from text, images, or even video clips. It also lets you extend videos forward or backward in time and add keyframe animation.
Extra controls give you more say. Use pose, depth, edge maps, camera moves, and keyframes. You can even fine-tune style or motion with LoRA or outpaint scenes.
LTXV started out making short clips but can now generate up to 60 seconds. It appends new segments while the video plays back, so you can direct it in real time.
Despite its 13B parameters, it runs well on consumer GPUs such as the RTX 4090, and quantized variants fit in roughly 8 GB of VRAM.
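A back-of-the-envelope calculation shows why quantization matters for the 13B model (weights only, ignoring activations, the VAE, and the text encoder):

```python
def weight_footprint_gb(params_billions, bytes_per_param):
    """Approximate memory needed just to hold the model weights."""
    return params_billions * 1e9 * bytes_per_param / 1024 ** 3

# Typical precisions: full half-precision, 8-bit, and ~4-bit GGUF quants.
for label, bpp in [("fp16/bf16", 2), ("int8", 1), ("4-bit quant", 0.5)]:
    print(f"13B @ {label}: {weight_footprint_gb(13, bpp):.1f} GB")
```

At fp16/bf16 the weights alone are about 24 GB (filling an RTX 4090), while a 4-bit quantization brings them down to roughly 6 GB, which is what makes the ~8 GB VRAM figure plausible.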
Tags
Freeware · Apache License 2.0 · PC-based · #Video & Animation
Users are impressed by Lightricks' ability to generate high-quality results quickly, with reports of five-second videos created in just two seconds on powerful hardware like the H100. The model relies on a high VAE compression rate, making it scalable without compromising quality.
Community members are testing it on various GPUs, with some achieving solid results even on lower-end hardware. While the model runs well on Nvidia cards, Mac users report mixed experiences. Some praise its efficiency, while others struggle with noise issues.
Overall, Lightricks’ model is seen as a strong contender in the AI video space, though some remain skeptical of marketing claims versus real-world performance.
[ Reddit ]
Lightricks has introduced new AI video features focused on extending videos and keyframe-based interpolation: users can now condition video generation on specific images or short video segments, controlling motion and frame transitions. The update is already supported in ComfyUI, but results so far are mixed, with some users reporting impressive speed but inconsistent quality.
Some testers praise the model’s efficiency, while others find results glitchy, especially for complex human motion. A few users note that shorter, simpler prompts yield better outputs. Keyframe-based interpolation excites many, but skepticism remains over whether it can truly maintain character and scene consistency.
With alternatives like Wan2.1 and the upcoming HunYuan model, comparisons are inevitable. LTX is fast and lightweight, but many say it struggles with quality compared to competitors. Some users are working on fine-tuning and workflow optimizations, hoping to improve stability. Despite mixed reviews, the addition of keyframe conditioning is seen as a step toward more flexible AI-generated video.
[ Reddit ]
The buzz around LTXVideo 0.9.6 Distilled is huge. Users are impressed by how fast it runs and how solid the results look, with many reporting good video frames in just seconds, even on mid-tier GPUs like the RTX 3060. The model needs only 8 diffusion steps, so it produces usable output quickly without the compute cost of earlier versions.
People like the official ComfyUI setup, especially when paired with a prompt-helper node such as ChatGPT; that combination seems to give better output. Some still feel lost opening ComfyUI for the first time, but being able to grab and remix shared workflows lowers the barrier. The 0.9.6 update also added a smarter guidance node and a stochastic inference technique that helps with quality.
There is also curiosity about the full-size version of the model: it is the same file size but may look even better if your GPU has the headroom. A few people note that, despite the speed, you may still get visual artifacts in places, so it is not flawless.
All in all, this version feels like a solid step forward and may become the go-to for quick, high-quality video runs.
[ Reddit ]
Example generations (videos not included) were posted between April 19 and July 19, 2025.
Latest LTX Video by Lightricks News
July 19, 2025
LTX Video 0.9.8 is here:
- two new distilled checkpoints (2B and 13B), plus an IC LoRA for detail enhancement
- improved prompt adherence and detail generation
- blazing fast
May 15, 2025
LTXV-13b-0.9.7-distilled GGUFs are here. Run the latest LTX Video on limited consumer GPU power in ComfyUI.
May 7, 2025
LTX-Video 13B is out. The open-source video generation model features 13 billion parameters, multiscale rendering for enhanced fine details, improved motion and scene understanding, and support for keyframes, camera and character movement, and multi-shot sequences. It remains optimized for local GPU use.
April 18, 2025
The LTXVideo 0.9.6 update runs fast even on mid-tier GPUs like the RTX 3060, and adds a smarter guidance node plus a stochastic inference technique that improves quality.
Useful Links
LTXV-13b-0.9.7-distilled-GGUFs
Quantized version for GPU-constrained setups. The model files can be used in ComfyUI with the ComfyUI-GGUF custom node.
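Installing the GGUF loader node is typically a clone into ComfyUI's custom_nodes folder. A minimal sketch, assuming the commonly used city96/ComfyUI-GGUF repository and default ComfyUI paths (check the node's README for specifics):

```shell
# From your ComfyUI root directory (path assumed).
cd ComfyUI/custom_nodes
git clone https://github.com/city96/ComfyUI-GGUF
pip install -r ComfyUI-GGUF/requirements.txt
# Place the downloaded .gguf checkpoint under ComfyUI/models/unet/,
# restart ComfyUI, and load it with the GGUF Unet loader node.
```

Lower-bit quants (e.g. Q4 variants) trade some quality for a smaller VRAM footprint, so pick the largest quant that fits your GPU.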
This page was last updated on August 21, 2025 at 11:55 PM