Lightricks, the AI innovator behind Facetune, Videoleap, and LTX Studio, is making sure that AI-powered video generation keeps racing ahead. The company recently announced the release of LTX Video (LTXV 0.9), an open-source video generation model that will also soon be integrated into its cutting-edge AI film storyboarding app, LTX Studio.

LTXV brings new capabilities for text-to-video creation. The new model can produce five seconds of AI-generated video in just four seconds, generating content faster than real time. Users don't need to sacrifice quality for speed, and the model is effective even on consumer-grade hardware.

In the crowded world of AI video generation, LTXV stands out for speed, video quality, and open-source accessibility. With the release of the LTX Studio web app in March 2024, Lightricks opened up new possibilities for aspiring filmmakers, small studios, and anyone working on a limited budget. LTXV just turbo-charged those possibilities, in ways that extend well beyond the LTX Studio ecosystem.

 

Toppling Barriers to Creativity

 

Filmmakers and creators have already discovered the difference made by LTX Studio for AI video generation. Being able to quickly generate, tweak and refine storyboards allows teams to visualize and develop their creative concepts and obtain buy-in from stakeholders faster.

Now LTXV arrives to tear down barriers to creativity not just for filmmakers, but for stakeholders in fields as diverse as gaming, the metaverse, and academic research. The model produces high-resolution, hyper-realistic video faster than any other open-source model, without the need for specialized industrial-grade hardware.

“LTXV represents a new era of AI-generated video,” said Yaron Inger, CTO of Lightricks, in a statement. “The ability to generate videos faster than playing them opens the possibility for applications beyond content creation, like gaming and interactive experiences for shopping, learning or socializing. We’re excited to see how researchers and developers will build upon this foundational model.”

The LTXV model doesn’t require enterprise-grade hardware to deliver impressive results. Even consumer-grade GPUs can generate high-quality footage in real time, thanks to a Diffusion Transformer (DiT) architecture that ensures smooth motion and structural consistency between frames. What’s more, the model can scale to support video segments up to ten seconds long, opening up possibilities for more industries and use cases.
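
As a rough sketch of what running the open model could look like in practice, the snippet below loads the publicly released checkpoint through Hugging Face's diffusers library and renders a short clip on a single GPU. The repository ID, pipeline behavior, and generation parameters here are assumptions made for illustration; the released code and model card are the authoritative reference.

```python
# Minimal sketch: generating a short clip from a text prompt.
# Assumes the checkpoint is published as "Lightricks/LTX-Video" on Hugging Face
# and is loadable through a standard diffusers text-to-video pipeline;
# exact class names and arguments may differ in the released code.
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

pipe = DiffusionPipeline.from_pretrained(
    "Lightricks/LTX-Video",
    torch_dtype=torch.bfloat16,   # the precision the model is reported to use
)
pipe.to("cuda")                   # a single consumer GPU is the target here

video_frames = pipe(
    prompt="A slow dolly shot through a neon-lit alley at night, light rain",
    num_frames=121,               # roughly 5 seconds at 24 fps (illustrative values)
    num_inference_steps=40,
).frames[0]

export_to_video(video_frames, "ltxv_clip.mp4", fps=24)
```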

For example, gaming companies could upscale graphics to create more realistic playing experiences. Marketing agencies could produce thousands of ad variations for more targeted campaigns. Retail companies could develop personalized videos for consumers that deliver an immersive shopping experience. The only limit is your imagination.

Text-to-Video Is the New AI Playground

 

In the world of generative AI, video creation is one of the hottest focus areas.

Numerous text-to-video apps have been launched in the past year by companies ranging from tech giants to startups. Google, Microsoft, ByteDance, MiniMax, and PixVerse are just some of the players, large and small, that have hopped on the AI text-to-video bandwagon.

When it comes to AI video generation, performance depends heavily on the capabilities of the underlying AI model, although software and hardware also play key roles. So far, most apps are built on the same handful of AI models controlled by tech giants, primarily OpenAI, Stability AI, Adobe, and Google.

LTXV has two billion parameters and runs in bfloat16 precision, a format chosen to preserve visual quality without compromising speed or memory efficiency.
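
As a quick back-of-the-envelope check (an illustration based on those published figures, not an official benchmark), two billion parameters stored in bfloat16 come to roughly 4 GB of weights, which helps explain why the model fits comfortably within consumer GPU memory:

```python
# Rough memory estimate for the LTXV weights alone, based on the published
# figures of ~2B parameters stored in bfloat16 (2 bytes per parameter).
# Activations, the text encoder, and the video VAE add further overhead.
params = 2_000_000_000
bytes_per_param = 2  # bfloat16 = 16 bits
weights_gib = params * bytes_per_param / 1024**3
print(f"Approximate weight footprint: {weights_gib:.1f} GiB")  # ~3.7 GiB
```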

Google DeepMind is expected to launch Veo, an advanced video generation model that can produce high-quality videos lasting a minute or more. New text-to-video AI models are appearing in rapid succession, such as Runway’s Gen-3 Alpha and CogVideoX. The latter was built by Tsinghua University and Zhipu AI, and it is one of the few truly open-source AI video models in a largely proprietary field.

 

The Future of AI Video Creation

 

One of the most notable aspects of LTXV is the decision to go open source. Lightricks has released the model on both GitHub and Hugging Face for testing and feedback, ahead of a full launch under an OpenRAIL license so that derivatives remain open for academic and (separately licensed) commercial use.

The company’s CEO and co-founder, Zeev Farbman, freely acknowledges that this compromises LTXV’s status as a proprietary asset, but it’s a tradeoff he’s willing to make to help advance the wider AI-generated video ecosystem. “With many AI technologies becoming proprietary, we believe it’s time for an open-sourced video model that the global academic and developer community can build on and help shape the future of AI video,” said Farbman in a statement.

“We built Lightricks with a vision to push the boundaries of what’s possible in digital creativity to continue bridging that gap between imagination and creation – ultimately leading to LTXV, which will allow us to develop better products that address the needs of so many industries taking advantage of AI’s power.”

Farbman and the rest of the Lightricks team hope that releasing LTXV as open-source technology will lower barriers to use for small-scale creators, educators, and academics. At the same time, they expect it to encourage startups and entrepreneurs, who might otherwise be shut out by proprietary models, to develop innovative AI video generation apps.

 

LTXV Moves AI Video to the Next Level

 

With the release of LTXV, Lightricks opens up a new world of video content. The model’s high speed, impressive quality, and open-source licensing invite researchers and developers to push the envelope in building cutting-edge AI video models and apps. And because LTXV lets businesses across many industries take advantage of AI video, consumers should see richer, more engaging experiences as a result.




