
Runway’s ‘better and faster’ Gen-3 AI video model is coming in the ‘next few days’

AI Spotlight keeps you up to speed on the latest cool stuff in AI and tech. This week, we'll show you new tools, how businesses can use AI, and what's hot in the world of AI.

In today’s email:

  • Runway’s ‘better and faster’ Gen-3 AI video model is coming in the ‘next few days’

  • This AI model is learning to speak by watching videos — here's how

  • Luma Dream Machine AI video generator is getting a huge update

  • How to Use Midjourney's Latest 'Personalization' Feature

  • 📢 Top AI Tools of the Week

Runway’s ‘better and faster’ Gen-3 AI video model is coming in the ‘next few days’

Exciting news from Runway! The AI video platform is set to release its Gen-3 model in the next few days. This update promises significant improvements in fidelity, consistency, and motion, along with faster performance, according to Tom’s Guide.

Gen-2, launched last year, was the first commercially available text-to-video AI model and sparked a revolution in synthetic video. Now, Runway faces competition from Pika Labs, Haiper, Luma Labs, and the upcoming Sora. Gen-3 is a major upgrade, built from the ground up using new infrastructure designed for large-scale multimodal training. This model enhances realism by training on both images and videos simultaneously.

Anastasis Germanidis, Runway’s CTO and co-founder, revealed that the public will get access to the Alpha version in the coming days. Gen-3 features improved motion control, better photorealism, and longer clips: up to ten seconds, compared with Gen-2’s four. In internal tests, the model even handled drastic scene transitions surprisingly smoothly.

Stay tuned for this groundbreaking release, and subscribe to our newsletter for the latest in AI innovations!

This AI model is learning to speak by watching videos — here's how

Researchers from MIT, Microsoft, Oxford, and Google have developed an innovative AI model called DenseAV, which learns the meaning of words and the location of sounds without any human input or text. Instead, DenseAV uses self-supervision from videos to achieve this remarkable feat.

DenseAV employs audio-video contrastive learning to associate specific sounds with the observable world. Because the audio and visual branches learn independently, with no direct connection between them, the algorithm is forced to build genuinely meaningful representations of objects and words. By comparing pairs of audio and visual signals, DenseAV learns which signals matter and matches corresponding ones, so it can train without any labels.
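
To make the contrastive setup concrete, here is a minimal PyTorch sketch of InfoNCE-style audio-video contrastive learning. Everything in it (the function name, clip-level embeddings, the temperature value) is an illustrative assumption, not DenseAV's actual architecture or loss.

```python
# Minimal audio-video contrastive learning sketch (InfoNCE-style).
# Illustrative assumptions: clip-level embeddings from two separate
# encoders; this is NOT DenseAV's actual implementation.
import torch
import torch.nn.functional as F

def av_contrastive_loss(audio_emb, video_emb, temperature=0.07):
    """audio_emb, video_emb: (batch, dim) outputs of independent encoders."""
    a = F.normalize(audio_emb, dim=-1)           # unit-length audio features
    v = F.normalize(video_emb, dim=-1)           # unit-length visual features
    logits = a @ v.t() / temperature             # (batch, batch) similarity scores
    targets = torch.arange(a.size(0), device=a.device)  # true pairs on the diagonal
    # Symmetric loss: pull matching audio/video pairs together and push
    # mismatched pairs apart, in both directions.
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.t(), targets)) / 2

# Random features standing in for real encoder outputs:
loss = av_contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
```

The key design choice mirrors the article: the two encoders never see each other's inputs, so alignment can only emerge from sounds and sights that co-occur in the videos themselves.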

How Does DenseAV Work?

The inspiration for this process came to MIT PhD student Mark Hamilton while watching "March of the Penguins." A scene where a penguin falls and groans led him to realize the potential of using audio and video to learn language.

Hamilton aimed for DenseAV to learn language by predicting visual elements from audio cues and vice versa. For instance, if someone says, “grab that violin and start playing it,” the model learns to associate this phrase with images of a violin or a musician. This process was repeated across various videos, teaching the model to match audio with corresponding visuals.

Researchers then looked at which pixels the model attends to when it hears specific sounds. For example, on hearing the word “cat,” DenseAV highlights the cats in the video. They also tested whether it could tell someone saying the word “cat” apart from the sound of a cat meowing, and found that DenseAV developed a "two-sided brain": one side focused on language and the other on sounds, effectively learning the difference without human intervention.
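
As a rough illustration of that "which pixels light up" probe, here is a hedged sketch that scores every spatial location of a frame against a single audio feature (say, for the spoken word "cat") using cosine similarity. The shapes and names are assumptions made for illustration, not DenseAV's API.

```python
# Sound-to-pixel localization sketch: score each spatial location of a
# frame against one audio feature. Shapes are illustrative assumptions.
import torch
import torch.nn.functional as F

def localization_heatmap(visual_feats, audio_feat):
    """visual_feats: (dim, H, W) dense features for one frame.
    audio_feat: (dim,) feature for a sound, e.g. the spoken word "cat"."""
    d, h, w = visual_feats.shape
    v = F.normalize(visual_feats.reshape(d, h * w), dim=0)  # (dim, H*W)
    a = F.normalize(audio_feat, dim=0)                      # (dim,)
    return (a @ v).reshape(h, w)  # high values mark where the sound "lives"

heatmap = localization_heatmap(torch.randn(256, 14, 14), torch.randn(256))
```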

Why is This Useful?

DenseAV showcases an AI algorithm capable of understanding language and sound locations merely by watching unlabeled videos. This fully unsupervised model never sees text during training, highlighting a significant advancement in AI learning capabilities.


Learn more about DenseAV here.

Luma Dream Machine AI video generator is getting a huge update

Luma Labs launched its groundbreaking AI video platform, Dream Machine, last week and is already rolling out its first round of updates. These include the highly anticipated ability to extend a video clip by five seconds.

Dream Machine generates photorealistic video and accurate real-world motion, a capability previously seen only in the closed OpenAI Sora model and the Chinese Kling AI. Initial tests of Dream Machine yielded impressive results, though there was a 12-hour wait due to high demand. The platform required some UI improvements, which are now being addressed with the latest updates.

Key Updates

  • Clip Extension: Users can now extend a video clip by five seconds, a feature Luma Labs clearly prioritized. Each extension uses one of the monthly generations included with your subscription tier.

  • Enhanced Download Options: Users can more easily download their created videos, and Pro users can remove the watermark.

Competitors like Pika Labs and Runway have had clip extension capabilities from the start, but these often suffer from distortion in longer videos. Luma promises a different approach with more consistent results. The process is straightforward: click extend, provide a fresh motion prompt, and let the system do the rest.

Upcoming Features

  • Discovery Feature: A new discovery tool will soon allow users to explore various video concepts and ideas within the interface.

  • In-Video Editing: One of the most promising upcoming features is in-video editing. Users will be able to change backgrounds and foregrounds on the fly in any generated video. For instance, users could replace a character or change the video’s location.

How to Use Midjourney's Latest 'Personalization' Feature

Midjourney has unveiled a new feature that lets users generate personalized images via a unique code. The algorithm learns and adapts to your individual style by analyzing the images you've ranked or liked on the explore page of its Alpha website.

Note: To activate this feature, you must have ranked at least 200 pairs of images.

Here's a step-by-step guide on how to use it effectively:

  1. Access Midjourney: Navigate to the Midjourney website.

  2. Activate Personalization: Add --p to your prompt to initiate the personalization process.

  3. Receive Unique Code: Upon activation, you'll receive a unique personalization code tailored to your preferences.

  4. Adjust Strength: Customize the strength of personalization according to your preference by using --s values ranging from 0 to 1000. The default strength is set at 100, with 0 disabling the feature entirely and 1000 maximizing its effect.

  5. Example Prompt: Formulate your prompt by incorporating the personalization feature. For instance, "A dog running in a field --p --s 750".

  6. Additional Option: Alternatively, you can enable the feature directly from the settings section on the Alpha website.

By following these simple steps, you can harness the power of Midjourney's 'Personalization' feature to generate images that resonate with your unique style and preferences.
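
If you like to assemble prompts programmatically before pasting them into Midjourney, a tiny helper keeps the flags consistent. This is just string formatting on your side; the function below is hypothetical, and Midjourney itself is driven through its website or Discord, not through this code.

```python
# Illustrative (hypothetical) helper for composing a Midjourney prompt
# with the personalization (--p) and strength (--s) flags described above.
def personalized_prompt(text: str, strength: int = 100) -> str:
    if not 0 <= strength <= 1000:
        raise ValueError("--s must be between 0 and 1000")
    return f"{text} --p --s {strength}"

print(personalized_prompt("A dog running in a field", strength=750))
# -> A dog running in a field --p --s 750
```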

Top AI Tools of the Week

SimplyPut is an AI-driven data analytics platform that democratizes data access. Have trusted conversations with your database. Ask questions in plain language and get instant, trusted answers. No more waiting on reports—get real-time insights and uncover trends effortlessly. Seamlessly integrated and super user-friendly, SimplyPut makes data accessible for everyone at your company.

Struggling to engage with your leads? Create personalized, multi-channel conversations at scale across LinkedIn, Email, Voice, Calls & X. Focus on building 1:1 relationships with prospects and closing deals!

AI Tools for Your Business: A Practical Online Program 🎓

Master the most advanced AI tools to supercharge your processes, minimize costs, and explode your revenue. Whether you’re a business leader or an ambitious professional, this program will equip you with the skills and strategies to harness the full potential of AI.