# A Sora competitor has learned to generate videos with complex editing

Chinese developer Kuaishou has introduced the third version of its Kling AI video generation model.

🚀 Introducing the Kling 3.0 Model: Everyone a Director. It’s Time.

An all-in-one creative engine that enables truly native multimodal creation.

— Superb Consistency: Your characters and elements, always locked in.
— Flexible Video Production: Create 15s clips with precise… pic.twitter.com/CJBILOdMZs

— Kling AI (@Kling_ai) February 4, 2026

“Kling 3.0 is based on a deeply unified training platform, providing truly native multimodal input and output. Thanks to seamless audio integration and advanced consistency control, the model imbues the generated content with a stronger sense of life and coherence,” the announcement states.

The model combines several tasks: transforming text, images, and references into videos, adding or removing content, modifying and transforming clips.

Video length has increased to 15 seconds. Other improvements include more flexible shot management and precise prompt adherence. Overall realism has been enhanced: character movements are more expressive and dynamic.

Comparison of Kling VIDEO 3.0 with Kling VIDEO 2.6. Source: Kling AI.

The new Multi-Shot feature analyzes the prompt and determines the scene structure and shot types. The tool automatically adjusts camera angles and composition.

The model supports various editing solutions: from classic “shot-reverse shot” dialogues to parallel storytelling and scenes with voice-over.

“No more tedious cutting and editing — one generation is enough to produce a cinematic clip and make complex audiovisual forms accessible to all creators,” the announcement says.

Kling 3.0 is truly “one giant leap for AI video generation”! Check out this amazing mockumentary from Kling AI Creative Partner Simon Meyer! pic.twitter.com/Iyw919s6OJ

— Kling AI (@Kling_ai) February 5, 2026

In addition to standard image-based video generation, Kling 3.0 supports multiple reference images and video sources as scene elements.

The model captures characteristics of characters, objects, and episodes. Regardless of camera movement and plot development, key objects remain stable and consistent throughout the video.

The developers have improved native audio: the system synchronizes speech with facial expressions more accurately, and in dialogue scenes it allows manually specifying which character is speaking.

The list of supported languages has been expanded to include Chinese, English, Japanese, Korean, and Spanish. Dialects and accents are also better conveyed.

Additionally, the team has upgraded the O1 multimodal model to Video 3.0 Omni.

Source: Kling AI.

Users can upload a speech sample of at least three seconds to extract the voice, or a three-to-eight-second video of a character to capture its key features.

Sora's competitors are advancing

OpenAI introduced the Sora video generation model in February 2024. The tool caused excitement on social media, but the public release only happened in December.

Almost a year later, users gained access to text-to-video generation, image animation, and scene completion.

The Sora iOS app was released in September and immediately attracted attention, logging over 100,000 installs on the first day. The service surpassed 1 million downloads faster than ChatGPT did, despite being invite-only.

However, the trend soon shifted. In December, downloads decreased by 32% compared to the previous month. In January, the downward trend continued — the app was downloaded 1.2 million times.

Source: Appfigures.

The decline was caused by several factors. First, competition intensified with Google's Nano Banana model, which strengthened Gemini's position.

Sora also competes with Meta AI and its Vibes feature. In December, pressure on the market increased with startup Runway, whose Gen 4.5 model outperformed competitors in independent tests.

Second, OpenAI's product faced copyright infringement issues. Users created videos featuring popular characters such as SpongeBob and Pikachu, leading the company to tighten restrictions.

In December, the situation stabilized after an agreement with Disney, allowing users to generate videos with studio characters. However, this did not lead to an increase in downloads.

Recall that in October, deepfakes featuring Sam Altman flooded Sora.
