Kimi Open-Sources Multimodal Models Kimi-VL and Kimi-VL-Thinking
Jin10 Data, April 10 news: Moonshot AI, the company behind Kimi, today released Kimi-VL and Kimi-VL-Thinking, two open-source lightweight vision-language models. The new models adopt an MoE architecture, support a 128K context window, and activate only about 3 billion parameters, yet in multiple benchmark tests their multimodal reasoning outperforms models roughly ten times their size.