Blog

PromptCoT & PromptCoT-Mamba: Advancing the Frontiers of Reasoning

News

- May 30, 2025: PromptCoT-Mamba released! Introducing an attention-free foundation model for reasoning tasks.
- Apr 11, 2025: PromptCoT-QwQ-32B model and its training data released, achieving new state-of-the-art results.
- Mar 7, 2025: PromptCoT project launched, including the problem generation model, distilled models (PromptCoT-DS series), and associated datasets.

Overview

This repository unifies two synergistic projects aimed at advancing the frontiers of mathematical and code reasoning in Large Language Models (LLMs): PromptCoT and PromptCoT-Mamba....

April 1, 2025 · 4 min · 823 words · inclusionAI, Ant Group

Ring: A Reasoning MoE LLM Provided and Open-sourced by InclusionAI

🤗 Hugging Face | 🤖 ModelScope

News

- [2025-06]: 🎉 Add Ring-lite Model
- [2025-04]: 🎉 Add Ring-lite-linear-preview Model

Introduction

Ring is a reasoning MoE LLM provided and open-sourced by InclusionAI, derived from Ling. We introduce Ring-lite-distill-preview, which has 16.8 billion parameters with 2.75 billion activated parameters. This model demonstrates impressive reasoning performance compared to existing models in the industry.

Model Downloads

You can refer to the following table to see the various parameters for your use case....

April 1, 2025 · 2 min · 258 words · inclusionAI, Ant Group

Ming-lite-omni V1.5

GITHUB | 🤗 Hugging Face | 🤖 ModelScope

We are excited to introduce Ming-lite-omni V1.5, a comprehensive upgrade that significantly enhances the omni-modal capabilities of the original Ming-lite-omni model (find it on 🤗 Hugging Face). This new version delivers remarkable improvements across a wide range of tasks, including image and text understanding, document analysis, video comprehension, speech understanding and synthesis, as well as image generation and editing. Built on the Ling-lite-1.5 architecture, Ming-lite-omni V1....

January 18, 2025 · 13 min · 2746 words · inclusionAI, Ant Group