Alibaba Open-Sources New Qwen Model: A Dragon Boat Festival Gift!

QwenLong-L1-32B is the first long-context large reasoning model (LRM) trained with reinforcement learning (RL) specifically for long-context reasoning.

Experimental results on seven long-context document question-answering (DocQA) benchmarks show that QwenLong-L1-32B outperforms flagship LRMs such as OpenAI-o3-mini and Qwen3-235B-A22B, and performs on par with Claude-3.7-Sonnet-Thinking, placing it among the current state-of-the-art LRMs.


Model Address: https://huggingface.co/Tongyi-Zhiwen/QwenLong-L1-32B

Project Address: https://github.com/Tongyi-Zhiwen/QwenLong-L1

Dataset Address: https://huggingface.co/datasets/Tongyi-Zhiwen/DocQA-RL-1.6K
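Since the weights are published on Hugging Face, the model can be loaded with the standard `transformers` workflow. A minimal sketch follows; note that the DocQA prompt wording and generation settings below are my assumptions for illustration, not an officially documented recipe, and a 32B model requires substantial GPU memory.

```python
# Illustrative sketch: querying QwenLong-L1-32B via Hugging Face transformers.
# The prompt format is an assumption, not the official template.

def build_messages(document: str, question: str) -> list:
    """Pack a long document plus a question into a single chat turn."""
    prompt = (
        "Please read the following document and answer the question.\n\n"
        f"{document}\n\nQuestion: {question}"
    )
    return [{"role": "user", "content": prompt}]

def generate_answer(document: str, question: str, max_new_tokens: int = 1024) -> str:
    # transformers is imported lazily so the prompt helper above
    # stays usable without the (heavy) dependency installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Tongyi-Zhiwen/QwenLong-L1-32B"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )
    # Render the chat turn with the model's own chat template.
    text = tokenizer.apply_chat_template(
        build_messages(document, question),
        tokenize=False,
        add_generation_prompt=True,
    )
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(
        outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True
    )
```

Call `generate_answer(long_document, question)` with a document of up to the supported context length; for production serving, an inference engine such as vLLM would be the more typical choice.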

The significance of R1 really cannot be overstated!


Maximum supported context length: 120K tokens


Main Tag: Artificial Intelligence

Sub Tags: Large Language Models · Natural Language Processing · Machine Learning · Open Source

