Category: AI Efficiency
- Say "Wait" Less, Do More: NoWait Reshapes Large-Model Inference Paths
- Has "More Is Better" Failed? ModelSwitch Escapes the Sampling Black Hole and Rewrites the LLM Inference Paradigm
- Breaking the Chain-of-Thought Reasoning Bottleneck: "Soft Thinking" Gives LLMs Human-Like Abstraction with Fewer Tokens