Latest News

Date: 22/09/2025
Meituan released its efficient reasoning model LongCat-Flash-Thinking on September 22. According to Meituan, results on the AIME25 benchmark show that LongCat-Flash-Thinking achieves highly efficient agentic tool invocation, consuming 64.5% fewer tokens than the setting without tool invocation while maintaining 90% accuracy. LongCat-Flash-Thinking has now been fully open-sourced on HuggingFace and GitHub, and it can also be tried on the official website.