SenseNova-MARS, an open-source project developed by SenseTime, is breaking through the ceiling of multimodal search and reasoning.
Today, SenseTime officially open-sourced its multimodal autonomous reasoning model SenseNova-MARS, which scored 69.74 on core multimodal search-and-reasoning benchmarks, surpassing Gemini-3-Pro and GPT-5.2. SenseNova-MARS is the first agentic VLM to support dynamic visual reasoning and deep fusion of image and text search. It can plan steps, call tools, and handle a variety of complex tasks on its own, giving AI genuine "execution capability".

