New Delhi: In a push to strengthen India's advanced AI capabilities, homegrown startup Sarvam launched a 105-billion-parameter foundational large language model (LLM), along with a suite of tools designed for commercial use. Co-founder Vivek Raghavan, in an interview, discussed the company's progress in Indic languages and its expansion into AI-powered devices. Excerpts:

Sarvam's AI model places it in the frontier category. What differentiates it from global models?

This is the largest model trained from scratch in India, with zero external data dependency and a strong grounding in Indian knowledge. While it is a global model, it is built with Indian contexts in mind. Models like Gemini or ChatGPT are still an order of magnitude larger, so we are not claiming parity in scale. However, being smaller makes our models more efficient and cost-effective. For most real-world and agentic use cases, models of this size deliver excellent results without requiring extreme scale.

How deeply is it trained in Indic languages… Can it outperform global models in low-resource languages?

We are far more focused on Indian languages than most global labs. Among models of comparable size, we are superior in Indian languages. It is not fair to compare against systems tens of times larger, but within our size category, we are stronger. We believe Indians will experience AI primarily through voice. In speech recognition across Indian languages and dialects, we believe we are the best in the world. Our newer models are world-class in natural speech synthesis. We also released a small vision model that outperforms much larger systems at extracting Indic scripts from documents and images. Among similarly sized LLMs, we are equivalent to or better than most. For instance, we outperformed a DeepSeek model released last year, and even compared favourably with a version six times larger. Our goal is to lead globally within our size class, especially in Indian language and domain-specific contexts.

Was the model trained entirely on domestic infra? How will you make inference affordable at scale?

Yes, it was trained entirely in India under the AI mission, using concessional GPUs, with no external data dependency. Inference is a separate challenge. Training does not guarantee adoption. We will enable access to our models, but competing in pure B2C is difficult when global players offer free services after spending billions. That is a structural reality.

You're moving beyond traditional mobile platforms. What's the strategy?

AI will change interfaces. We see smart glasses as business devices. Feature phone integrations are about inclusion. We also aim to run small models directly on devices.
