New Delhi: At the India AI Summit, a trio of domestic model launches signalled that India is no longer merely trying to keep pace in the global AI arms race, but is seeking to redefine what winning looks like. Rather than chasing ever-larger parameter counts, companies such as BharatGen, Sarvam AI and Gnani.ai are positioning their latest systems as solutions to a problem frontier models have yet to convincingly solve: linguistic and cultural depth in India.

Sarvam AI unveiled a 105-billion-parameter model tuned for Indic reasoning and translation, arguing that global systems often falter when faced with code-mixed text or nuanced local references. The company claims its model can even outperform global tech giants in optical character recognition (OCR) and multilingual speech in Indian languages, citing benchmark scores to back the claim.

BharatGen introduced a 17-billion-parameter multilingual Mixture-of-Experts model trained across Indian languages and designed for critical use cases spanning governance, education, healthcare, agriculture and enterprise solutions. Meanwhile, Gnani.ai launched a 5-billion-parameter voice-to-voice model built to handle heavy accents, background noise and Hindi-English blends common in everyday Indian speech. The company says it delivers this performance with greater efficiency than larger global models.

By contrast, leading systems from OpenAI and Google dominate international benchmarks and parameter counts, but are primarily trained on globally aggregated datasets in which English and Mandarin outweigh most low-resource languages. The result, Indian developers argue, is strong general reasoning paired with uneven contextual fluency on the ground. India’s bet is that data relevance may trump raw scale in a market of 1.4 billion people.
