Society bears AI risks. It must have a say in AI governance
If governance remains a technocratic exercise behind closed doors, AI will deepen inequalities and weaken democratic oversight
By Sasmit Patra and Ashwin Upreti
As AI proliferates and increasingly shapes our lives, one question looms large: Who gets to govern artificial intelligence? The pace and scale of AI deployment have far outstripped the ability of traditional regulatory models to ensure oversight. Historically, technology governance has been state-centric, with standards set by regulators. The emergence of artificially intelligent systems, however, disrupts this tendency.
AI-enabled systems increasingly impact labour markets, education, healthcare, finance, and even democratic processes. While technical know-how resides in private firms, the risks and impacts are borne by the public at large. This is compounded by the ability of machine-learning systems to evolve post-deployment, complicating regulation for static legal frameworks. The cumulative outcome is a fragmented ecosystem where knowledge, regulation and risk are unevenly distributed.
A participatory approach to governance offers one pathway to correct this imbalance. Such an approach would enable the diverse community of end-users, such as citizens, civil society organisations, independent researchers and academia, to detect emerging harms that domain experts or developers overlook. This is particularly helpful when bias relates to linguistic or cultural groups, regional practices or local traditions. Participatory approaches allow models to account for experiential knowledge, enabling audits to bring contextual shortcomings to the surface. Community-led audits can further strengthen transparency and accountability by stress-testing systems under real-world conditions. However, a participatory approach to governance may only be effective when embedded in institutional design.
This requires us to reimagine governance structures. For oversight to be meaningful, AI governance cannot remain in silos of the state, the private sector or civil society organisations. Moreover, coordination across these sectors is contingent on infrastructure. Deliberative governance requires accessible reporting platforms and open datasets, and targeted literacy programmes are needed to lower barriers to engagement and democratise knowledge.
If governance remains a technocratic exercise behind closed doors, AI will deepen inequalities and weaken democratic oversight. If India invests in robust, institutionalised participatory mechanisms for AI audits and builds the infrastructure to drive responsible innovation, AI-enabled systems can be aligned with public values rather than narrow institutional priorities. The challenge is not whether to govern AI, but how to do so in a way that redistributes power equitably and secures public trust going forward.
Discourse around AI governance often treats opacity as a purely technical problem, locating the “black box” within proprietary algorithms and complex model architectures. While technical opacity is real, this framing is incomplete. There is also an emerging social black box. Decisions regarding which problems are worth automating, which datasets are deployed for training models, whose errors are tolerable and which harms are acceptable are not always made by algorithms. They are shaped by commercial incentives, strategic priorities and social considerations. So long as these upstream choices remain opaque, even transparent AI models may produce unjust outcomes. Participatory governance mechanisms carry the prospect of piercing this veil and opening AI systems to democratic oversight.
Patra is Member of Parliament, Rajya Sabha. Upreti is assistant director at Cyril Shroff Centre for AI, Law & Regulation