Author(s): Gopika Krishna
Paper Details: Volume 3, Issue 6
Citation: IJLSSS 3(6) 03
Page No: 28 – 34
ABSTRACT
Artificial Intelligence (AI) has rapidly moved from experimental technology to a core driver of economic growth, public governance and private decision-making. While India has embraced AI across sectors such as healthcare, finance, education and public administration, its legal framework has not kept pace with the speed and complexity of technological advancement. Unlike jurisdictions such as the European Union, which has introduced a comprehensive, risk-based regulatory model through the EU AI Act, India continues to rely on sector-specific guidelines, general data protection principles and policy documents that lack binding force. This creates visible legislative gaps, particularly in areas relating to accountability, transparency, bias detection, data protection, and liability for autonomous decision-making. As AI systems grow more sophisticated and opaque, the absence of a unified statutory framework heightens the risk of misuse, discrimination, privacy invasion and unchecked State or corporate power.
This paper argues that India’s regulatory approach must evolve from fragmented policy guidance to a structured, future-ready statute aligned with emerging global governance standards. The core objective is not to replicate foreign models but to adapt principles from global frameworks—such as the EU AI Act, OECD AI Principles, UNESCO’s ethical guidelines, and the United States’ NIST AI Risk Management Framework—into India’s socio-legal context. These international models emphasise transparency, human oversight, risk categorisation, explainability of automated decisions, and enforceable accountability mechanisms. Incorporating these standards would enable India to address current legislative deficiencies while ensuring innovation is not stifled.
India also faces unique challenges: a diverse population, deep digital divides, significant reliance on algorithmic governance in welfare distribution, and rapid adoption of AI by start-ups without consistent compliance capacities. A balanced regulatory model must therefore safeguard individual rights without slowing economic growth. The analysis recommends a hybrid governance system combining statutory obligations, regulatory sandboxes, algorithmic audits, and sector-specific rules under an overarching national AI law. The paper suggests that aligning Indian regulation with global best practices will not only enhance trust and the ethical use of AI but also strengthen India’s position in international digital trade and cross-border technological collaboration.
Ultimately, the study concludes that bridging India’s legislative gaps through globally informed standards is essential for building a safe, transparent and accountable AI ecosystem that supports both innovation and the protection of fundamental rights.
Keywords: Artificial Intelligence Regulation, Global Governance Standards, Legislative Gaps in India, AI Accountability, Risk-Based Frameworks, Ethical AI Governance, Data Protection and Transparency.
INTRODUCTION
Artificial Intelligence (AI) has moved beyond being a mere technological innovation to become a structural force shaping governance, public services, markets, and individual lives in India [1]. AI systems are increasingly used in predictive policing, welfare administration, banking fraud detection, digital health platforms, and e-governance systems [2]. Despite this rapid expansion, India still lacks a comprehensive, binding legislative framework dedicated exclusively to AI governance.
Currently, India relies heavily on policy documents such as NITI Aayog’s National Strategy for AI [3], ethical guidelines, and committee reports. Although these contain principles of responsible AI, they do not impose enforceable obligations on developers or deployers. At the same time, global jurisdictions such as the European Union and OECD member states are adopting structured and legally enforceable AI standards, widening the governance gap that India must urgently address [4].
This paper evaluates India’s regulatory vacuum and discusses how international frameworks can guide India in constructing a balanced, rights-based, and innovation-friendly AI law.
INDIA’S CURRENT AI REGULATORY LANDSCAPE
India does not yet have a single consolidated statute governing AI. Instead, AI regulation is scattered across the Digital Personal Data Protection Act, 2023 (DPDPA) [5], the Information Technology Act, 2000 [6], and several sector-specific guidelines issued by bodies such as the RBI [7]. None of these frameworks directly address risk classification, algorithmic transparency, or the accountability of AI systems.
The DPDPA, 2023 introduces the right to data protection but does not explicitly regulate automated decision-making, profiling, algorithmic audits, or bias mitigation [8]. Similarly, the IT Act’s intermediary liability rules were never designed for autonomous systems capable of generating content or making decisions without human supervision. Courts, too, have only begun grappling with technology-related rights, leaving AI-specific jurisprudence underdeveloped.
India’s AI governance is currently driven by soft law—strategy papers, advisory documents, and voluntary ethics frameworks issued by NITI Aayog, MeitY, and industry bodies. However, soft law lacks enforceability, leading to uncertainty for developers, businesses, and public authorities deploying AI.
MAJOR LEGISLATIVE GAPS
Several structural gaps exist in India’s present approach:
- Absence of risk-based classification, unlike the EU’s high-risk model for biometric surveillance, credit scoring, and essential public services [9].
- No statutory rights against automated decision-making, leaving individuals without remedies when algorithms determine loan approvals, welfare eligibility, or academic outcomes.
- No legally defined liability framework, making it unclear who is responsible when AI systems cause harm, discrimination, or inaccurate predictions.
- Weak transparency and accountability norms, with no mandatory algorithmic impact assessments or independent audits.
GLOBAL STANDARDS OF AI GOVERNANCE
India can learn from three major international governance models:
- EU AI Act
The EU AI Act is the world’s first comprehensive AI law. It classifies AI into unacceptable, high-risk, limited-risk, and minimal-risk systems and imposes strict obligations on high-risk uses such as biometric identification, credit scoring, and public sector decision-making [10]. The Act mandates documentation, human oversight, and conformity assessments before deployment.
- OECD AI Principles
The OECD framework promotes transparency, accountability, human-centric values, and robustness in AI systems [11]. Over 40 countries have adopted these principles, making them a global baseline for responsible AI governance.
- US Executive Approach
The United States adopts a sectoral, flexible model. The US Executive Order on Safe, Secure, and Trustworthy AI (2023) requires safety testing, red-teaming, watermarking, and responsible deployment of foundation models [12]. This approach supports innovation while providing safeguards.
WHY INDIA MUST ALIGN WITH GLOBAL STANDARDS
Failure to harmonize with global norms will place India at a disadvantage. Indian AI products may face barriers to entering regulated markets like the EU, multinational corporations may hesitate to invest due to uncertain compliance standards, and citizens may suffer privacy violations or discriminatory automated decisions [13].
Global alignment also supports India’s ambition to become an AI innovation hub while maintaining constitutional commitments to equality and due process.
BRIDGING INDIA’S LEGISLATIVE GAPS THROUGH GLOBAL STANDARDS
A hybrid model—combining global best practices with India’s constitutional and socio-economic context—can guide India’s AI law making. The following reforms are essential:
- Introduce a Risk-Based Classification System
India should adopt an EU-inspired model distinguishing high-risk systems in welfare, policing, healthcare, and education [14].
- Guarantee Rights Against Automated Decision-Making
Citizens should have legally enforceable rights to explanation, contestation, and human oversight for significant automated decisions. This aligns with constitutional principles under Articles 14 and 21.
- Mandate Algorithmic Impact Assessments
Before deploying high-risk AI systems, especially in government welfare schemes, authorities must undertake assessments evaluating accuracy, bias, data quality, and systemic harm [15].
- Codify Developer and Deployer Liability
Clear liability across the AI lifecycle would prevent regulatory ambiguity and protect individuals affected by harmful or discriminatory AI decisions.
- Strengthen Audit and Transparency Requirements
Regular third-party algorithmic audits, documentation of training datasets, and watermarking of AI-generated content should be mandatory.
- Create an Independent AI Regulatory Authority
A specialized authority—similar to the Data Protection Board—can monitor compliance, issue regulations, coordinate audits, and enforce penalties.
- Encourage Ethical Innovation
Regulatory sandboxes can support startups experimenting with AI under controlled environments, following the US model [16].
CONCLUSION
AI’s expansion is inevitable, but its governance must be deliberate, rights-based, and transparent. India’s current framework lacks enforceability, clarity, and accountability. By integrating global governance standards—particularly the EU AI Act, OECD principles, and US executive guidelines—India can craft a strong regulatory ecosystem that balances innovation with constitutional protections.
A coherent AI law will not only safeguard citizens but will also strengthen India’s global technological leadership in the coming decade.
[1] NITI Aayog, National Strategy for Artificial Intelligence (2018).
[2] Ministry of Electronics & Information Technology, National e-Governance Plan.
[3] NITI Aayog, Responsible AI for All (2021).
[4] OECD, Council Recommendation on Artificial Intelligence (2019).
[5] Digital Personal Data Protection Act, No. 22 of 2023.
[6] Information Technology Act, No. 21 of 2000.
[7] Reserve Bank of India, Guidelines on Digital Lending (2022).
[8] DPDPA, s. 2(5) (definition of processing; silence on automated decision-making).
[9] European Union, Artificial Intelligence Act, 2024 O.J. (L 342).
[10] EU, Artificial Intelligence Act, arts. 5–10.
[11] OECD, Council Recommendation on Artificial Intelligence, OECD/LEGAL/0449 (2019).
[12] Executive Order on Safe, Secure, and Trustworthy AI, The White House (2023).
[13] European Commission, Ethics Guidelines for Trustworthy AI (2019).
[14] EU, Artificial Intelligence Act, arts. 5–7.
[15] MeitY, India AI Mission Proposal (2023).
[16] Draft Digital India Act (2023), Government of India.
