Author(s): Ayush Kumar and Pali Pathak
Paper Details: Volume 4, Issue 1
Citation: IJLSSS 4(1) 42
Page No: 508 – 518
I. EXECUTIVE SUMMARY
The case of STAMP Capital exposes a serious problem at the intersection of fintech and investor protection. An AI-driven robo-advisor platform attracted more than twelve thousand novice traders and precipitated a major financial disaster by automating transactions without effective safeguards, adequate disclosure, or accountability.[1] The event exposed five key regulatory faults: there is no statutory definition of AI-based investment advice;[2] there is no codified digital consent standard for automated trading; the algorithms discriminated against women and older investors; responsibility was diffused among platform operators, brokerage partners, and third-party AI vendors; and there was no pre-deployment audit or supervisory reporting requirement.[3]
This paper proposes an Algorithmic Governance Framework for Securities Markets (AIGSM Framework), implementable through amendments to the SEBI (Investment Advisers) Regulations 2013, new SEBI circulars, and a dedicated AI Supervision Unit. The framework draws on the US SEC guidance on robo-advisors, the EU AI Act 2024, the IOSCO Final Report 2021 on AI in financial markets, and the FEAT Principles developed by Singapore.[4],[5],[6] It rests on five pillars: definitional architecture, informed consent standards, algorithmic fairness and bias remediation, liability identification and compensation, and audit and reporting. I propose a phased 30-month implementation plan so that responsible fintech innovation and investor protection advance together.
II. PROBLEM ANALYSIS: THE STAMP CAPITAL SCENARIO
The STAMP platform, developed by Arjun Mehta, provided investors with customised portfolio suggestions and executed trades automatically using machine-learning engines trained on historical price data, sentiment aggregates, and consumer spending data. Onboarding consisted of a digital consent form buried within the terms and conditions.[7] The risks of automated trading during periods of market volatility were not clearly disclosed. When a sudden interest-rate shock triggered simultaneous sell signals to thousands of investors, the automated system executed a cascade of transactions, destroying massive sums of money within hours.[8] The platform’s post hoc explanation was that the model had re-calibrated in response to market volatility.
The SEBI investigation uncovered further problems. Women investors were systematically assigned riskier portfolios than male counterparts with similar financial profiles, and older investors received lower-volatility allocations despite their stated comfort with moderate risk. These mismatches appear to have originated in bias embedded in the training data. No independent audit was conducted before the system was deployed. The contract with the third-party AI vendor did not even address liability in the event of model failure. Moreover, the backtesting omitted key real-world variables such as execution costs, slippage, and liquidity constraints.
These five failures map directly onto the five pillars of the AIGSM Framework discussed below.
III. THE EXISTING REGULATORY FRAMEWORK AND ITS GAPS
SEBI regulates investment advice through three principal instruments: the SEBI Act 1992 provides the investor-protection mandate;[9] the SEBI (Investment Advisers) Regulations 2013 govern suitability, know-your-client obligations, and conduct standards;[10] and the algorithmic trading circulars of 2021 apply solely to institutional high-frequency traders, leaving retail-facing AI platforms unaddressed.[11]
Five large regulatory gaps still remain.
- First, neither the SEBI Act nor the Investment Adviser Regulations defines AI-based investment advice or an automated decision-making system, so it remains unclear whether the ordinary suitability rules apply to these systems.
- Second, no regulation requires robust informed consent to real-time automated execution, distinct from ordinary permissions.
- Third, the SEBI rules do not address how algorithmic bias is to be detected or remediated.
- Fourth, the framework is silent on how responsibility is allocated when a multi-party AI supply chain fails.
- Fifth, systems need not be audited before launch, and there is no continuous reporting obligation for systems serving retail users.
The Digital Personal Data Protection Act 2023 imposes general duties on data fiduciaries but does not address AI disclosure in the investment context.[12] India has no general AI statute, so sector-specific SEBI regulation is the only realistic route to closing these gaps.
IV. COMPARATIVE REGULATORY ANALYSIS
The absence of a legal definition of AI-based investment advice enabled STAMP to escape full regulation even though it performed the same functions as a licensed adviser. The AIGSM Framework proposes introducing a technology-neutral definition into the SEBI (Investment Advisers) Regulations 2013: any guidance, recommendation, or portfolio generated wholly or in material part by an artificial intelligence system, machine learning algorithm, or automated decision-making process constitutes AI-Based Investment Advice.[13] The ‘material part’ condition prevents a platform from evading coverage by inserting token human review before the output is delivered to an investor.
‘Automated Decision-Making’ should cover any process in which an AI system generates, selects, or executes a decision that directly and materially affects the legal or financial interests of an investor, such as the execution of a trade, without the prior approval of a natural person employed by the regulated entity. This extends to third-party AI components: the platform operator remains accountable for their outputs, consistent with the IOSCO principle that regulated entities cannot outsource their responsibilities to technology providers.[14] SEBI should also introduce a tiering system: Tier 1 (fully automated, machine execution without human intervention before delivery), Tier 2 (human review before delivery, machine execution), Tier 3 (AI-assisted, human decision required before execution). Regulatory obligations such as audit, disclosure, and consent should scale with the degree of automation, with Tier 1 subject to the most stringent requirements.
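The tiering logic described above can be sketched in pseudocode-style Python. This is purely an illustrative reading of the proposed classification, not a SEBI specification; the function name and boolean flags are assumptions introduced for clarity. Note that classification turns on how the system actually operates, not on the operator’s own label, consistent with the substance-over-form principle discussed in Part VII.

```python
from enum import Enum


class Tier(Enum):
    TIER_1 = 1  # fully automated: machine generates and executes, no human step
    TIER_2 = 2  # human review before delivery to the investor; machine execution
    TIER_3 = 3  # AI-assisted: a human decision is required before execution


def classify(human_review_before_delivery: bool,
             human_decision_before_execution: bool) -> Tier:
    """Illustrative tier classification based on the system's actual
    operation (substance over form), not the operator's description."""
    if human_decision_before_execution:
        return Tier.TIER_3
    if human_review_before_delivery:
        return Tier.TIER_2
    return Tier.TIER_1


# On the facts of the STAMP scenario: no human review, no human decision.
print(classify(False, False))  # prints Tier.TIER_1 -- the strictest regime applies
```

On this reading, STAMP would fall squarely into Tier 1 and attract the full audit, disclosure, and consent obligations.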
V. THE AIGSM FRAMEWORK: FIVE PILLARS
PILLAR 1: DEFINITIONS OF ‘AI-BASED INVESTMENT ADVICE’ AND ‘AUTOMATED DECISION-MAKING’
The STAMP scenario demonstrates the acute danger of regulatory ambiguity at the threshold of a novel technology. When STAMP’s algorithm generated personalised portfolio recommendations and executed trades without human intervention, it was performing functions that are substantively identical to those of a licensed investment adviser – yet the platform exploited the absence of a statutory definition to resist clear regulatory characterisation.[15] A precise, technology-neutral statutory definition is therefore the foundational requirement of any AI governance framework. ‘AI-Based Investment Advice’ for the purposes of the SEBI (Investment Advisers) Regulations, 2013 and any instrument made thereunder should be defined as: any advice, recommendation, or signal relating to the buying, selling, or holding of securities, or the composition or rebalancing of an investment portfolio, that is generated, in whole or in material part, by an artificial intelligence system, machine learning algorithm, or automated decision-making process, regardless of whether such output is subsequently reviewed by a human adviser before delivery to an investor, and regardless of the label applied to such service by the provider.[16]
This definition directly addresses the consent ambiguity in the STAMP case. STAMP argued that investors had consented to automated execution, but the definition establishes that automated decision-making is a distinct and regulated activity – consent to which requires specific disclosure separate from general terms and conditions. The definition also extends to third-party AI components, closing the vendor accountability gap: where a third-party component performs automated decision-making within a platform’s architecture, the platform operator remains the regulated entity responsible for the component’s outputs.[17] This is consistent with IOSCO’s principle that regulated entities cannot outsource their regulatory obligations to technology vendors.[18]
PILLAR 2: VALID AND INFORMED CONSENT TO AUTOMATED TRADE EXECUTION
The STAMP case demonstrates the ineffectiveness of standard digital consent in practice. Under the AIGSM Framework, valid consent to automated execution must satisfy five cumulative principles. First, the Specificity Principle: consent must be obtained through a dedicated instrument, separate from the general terms and conditions, that expressly identifies the automated execution facility, its operating conditions, and the types of transactions it may effect.[19],[20] Second, the Risk Disclosure Principle: consent must include explicit disclosure of the specific risks of automated execution, including the scenarios that may trigger automated transactions. Third, the Comprehension Verification Principle: first-time retail investors must complete a brief comprehension check, modelled on the human oversight requirement in the EU AI Act and SEC guidance.[21],[22] Fourth, the Override and Opt-Out Principle: consent may be withdrawn at any time with effect from the next trading day, and automated execution is automatically suspended during declared market stress events. Fifth, the Periodic Renewal Principle: consent lapses after twelve months and must be affirmatively renewed, comparable to the time-limited processing consent provisions of the DPDPA.[23] Retail investors receive a 72-hour cooling-off period from the time of first consent.
PILLAR 3: FAIRNESS AND BIAS IDENTIFICATION AND REMEDIATION
STAMP’s systematic assignment of riskier portfolios to female investors and lower-volatility portfolios to older investors, despite their stated comfort with moderate risk, is a serious regulatory and constitutional failure.[24] It is an archetypal symptom of training data bias, reflecting historical socioeconomic patterns embedded in capital markets data. The AIGSM Framework mandates a pre-deployment Algorithmic Fairness Assessment (AFA), conducted by a SEBI-approved Algorithmic Audit Agency (AAA), that tests for statistically significant differences in outcomes across gender, age bracket, region, and income bracket.[25] Post-deployment, platforms must carry out quarterly bias monitoring, report material findings to SEBI within 48 hours, and disable any feature that produces more than a 10% differential in output between protected attribute categories. Finally, a three-phase remediation process, comprising root cause analysis, retraining under AAA supervision, and a post-remediation AFA, must be completed within 90 days, mirroring the training data governance provisions of the EU AI Act and the FEAT Fairness principle.[26]
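To make the 10% disparity trigger concrete, the following is a minimal sketch of how an auditor might screen model outputs for group-level disparities. It is an illustrative assumption, not a prescribed methodology: the function, the relative-gap metric, and the sample figures are all invented for demonstration, and a real AFA would add the statistical significance testing the pillar requires.

```python
from itertools import combinations


def disparity_check(outputs_by_group, threshold=0.10):
    """Flag any pair of protected-attribute groups whose mean model
    output (e.g. an assigned portfolio risk score) differs by more
    than `threshold` in relative terms -- the 10% trigger in Pillar 3.

    outputs_by_group: dict mapping group label -> list of numeric
    model outputs for investors in that group.
    Returns a list of (group_a, group_b, relative_gap) violations.
    """
    means = {g: sum(v) / len(v) for g, v in outputs_by_group.items()}
    violations = []
    for a, b in combinations(sorted(means), 2):
        base = max(abs(means[a]), abs(means[b]))
        if base == 0:
            continue  # both groups at zero: no meaningful relative gap
        gap = abs(means[a] - means[b]) / base
        if gap > threshold:
            violations.append((a, b, round(gap, 3)))
    return violations


# Hypothetical STAMP-like pattern: women assigned higher risk scores.
sample = {
    "female": [0.72, 0.70, 0.75, 0.71],
    "male":   [0.55, 0.58, 0.54, 0.57],
}
print(disparity_check(sample))  # -> [('female', 'male', 0.222)]
```

A gap of roughly 22% far exceeds the 10% threshold, so under the framework the offending feature would have to be disabled and the 90-day remediation process triggered.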
PILLAR 4: PRINCIPLES OF LIABILITY IDENTIFICATION AND COMPENSATION
The STAMP case is a textbook instance of liability diffusion: the platform, the broker, and the AI vendor each denied responsibility for the harm. The AIGSM Framework establishes a three-tier liability structure. First, primary liability rests with the platform operator for all harm caused by the platform’s AI architecture, including third-party components; fiduciary duties of care, loyalty, and best execution apply and cannot be delegated to technology providers or obscured behind an algorithmic black box.[27][28] Second, co-liability attaches to brokerage partners that have integrated the platform’s AI-driven execution facility into their own services: such partners are jointly and severally liable with the platform operator to the extent that their integration, onboarding, or settlement processes contributed to or failed to prevent the harm. Third, third-party AI vendors must satisfy a Vendor Compliance Obligation: mandatory SEBI registration and full technical disclosure of component specifications, training data, and known failure modes.[29] The compensation scheme rests on five principles: Make-Whole (restore the investor to the financial position held before the harm); Burden-Shifting (where a pattern of losses is linked to an algorithmic failure, the platform must show that it acted within authorised boundaries); Proportionality (systemic failures trigger SEBI-administered aggregate redress rather than individual claims); Speed (claims decided within 60 days, subject to 30-day extensions in exceptional circumstances); and Non-Exclusion (any contractual provision excluding liability for algorithmic failure is unenforceable).[30]
PILLAR 5: AUDIT AND REPORTING REQUIREMENTS
STAMP’s absence of pre-deployment audit, designated AI governance accountability, and formal risk assessment enabled all substantive harms to arise undetected. The AIGSM Framework requires: first, a mandatory pre-deployment audit by an AAA for all Tier 1 and Tier 2 systems, assessing model architecture, data quality, bias performance across protected attributes, historical stress testing, effectiveness of risk controls, and third-party component integrity;[31] second, periodic full re-audits at intervals not exceeding 18 months post-deployment, or within 60 days of any material model change, with all audit reports submitted to the SEBI AI Supervision Unit; third, designation of a Chief Risk Officer-equivalent AI Governance Officer, accountable to the board of directors and required to submit a quarterly AI Governance Report to SEBI; fourth, an annual public AI Transparency Report disclosing model methodology, data categories, performance metrics, AFA results, complaints received, and material model changes; fifth, incident notification to SEBI within four hours where losses exceed ₹10 lakhs or 50 accounts in a single trading session, with a full root cause analysis within 15 business days; and sixth, mandatory backtesting disclosure for all performance presentations, with non-disclosure constituting a misleading statement under section 15HA of the SEBI Act 1992.[32]
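The four-hour incident notification trigger in Pillar 5 is mechanical enough to express as a simple rule. The sketch below is an illustrative encoding only: the function name is invented, and whether the 50-account limb is strict or inclusive is an assumption (the text says losses or accounts must "exceed" the thresholds, so strict inequality is used here).

```python
def requires_incident_notification(session_loss_inr: float,
                                   affected_accounts: int) -> bool:
    """Pillar 5 trigger (illustrative): SEBI must be notified within
    four hours where losses exceed Rs 10 lakh or more than 50 accounts
    are affected in a single trading session."""
    LOSS_THRESHOLD_INR = 10 * 100_000  # Rs 10 lakh = 1,000,000 INR
    ACCOUNT_THRESHOLD = 50
    return (session_loss_inr > LOSS_THRESHOLD_INR
            or affected_accounts > ACCOUNT_THRESHOLD)
```

Because the limbs are disjunctive, a low-value failure touching many small accounts is reportable even when the aggregate loss stays under ₹10 lakh, which matters for platforms whose user base, like STAMP’s, is dominated by small first-time investors.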
VI. IMPLEMENTATION FRAMEWORK
The AIGSM Framework is to be implemented in three phases over 30 months. Phase 1 (Months 1–10): promulgation of definitional amendments to the SEBI (Investment Advisers) Regulations 2013; establishment of the AI Supervision Unit within SEBI; publication of AAA accreditation standards; and issuance of model consent instruments and audit checklist frameworks for industry guidance. Phase 2 (Months 11–20): mandatory pre-deployment audits for all new Tier 1 and Tier 2 platforms; launch of the AI Investor Grievance Redressal Mechanism; establishment of the Vendor Compliance Obligation registration scheme; and application of informed consent requirements to existing platforms with a 12-month grace period.[33] Phase 3 (Months 21–30): full operationalisation of the liability and compensation scheme; initiation of periodic audit cycles; publication of SEBI’s first Annual AI Governance Report; and activation of the RegTech automated reporting interface. A comprehensive review is to be conducted 36 months after full enforcement, with findings published in a formal review report.
VII. RISK ESTIMATION AND MITIGATION
The primary risk of the AIGSM Framework is over-regulation of early-stage fintech platforms. The pre-deployment audit requirement, in particular, may impose a significant financial burden on small operators. Three design features address this: the tiered classification system places the heaviest burden on Tier 1 systems; platforms with less than ₹10 crores of assets under advisory may self-certify under a simplified regime; and SEBI should partially subsidise audit fees for qualifying small platforms under its Regulatory Sandbox funding programme.[34]
The second risk is overlap with existing regimes: the RBI digital lending framework, the Digital Personal Data Protection Act 2023, and the Information Technology Act 2000. An inter-regulatory coordination committee comprising SEBI, RBI, and MeitY will address this, with a mandate to resolve conflicting requirements and issue joint guidance.[35]
The third risk is regulatory arbitrage, in which platforms deliberately misclassify themselves, for example by claiming their systems are Tier 3, to escape stricter compliance. SEBI’s implementing guidelines shall apply a substance-over-form principle: tier classification is determined by how the system actually functions, not by how the operator describes it.[36]
VIII. BUDGET CONSIDERATIONS
The AIGSM Framework requires an approximate outlay of ₹170-210 crores over the 30-month implementation horizon, allocated across five regulatory resource categories. The largest single allocation—approximately ₹60-75 crores—is for establishing a dedicated AI Supervision Unit within SEBI, encompassing the recruitment of machine learning, algorithmic finance, data science, and cybersecurity experts; the procurement of RegTech for automated regulatory reporting and real-time algorithmic monitoring; and the construction of secure data repositories for platform audit files, model documentation, and incident reports.
AAA accreditation infrastructure—covering syllabus development, examination, and ongoing quality control—is projected at ₹25-35 crores. Upgrading the SCORES platform to support the AI Investor Grievance Redressal Mechanism, including technical integration, dispute resolution staffing, and AI-specific complaint taxonomy, is estimated at ₹30-40 crores. Investor financial literacy campaigns, standardised disclosure templates, plain-language AI risk documentation, and industry compliance training will cost a combined ₹20-25 crores. The remaining ₹15-20 crores will fund inter-regulatory coordination with RBI and MeitY, annual framework reviews, and engagement with IOSCO working groups. Annual post-implementation recurrent expenditure is estimated at ₹40-50 crores. Funding sources should include SEBI’s existing regulatory fee revenues, an AI compliance levy on Tier 1 and Tier 2 platforms proportional to assets under advisory, and eligible grants under the Digital India programme.
IX. CONCLUSION
The STAMP Capital case is a consequential illustration of the cost of regulatory lag: twelve thousand retail investors, the majority of them first-time participants in India’s securities markets, suffered preventable financial harm because the applicable regulatory framework had not kept pace with the technology it was meant to govern.[37] The Framework that I propose through the AIGSM is intended to be a direct, principled, and internationally benchmarked response to the five regulatory lacunae that the case exposed. The five pillars—definitional architecture, informed consent, algorithmic fairness and bias remediation, clear liability attribution, and mandatory pre-deployment and ongoing audit—represent the minimum regulatory architecture that AI-powered investment advice in India requires. This framework does not impede innovation; it creates the conditions under which responsible innovation is sustainable, by replacing the current regime of indeterminate definitions, inadequate disclosure, embedded bias, fragmented liability, and absent audit oversight that has allowed platforms to deploy consequential AI systems against the most vulnerable participants in India’s securities markets.
The retail investor revolution in India is among the most disruptive economic trends of this decade, and I believe the AIGSM Framework can give that revolution the regulatory backbone it merits: one that lets innovation continue, holds the responsible parties accountable, and keeps the investor at the centre of every algorithm.[38]
X. REFERENCES
A. PRIMARY SOURCES — LEGISLATION AND REGULATIONS
- Constitution of India 1950, art 14.
- Consumer Protection Act 2019, s 47(1)(v).
- Digital Personal Data Protection Act 2023, ss 4, 6.
- Indian Contract Act 1872, s 14.
- Information Technology Act 2000, s 43A.
- Investment Advisers Act 1940 (US), s 206.
- SEBI Act 1992, ss 11(1)–(2), 12(1B), 12A, 15HA.
- SEBI (Investment Advisers) Regulations 2013, regs 2(1)(l), 16(b), 16–17.
- SEBI (Investment Advisers) (Amendment) Regulations 2020.
B. PRIMARY SOURCES — REGULATORY INSTRUMENTS AND CIRCULARS
- SEBI, ‘Circular on Algorithmic Trading by Retail Investors’ (SEBI/HO/MRD2/DCAP/CIR/P/2021/6, 13 January 2021).
- SEBI, Annual Report 2022–23, ch 3.
C. INTERNATIONAL INSTRUMENTS AND REPORTS
- European Parliament and Council, Regulation (EU) 2024/1689 on Artificial Intelligence [2024] OJ L 1689/1 (EU AI Act), arts 3(1), 6, 9, 10(2)(f).
- IOSCO, ‘Artificial Intelligence and Machine Learning in Financial Services’ (Final Report FR17/2021, October 2021) 8–15.
D. SECONDARY SOURCES — JOURNAL ARTICLES
- Sanjay Kumar, ‘Algorithmic Liability: Proposing a Framework for Automated Decision-Making in Indian Financial Markets’ (2023) 15(4) National Law School of India Review 112.
[1]SEBI Act 1992, s 11(1).
[2]SEBI (Investment Advisers) Regulations 2013, reg 2(1)(l).
[3]IOSCO, ‘Artificial Intelligence and Machine Learning in Financial Services’ (Final Report FR17/2021, October 2021) 12–15.
[4]European Parliament and Council, Regulation (EU) 2024/1689 on Artificial Intelligence [2024] OJ L 1689/1 (EU AI Act), art 6, Annex III.
[5]Monetary Authority of Singapore, ‘Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT)’ (November 2018) 4–6.
[6]SEC, ‘Commission Guidance Update: Robo-Advisers’ (Release No IA-4776, February 2017) 2–4.
[7]SEBI (Investment Advisers) Regulations 2013, regs 16–17.
[8]SEBI, ‘Circular on Algorithmic Trading by Retail Investors’ (SEBI/HO/MRD2/DCAP/CIR/P/2021/6, 13 January 2021).
[9]SEBI Act 1992, ss 11(1)–(2).
[10]SEBI (Investment Advisers) Regulations 2013, reg 2(1)(l).
[11]SEBI, ‘Circular on Algorithmic Trading by Retail Investors’ (SEBI/HO/MRD2/DCAP/CIR/P/2021/6, 13 January 2021).
[12]Digital Personal Data Protection Act 2023.
[13]EU AI Act (n 4), art 3(1).
[14]IOSCO (n 3) 14.
[15]SEBI (Investment Advisers) Regulations 2013, reg 2(1)(l); SEBI Act 1992, s 12(1B).
[16]EU AI Act (n 4), art 3(1); IOSCO (n 3) 8–10.
[17]Sanjay Kumar, ‘Algorithmic Liability: Proposing a Framework for Automated Decision-Making in Indian Financial Markets’ (2023) 15(4) National Law School of India Review 112, 120.
[18]IOSCO (n 3) 14.
[19]SEBI (Investment Advisers) Regulations 2013, reg 16(b).
[20]Indian Contract Act 1872, s 14.
[21]EU AI Act (n 4), art 14(4).
[22]SEC (n 6) 4.
[23]Digital Personal Data Protection Act 2023, s 6.
[24]Constitution of India 1950, art 14.
[25]EU AI Act (n 4), art 10(2)(f).
[26]Monetary Authority of Singapore (n 5) 8.
[27]SEBI Act 1992, s 12A.
[28]Investment Advisers Act 1940 (US), s 206.
[29]IOSCO (n 3) 14.
[30]Consumer Protection Act 2019, s 47(1)(v).
[31]EU AI Act (n 4), arts 9, 12.
[32]SEBI Act 1992, s 15HA.
[33]SEBI (Investment Advisers) (Amendment) Regulations 2020; SEBI, ‘Regulatory Sandbox Framework for Fintech’ (SEBI/HO/MRD1/DSAP/CIR/P/2020/234, 19 November 2020).
[34]SEBI, ‘Regulatory Sandbox Framework for Fintech’ (SEBI/HO/MRD1/DSAP/CIR/P/2020/234, 19 November 2020).
[35]Digital Personal Data Protection Act 2023; Information Technology Act 2000, s 43A.
[36]EU AI Act (n 4), recital 12.
[37]SEBI Act 1992, s 11(1)–(2).
[38]SEBI, Annual Report 2022–23, ch 3.
