SEBI’s AI Roadmap: Navigating Innovation While Protecting Investors

The rapid adoption of Artificial Intelligence (AI) and Machine Learning (ML) technologies by market participants has prompted SEBI1 to issue a consultation paper titled “Guidelines for Responsible Usage of AI/ML in Indian Securities Markets” on June 20, 2025 (the “Paper”). The move signals a proactive approach to regulating the evolving landscape of financial markets, aiming to harness the potential of AI/ML while safeguarding market integrity and investor interests.

The Paper highlights the immense potential of AI/ML to enhance market efficiency, streamline complex decision-making processes, and bolster regulatory investigations through advanced data analysis. However, it also flags the inherent risks of these powerful technologies: the scale, pace, and impact of AI/ML-driven decisions in financial markets mean that any misuse or malfunction could have significant consequences for market integrity and investor protection. The core objective of the Paper is therefore to establish guiding principles that balance AI/ML innovation with robust investor safeguards. Stakeholders may submit feedback until July 11, 2025.

Key Recommendations for Responsible AI/ML Adoption

The consultation paper sets forth a series of overarching principles to guide market participants in governing AI/ML applications within the securities market. The recommendations cover the following areas:

  • Model Governance: Market participants deploying AI/ML are expected to have skilled internal teams to provide effective human oversight. This includes ensuring robust governance frameworks, establishing fallback plans, and executing strong agreements with third-party vendors. Continuous monitoring, independent audits, and regular reporting of AI/ML model accuracy to SEBI are also stipulated requirements. 
  • Investor Protection and Disclosures: AI/ML models that directly interact with customers, such as those involved in algorithmic trading or advisory services, are mandated to furnish clear and easily understandable disclosures to their clients. These essential disclosures must encompass critical information including product features, the intended purpose of the AI/ML model, associated risks, the model’s accuracy, and any applicable fees. Moreover, the mechanisms established for addressing investor grievances related to AI/ML systems are required to fully comply with SEBI’s existing regulatory framework.
  • Testing Framework: Prior to deployment, AI/ML models are required to undergo rigorous testing in a segregated environment to ensure expected behavior under both stressed and unstressed market conditions. Comprehensive documentation of models, input, and output data for a minimum of five years is also mandated. A significant shift highlighted is the expectation of “shadow testing” with live traffic, indicating a move towards a lifecycle-oriented, real-time validation approach to keep pace with dynamic market conditions. The logic of AI/ML models must also be documented to ensure explainable, traceable, and repeatable outcomes. 
  • Fairness and Bias: Market participants must implement processes and controls to identify and eliminate biases in datasets, ensuring no favoritism or discrimination among client groups. While the Paper does not explicitly define “fairness,” it suggests that businesses may need to conduct fairness impact assessments as part of their AI governance framework. 
  • Data Privacy and Cybersecurity: Adherence to applicable data protection laws for the collection, use, and processing of personal investor data is mandatory. Prompt reporting of technical glitches or data breaches to SEBI and other relevant authorities is also required, signaling increased regulatory convergence with broader data protection and cybersecurity laws. 

Risks and Control Measures

The Paper presents a comprehensive checklist for managing anticipated threats from AI/ML applications, categorizing risks and outlining mitigation strategies for market participants.

  • Malicious Use: To combat fabricated financial statements, misleading news, or deepfake content that could destabilize the market, strategies include watermarking and provenance tracking, reporting suspicious activities, and educating investors about AI-generated misinformation risks.
  • Concentration Risks: To address systemic risks from over-reliance on limited Generative AI providers, SEBI recommends proactive monitoring of market concentration, diversifying service providers, and enhancing oversight of critical vendors and their AI tools.
  • Herding and Collusive Behaviour: To combat the risk of collective or coordinated actions from similar AI models or datasets, mitigation strategies involve promoting diversity in AI architectures and data sources, monitoring stock exchanges for herding, conducting regular algorithmic audits for collusive patterns, and deploying circuit breakers to manage AI/ML-amplified market volatility.
  • Lack of Explainability: To ensure transparency in AI systems, the Paper suggests mandating detailed documentation of AI processes, encouraging the use of interpretable AI models or explainability tools, and requiring human review of AI-generated outputs.
  • Model Failure / Runaway Behaviour: To counteract the potential for AI/ML flaws to cause financial instability, SEBI proposes robust measures. These involve rigorously stress-testing AI systems in extreme scenarios and implementing volatility controls such as kill switches and circuit breakers. The guidelines also emphasize human oversight to prevent over-reliance on AI systems and to establish clear accountability for AI-driven decisions.
  • Lack of Accountability and Regulatory Non-Compliance: To address regulatory infractions and investor losses stemming from unaccountable AI use, and to prevent liability being deflected onto the AI systems themselves, the Paper recommends a multi-pronged approach: testing AI tools in controlled regulatory sandboxes to preempt risks, training staff on AI-related compliance vulnerabilities to strengthen human oversight, and embedding “human-in-the-loop” or “human-around-the-loop” mechanisms so that human judgment and accountability remain integral to every AI-driven decision.

Conclusion

While India is still developing an overarching framework for AI governance, SEBI’s consultation paper positions the regulator as an early mover in establishing practical guardrails for AI/ML use in financial markets. By outlining expectations for testing AI/ML applications, ensuring fairness, and emphasizing human accountability, SEBI is laying crucial groundwork for responsible AI adoption in the financial sector, potentially paving the way for a broader, cross-sectoral legal framework in the future. The initiative reflects a thoughtful and proactive approach to managing the transformative impact of AI/ML, fostering innovation while ensuring robust investor protection and market integrity.

Citations

  1. Securities and Exchange Board of India 

Expositor(s): Adv. Archana Shukla