The use of artificial intelligence and machine learning (collectively, “AI”) in the securities market has given rise to both significant opportunities and concerns. The Securities and Exchange Board of India (“SEBI”) recognises this and has been discussing the use of AI since 2016. SEBI first prepared an inventory of the AI tools used in the Indian securities market and later shifted accountability for the use of AI (where it operates without human intervention) to intermediaries.
More recently, SEBI published a consultation paper on June 20, 2025, through which it proposes to regulate the use of AI in the securities market. This note summarises SEBI’s proposal and the practical steps that SEBI intermediaries must take to prepare for compliance.
The consultation paper proposes specific guidelines with respect to the use and deployment of AI models by intermediaries. These include:
Testing: The consultation paper expects intermediaries to test and monitor AI models on a continuous basis, in segregated environments that replicate market conditions. More specifically, AI models must be shadow-tested with live traffic data before deployment.
Designate resources and increase awareness: Intermediaries should start putting together a team to oversee the overall functioning of AI throughout its lifecycle, led by a member of senior management. Risks associated with AI must be flagged as part of the organisation’s training programme.
Security Measures: To mitigate security risks around AI, SEBI proposes to require (a) implementation of appropriate risk control measures, (b) setting up processes to address disruptions and downtimes, and (c) conducting independent audits of AI models. The consultation paper also suggests watermarking to help clients verify the authenticity of content.
Transparency: While some intermediaries already need to disclose to their clients the extent of use of AI, intermediaries must now gear up to prepare detailed yet user-friendly disclosures where they use AI that may impact clients. This includes details regarding the risks involved, limitations and accuracy rates of the AI model, quality of data that the model relies on to make decisions, and fees that may be charged for using the product.
Bias: Intermediaries ought to start evaluating whether algorithmic bias, data errors, misleading outputs, and other black box issues have crept in, as SEBI has emphasised that AI models must be fair and have adequate controls in place to detect and remove inherent biases.
Though the consultation paper does not prescribe specific measures to mitigate instances of bias, in our experience, conducting regular testing and audits, maintaining periodic human oversight, and monitoring the AI’s training modules go a long way toward mitigating these issues – all of which are, in some form, already required under the consultation paper.
Liability and Contractual Arrangements: Intermediaries remain accountable for AI, even if it is outsourced. Intermediaries must therefore carefully structure their contracts for procurement or development of AI to mitigate potential risks and liabilities. For instance, in addition to outsourcing obligations that are typical in the securities market, an intermediary ought to consider including the following provisions in its contracts:
Record Maintenance: In line with other record-storage requirements, the consultation paper expects intermediaries to maintain all relevant documentation with respect to the AI model and any input and output data associated with the model for a minimum period of 5 years. Such documentation should also provide details on the logic used by the model to make decisions.
Oversight: SEBI expects intermediaries to periodically submit accuracy results and audit reports with respect to their AI models; we presume these will accompany the cybersecurity audits that intermediaries are already required to conduct. Further, intermediaries may also be required to report the names of third-party AI service providers to SEBI. The consultation paper specifies that the intention behind these reporting obligations is to allow SEBI to detect market concentration and the emergence of dominant players.
Update policies: Although not expressly required in the consultation paper, intermediaries must prepare to update their information security policies to account for SEBI’s proposed norms on AI. These information security policies include:
SEBI is on its way to becoming the first Indian regulator to prescribe comprehensive regulations on the use of AI. Like SEBI’s cybersecurity norms, SEBI’s AI norms would apply in a graded manner, where the extent of compliance is proportional to the size of the intermediary.
Additionally, while SEBI’s proposal to prescribe principles-based regulations is commendable, it would be helpful if SEBI could prescribe a few implementation guidelines for norms that are slightly loosely worded. For instance, the consultation paper proposes that intermediaries must share with SEBI “accuracy results”, but neither prescribes a format for these results nor explains how to conduct accuracy tests.
This website is owned and operated by Spice Route Legal, and is exclusively meant to be a source of information on the firm, its practice areas, and its members.