Over the next few years, we expect most companies to rely on or integrate AI tools into their content development efforts. It is important to design internal systems that enable the use of AI in a manner that mitigates risk. These steps could involve mandatory guidelines on the extent of human intervention, limitations on the use of works created solely by AI, and a set of checks and balances to ward off the threat of an infringement action.
Companies ought to consider implementing policies that require developers and other employees working on content development projects to document their involvement in content created with the assistance of AI tools. Employees should also be encouraged to exercise their own creative judgment rather than relying entirely on AI-generated suggestions while developing the work.
As the protectability of AI-generated works under copyright law is uncertain, companies ought to avoid structuring their business around such products until regulatory clarity is available. Internal policies that categorise AI systems and products based on risk profiles may help business teams make this determination. Relevant considerations include the level of human involvement in the work, whether the work is integrated within a customer-facing offering or used for back-end processes, and the impact on business operations if a court places an injunction on the use of the work.
Screening processes led by legal departments prior to deploying critical AI products may help mitigate the threat of infringement actions. Companies may also prepare checklists for business teams to assess whether thresholds for authorship and creativity are met and whether the product infringes third-party copyright.
While not mandatory, companies may choose to register AI-assisted works with the Registrar of Copyright if they are confident that the content meets existing thresholds for authorship and originality. This would provide a presumption of validity in infringement cases.
Before approaching the Registrar, companies ought to compile relevant evidence regarding the extent of human involvement and originality in creating the work.
Until regulatory guidance is provided by the government, companies should consider using open-source or adequately licensed datasets when training in-house AI models. While Indian courts have not held that training AI models on information from publicly available platforms is unlawful per se, businesses ought to take a conservative approach, to the extent feasible, to avoid potential infringement actions. Companies may also conduct periodic employee training sessions and audits to ensure an ongoing commitment to warding off the threat of infringement claims.
As the copyright regulatory framework is in a state of flux, implementing contingency measures ought to be a priority for companies that intend to build their business around AI. These measures may include: (a) maintaining fallback or alternative versions of the product should the AI-related element fall under judicial scrutiny, (b) preparing internal SOPs to address copyright infringement claims from third parties, and (c) obtaining sufficient liability insurance.
This website is owned and operated by Spice Route Legal and is exclusively meant to be a source of information on the firm, its practice areas, and its members.