
The Liability of AI Intermediaries under the Information Technology Act – All you need to know.


Introduction

The Information Technology Act, 2000, defines “intermediary” broadly under Section 2(1)(w) as “any person who on behalf of another person receives, stores or transmits that record or provides any service with respect to that record”. Intermediary liability refers to the legal responsibility of such entities for the third-party content they transmit, store, or host on their platforms. As Artificial Intelligence-powered platforms become active participants in the information ecosystem, it is important to examine the intersection of Artificial Intelligence and intermediary status, the scope of the current legislation governing it, and the extent of the liability such platforms can attract. With advancements in AI technology, an AI system can now gather, store, curate, amplify, or even generate content, which raises important questions about who bears liability for unlawful content disseminated by AI.

Framework for Analysis

Artificial Intelligence mimics human intelligence and can perform tasks that typically require human intellect, such as learning, problem-solving, and decision-making. It does so by analysing and identifying patterns within the vast datasets on which it has been trained. Although India does not yet have a dedicated legal framework to govern this rapidly advancing sector, statutes such as the Information Technology Act, 2000, the Digital Personal Data Protection Act, 2023, and the Information Technology Rules, 2021 remain essential to monitoring AI activities.

This paper relies on a doctrinal method of research and examines the surrounding legislation to explore the liability of AI intermediaries under the Information Technology Act through the following research questions –

  1. Is an AI-powered platform an “intermediary” under the Information Technology Act?
  2. Who is to be held liable for unlawful content put out by an AI model?
  3. How does the “due diligence” requirement under Section 79 of the IT Act apply to an AI intermediary?

Critical Analysis

With the growing integration of AI into digital platforms, many questions arise about its classification under existing legal frameworks. This classification rests on a functional analysis of the role of AI in the information cycle. While an AI-powered platform may qualify as an intermediary, it falls in a legal grey area because intermediary laws, such as the Information Technology Act, were not framed with AI in mind. This leads to arguments on both sides: an AI platform can be seen either as an active creator of information or as a passive intermediary that merely facilitates the creator.

Several arguments can be raised to support treating AI as an active creator rather than an intermediary. The first objection arises from the very language of the statute: the Information Technology Act, 2000, in defining “intermediary” under Section 2(1)(w), speaks of a “person” acting “on behalf of another person”; an AI system therefore cannot per se fall within the definition of an intermediary. The entity that deploys the AI, however, such as a platform, developer, or user, would fall within the ambit of an intermediary if its service involves receiving, storing, or transmitting electronic records generated or processed by the AI. Secondly, because traditional intermediaries act as passive conduits for user-generated content, an AI platform’s active role in generating, modifying, or curating content creates hesitation in classifying it as an intermediary. The principle applied in the landmark case of Christian Louboutin v Nakul Bajaj, where the Delhi High Court held that entities actively involved in inventory, management, or quality control cannot claim to be passive intermediaries, can also be applied to AI platforms.

On the other hand, the argument for treating AI as an intermediary rests on the idea that the focus must lie on the AI’s function as a facilitator rather than a creator. On this view, the user creates the original electronic record while the AI merely serves as a tool that processes, organises, and formats the input into an output and does not act independently. The user would still qualify as the primary author of the information, and the AI platform’s role is limited to acting on the user’s behalf to receive and transmit this user-initiated information. Further, on a literal interpretation of the statutory definition, an AI platform fits the role of a traditional intermediary: it does not act for itself but on behalf of another person, and it is involved in receiving, storing, and transmitting information.

The fundamental purpose of AI is to provide a service, namely the transformation of an input into an output, which is comparable to that of a neutral service provider such as a cloud storage system. It is also important to note that the basis of intermediary liability rests on the concept of actual knowledge. An AI has no intent (mens rea) or mind of its own and cannot possess actual knowledge in the legal sense, because its operations are the result of pre-programmed algorithms trained on vast datasets, unlike a human mind. Moreover, under the purposive rule of interpretation, the purpose of intermediary liability is to ensure accountability for unlawful content. If an AI platform is not considered an intermediary, it could become a safe haven for the proliferation of harmful content such as misinformation and deepfakes, because due diligence obligations would not apply to it.

Basic principles of legal personality also support treating the AI platform as an intermediary: the AI itself cannot bear personal liability, since it is, in essence, an algorithm without consciousness, and liability must therefore fall on a human entity. Liability for illegal content produced by an AI is a complicated matter that may fall on the developer who created and trained the system, the user who inputs the data, or the AI platform provider. If a user purposefully uses AI to create illegal content, such as deepfakes, infringing copyrighted material, or hateful or defamatory speech, that user could be held accountable.

However, it is also essential to hold the developer or the owner of the AI platform liable. As held in Google India Private Limited v Visaka Industries, an intermediary platform cannot claim absolute immunity if it fails to remove objectionable material; and since the developer is responsible for designing, building, and training the AI algorithm, it is also the developer’s responsibility to add adequate safety filters. If an AI were to generate unlawful content owing to flaws in its programming, the developer should be held liable for negligence. Ideally, the AI developer and the user should be seen as “joint tortfeasors”, since both hold a measure of liability: the user is accountable for the input and intent, and the developer is responsible for enabling the generation of harmful content and for negligence. In the EU, the approach to AI platforms is more structured and places liability on the developers and operators of AI systems. It follows the AI Act, the world’s first comprehensive legal framework for AI, which focuses on risk management and seeks to prevent unlawful content generation before it happens.

Section 79 of the Information Technology Act, 2000, provides a “safe harbour” to intermediaries, protecting them from liability for third-party content. The emergence of AI-powered platforms poses a major challenge to this framework of intermediary responsibility. Further, the IT Rules, 2021, establish a strict framework for “due diligence” under Section 79, requiring intermediaries to appoint compliance officers and adhere to strict timelines for content removal. This creates a “comply or lose safe harbour” regime, making liability dependent on procedural adherence rather than merely passive conduct. The traditional “notice and takedown” approach, as discussed in Shreya Singhal v Union of India, which relies on court orders for the removal of illegal content, is not suited to algorithmic harms. This precedent, designed for human-generated content, cannot keep pace with the scale and speed of AI-generated content. An AI system’s active participation could also undermine a platform’s claim to safe harbour protection under Section 79(2)(b), because it could be interpreted as “modifying” the information. The DPDP Act, 2023, while focused largely on data fiduciary obligations, introduces accountability that requires AI platforms to look past the role of a passive intermediary and take responsibility for how their algorithms process personal data, regardless of the safe harbour under the IT Act.

A new legal framework is therefore needed to redefine “due diligence” to include algorithmic accountability and transparency in order to guarantee that the law appropriately addresses the unique liability issues brought up by AI without impeding technological advancement.

Conclusion

AI-powered platforms are not specifically accounted for in the present intermediary liability structure, as set out in Section 79 of the Information Technology Act, 2000. Although AI platforms may act as intermediaries, the law’s assumption of a passive service provider is challenged by their active curation and content generation functions. The rapid, algorithmic spread of damaging content makes the “notice and takedown” strategy ineffective, and an intermediary’s active engagement might undermine its safe harbour protection. A modern legal framework is therefore essential. By demanding proactive risk assessments and transparency, this framework should redefine “due diligence” to include algorithmic accountability. To guarantee a fair and efficient legal response to the challenges posed by AI, responsibility for illegal content must be apportioned among the user, the developer, and the platform provider according to their respective roles.

FAQs

  1. What is “liability for AI-enabled platforms”?

It is the legal liability that may attach to the developers, owners, or users of artificial intelligence systems when their platforms cause harm, such as the spread of false information, violation of privacy, or infringement of intellectual property rights.

  2. How is the Information Technology Act, 2000 applicable to AI?

The IT Act was enacted long before the widespread use of AI. However, it still regulates online activities and digital intermediaries, and therefore applies to platforms that use AI to transmit or store information, particularly within the intermediary liability framework.

  3. What is an “intermediary” as defined under the IT Act?

As per Section 2(1)(w) of the IT Act, an intermediary is any person who receives, stores, or transmits electronic records on behalf of another person. This ranges from social networking sites and online shops to artificial intelligence applications that handle user information.

  4. Could an AI system itself be seen as an intermediary?

Not by itself. An AI system is a tool, not a legal person. The company or person running the platform can, however, be treated as an intermediary if the platform hosts or displays user-created content through the use of AI.

  5. What is “safe harbour” under Section 79 of the IT Act?

Safe harbour means that intermediaries are not liable for third-party content if they follow due diligence and act quickly to remove unlawful material once notified. It is a legal shield, provided the intermediary acts responsibly.

  6. Are AI platforms entitled to safe harbour protection?

It depends. If the platform does nothing but host user content without any active involvement, it can be eligible. However, where the AI creates, modifies, or facilitates illegal content, the operator may lose the safe harbour because of its increased control over the content.

  7. Who is responsible when an AI creates harmful or illegal content?

Responsibility can be attributed to:

  • Users, who misuse AI tools to create unlawful content.
  • Developers, who build flawed or malicious systems.
  • Platform operators, who fail to monitor or respond to violations.

The courts can apply the doctrine of joint liability depending on the facts of the case.

  8. What is “due diligence” in the case of AI intermediaries?

Due diligence involves taking prudent measures such as monitoring content, providing reporting mechanisms, and maintaining transparency. For artificial intelligence, it can extend to auditing algorithms, avoiding bias, and making sure that outputs are lawful.

  9. Are there currently any India-specific laws pertaining to AI?

No, India does not yet have a specific AI law. The IT Act and the IT Rules, 2021, are still used to address AI issues, but they cannot be said to be fully capable of confronting AI’s autonomous and dynamic nature.

  10. Why is legal reform required?

AI systems challenge traditional legal categories like “intermediary” and “publisher.” As they become more autonomous, laws must evolve to clearly define responsibility, ensure algorithmic transparency, and protect users from harm.

About Author

Anvi, a law student at Symbiosis Law School, Pune, is an emerging legal researcher and writer with a keen interest in evolving digital legal landscapes. Passionate about Intellectual Property Rights, Artificial Intelligence Law, and Information Technology Laws, Anvi actively engages with contemporary legal developments, offering insightful perspectives through her writing.

