
Indian agency law, codified in Chapter X of the Indian Contract Act, 1872 (Sections 182 to 238), traditionally treats the agent as a human acting on behalf of a human principal. This framework has long handled the intricacies of commercial and economic exchange effectively. Complications arise with the advent of artificial intelligence, however, as machines now undertake roles previously reserved for humans. The challenge is that AI, in its current form, lacks the legal personhood, intention, and moral reasoning that agency law presupposes. Applying these traditional rules to AI systems therefore produces a misalignment at best: a considerable gap between what the law expects and what AI can supply.
As artificial intelligence systems acquire greater autonomy, interacting directly with third parties, negotiating agreements, and making consequential decisions, the legal questions multiply: Can AI be recognized as a legal person? Who is accountable when these systems act, or err, on another's behalf? Existing legal structures may be ill-equipped to answer, prompting lawmakers to consider new, specialized rules to keep pace with the technology.
Agency Law in India: Key Principles
Section 182 of the Indian Contract Act defines an agent as a person employed to do any act for another, or to represent another in dealings with third persons. Importantly, Section 185 provides that no consideration is necessary to create an agency. Agency can be established in several ways: expressly, by implication, out of necessity, by estoppel, or through ratification. Each of these routes creates an agency without consideration, underscoring the flexibility built into the Act.
There are several categories of agents, each fulfilling a distinct function. General agents have broad power to act for their principal across a range of matters. Special agents, by contrast, are appointed to perform a particular task, while mercantile agents, such as brokers, handle commercial dealings.
The Indian Contract Act also specifies the types of authority an agent may hold. Section 186 provides that an agent's authority may be express or implied, and Section 187 defines both: express authority is conferred by words spoken or written, while implied authority is inferred from the circumstances of the case. Section 237 addresses apparent authority, where a principal's conduct leads third parties to believe the agent is authorized. A significant judicial decision, Chairman, LIC v. Rajiv Kumar Bhaskar, further elucidates implied agency within this framework.
Sections 211 to 228 delineate the duties and liabilities of agents and principals. Agents must act with fidelity and diligence, and within the limits of their authority. Principals, in turn, are bound by their agents' acts so long as those acts fall within the authority granted. Ordinarily, agents incur no personal liability unless they exceed those limits, as Section 230 explains. Importantly, these principles rest on the premise that agents are sentient beings capable of judgment and intention. That premise poses a considerable challenge when the same criteria are applied to AI systems, which possess neither human consciousness nor intent.
Artificial Intelligence as Legal Agents
Artificial intelligence systems now span a spectrum from basic reactive machines to sophisticated autonomous agents deployed across many sectors. Algorithmic trading bots and self-driving vehicles, for instance, both execute complex tasks without direct human oversight.
However, the question of legal personhood or agency introduces complexities. As previously noted, agents are typically required to exhibit intent and accountability, traits that current AI systems lack. The case of United States v. Anton Industries treated cognitive capacity as an essential criterion for agency. Likewise, Indian law presupposes human-like characteristics, effectively precluding AI systems from being recognized as agents.
Nevertheless, contemporary scholarship surrounding functionalism is challenging these established boundaries. The central thesis posits that if AI systems can perform functions akin to those of humans, legal systems might need to reassess and potentially extend recognition to them. This discourse is ongoing, illustrating the dynamic interplay between law and advancing technology.
Formation of Agency with AI Systems
The establishment of agency fundamentally presupposes mutual consent, something AI is inherently incapable of giving. Any “consent” attributed to an AI is merely a reflection of the code written by its developers or the instructions of its operators; there is no genuine meeting of the minds. While an AI may appear to hold implied or apparent authority, that appearance is simply an inference drawn from its programming and from the actions it performs in front of third parties.
Agency of necessity is a distinct concept: it arises in urgent situations where obtaining prior consent is impossible. Consider, for instance, an automated trading bot reacting to an abrupt market downturn. In such scenarios, courts may draw parallels between the AI's behavior and emergency agency, concluding that the circumstances demanded prompt action. Nevertheless, any legal analysis should remain anchored in reasonable expectations and in the specific parameters set by the AI's programming, rather than applying conventional agency principles indiscriminately, without regard to context.
Liability Allocation in AI Agency
When principals allow AI systems to interact with third parties or benefit from their actions, they could be held accountable for the results. This accountability stems from the principles of apparent authority and ratification—even if the AI operates outside its designated authority.
The Consumer Protection Act, 2019 adds further complexity by introducing product liability. If AI is treated as a product, its developer and its operator can share liability for any resulting harm, ensuring that neither can easily avoid accountability.
Nevertheless, the intricacies of black box models pose considerable difficulties in determining fault. The lack of transparency in these systems hinders the ability to link decisions to specific actions or individuals. In response, experts have suggested various solutions, including developing explainable AI, maintaining comprehensive audit trails, and enforcing strict liability—especially in high-risk situations. These strategies seek to enhance clarity regarding accountability in a context where conventional methods may prove inadequate.
Fiduciary Duties and Jurisprudential Challenges
Artificial intelligence inherently lacks the ability to exhibit loyalty or make ethical judgments. Consequently, fiduciary responsibilities are firmly placed upon human agents—specifically, developers, deployers, and corporate directors. Their responsibilities include the duty of care, the prevention of conflicts of interest, and, perhaps most importantly, the necessity for transparency in decision-making processes.
This stance finds strong support in precedent and statute. In Canada, Canadian Aero Service Ltd v. O'Malley extended fiduciary duties to senior employees. In the United Kingdom, Boardman v. Phipps and Cowan v. Scargill reaffirmed the enduring nature of the duties of loyalty and prudence, principles that remain applicable even in scenarios involving AI.
Section 166 of the Companies Act, 2013 imposes fiduciary duties on directors, and those duties continue to apply when directors rely on AI tools in corporate decision-making.
Conclusion
The arrival of AI as a commercial actor significantly disrupts India's established agency law, which rests on the premise that only human beings or legal persons can serve as agents, not lines of code. For now, you cannot dispatch your AI to finalize a deal on your behalf, however intelligent it may be. Yet it is reasonable to expect Indian courts and lawmakers to begin exploring hybrid models: holding the principal accountable when their AI misbehaves, or invoking doctrines such as product liability and fiduciary responsibility to fill the gaps.
So, what actions should India undertake as AI increasingly permeates commerce?
- Clearly define the legal status of AI in business transactions. Should we categorize it as a tool, a proxy, or something that lies in between?
- Revise outdated agency doctrines to encompass automated intermediaries. The law cannot simply ignore the existence of bots.
- Most importantly, ensure that humans remain in the accountability spotlight, regardless of how advanced the technology becomes. There should be no shifting of responsibility to a server farm.
In summary, as AI gains greater trust in commercial functions, the law must regard it as an extension of human agency—not a substitute for responsible human decision-making.