
AI Liability in India: Legal Framework, Governance Challenges, and Future Accountability


The background of this research lies in rapidly changing technology, which has evolved into increasingly sophisticated intelligent systems capable of learning, adaptation and influencing human decisions. Artificial Intelligence has gradually become part of law, governance, commerce, public administration and the discourse on rights. As AI systems increasingly affect privacy, safety, dignity, economic opportunity and public accountability, the legal system must re-evaluate conventional rules of responsibility. The problem has acquired particular urgency in India, where the use of AI is growing rapidly while a liability framework is yet to be fully designed. The background of this study must therefore be understood historically, legally and institutionally.1

Evolution from Traditional Machines to Intelligent Systems

The history of liability in the era of intelligent machines begins with the earlier legal understanding of machines as passive tools at the disposal of humans. In the industrial and mechanical age, machines were seen as devices that simply executed instructions issued by their operators. When damage occurred, liability was normally placed on the manufacturer, owner, employer or immediate human user through the established doctrines of negligence, breach of duty, defective products or vicarious liability. With the advent of computers and software this framework was extended, and the law continued to treat digital systems as programmed tools whose behaviour was traceable to identifiable human decisions. The evolution of Artificial Intelligence has altered this picture significantly. AI systems engage in pattern recognition, prediction, autonomous processing, content generation and decision support, and they do not always deliver fully predictable results. Unlike traditional machines, such systems operate on dynamic data inputs, machine learning models, algorithmic adjustments and varied deployment contexts, which makes their behaviour more complex and less explicable.

The historical shift from inert mechanical devices to intelligent computing systems has thus created a legal problem: the law can no longer rely on direct human action as the sole basis of accountability. This technological change forms the primary foundation of the present research.2

Emergence of Artificial Intelligence as a Governance Concern

Artificial Intelligence did not begin as a legal issue; it emerged from scientific and technological ambitions centred on automation, efficiency and innovation. Over time, however, AI moved out of laboratories and into ordinary social and institutional life. It began to shape search engines, social media moderation, consumer profiling, recommendation systems, fraud detection, facial recognition, hiring, digital payments, health technologies, smart mobility and public service delivery. This expansion turned AI from a technological development into a governance issue. Once rights, opportunities, reputations and access to services came to be influenced by AI, questions of fairness, transparency, explainability, discrimination, safety and responsibility became inevitable. As AI penetrated critical social activities, legal systems around the world recognised that innovation could not be placed above responsibility. The question was no longer merely whether AI is effective, but whether it operates legally, safely and fairly. Governance was also required because the harmful effects of AI systems extend beyond mechanical malfunction to biased outputs, opaque decision-making, invasion of privacy, misinformation and systematic marginalisation. The emergence of AI as a governance concern is therefore a significant point in the historical evolution of this subject. It is this shift from the discourse of innovation to the discourse of accountability that makes liability the focus of research on intelligent machines.3

Historical Development of Digital Regulation in India

In India, AI governance is closely tied to the broader development of digital regulation. The first significant step in this direction was the enactment of the Information Technology Act, 2000, which gave legal status to electronic records and transactions and laid the groundwork for cyber regulation. The legislation was not drafted with Artificial Intelligence in mind, but it established the initial framework for regulating digital conduct, the role of intermediaries, electronic evidence and cyber offences. As technology developed, the Indian legal system responded incrementally through subordinate legislation, sectoral regulations, data security expectations and platform requirements. Over time, the rise of social media, digital platforms, online commerce and algorithmic logic pushed the law towards a greater focus on due diligence, intermediary accountability and data regulation. This trajectory was reinforced when privacy became a constitutional issue and, subsequently, when a dedicated data protection regime was adopted. The formation of AI liability in India therefore did not begin with a specific AI statute; it is an outgrowth of the development of digital law in general. This incremental evolution is significant because India's emerging AI governance demands are being built on existing frameworks of information technology law, privacy law, intermediary liability and constitutional principles. The history of Indian digital regulation is thus crucial background to the present research topic.4

Rise of Data Protection and Rights-Based Accountability

A further milestone in the background of this topic is the emergence of data protection and rights-based accountability in India. Contemporary AI relies heavily on data for training, classification, profiling, prediction and automated inference. This means that questions of data collection, processing, storage and use must also be addressed when examining AI liability. For a long time, Indian law lacked a comprehensive system of personal data protection, even as digital information became a primary focus of administration and business. As concerns over privacy, surveillance, profiling and misuse of personal information grew, the law of technological harm also changed. The issue was no longer merely physical injury or economic damage, but invasion of privacy, injury to dignity, unjustified decision-making and exploitation of data. The recognition of privacy as a constitutional right provided a strong normative basis for subsequent regulatory developments, and the movement towards data protection legislation reinforced the link between AI deployment and responsibility. This historical trend is highly relevant because AI systems routinely translate personal and behavioural data into decisions that affect individuals in real life. Liability in the era of intelligent machines therefore cannot be understood solely through conventional concepts of responsibility or negligence; it must also be examined through data responsibility, informational fairness and rights-based legal protection.5

Need for Reframing Traditional Liability Principles

Another reason the present study is warranted is the increasing inadequacy of classical liability doctrines when applied to intelligent systems. Classical legal thought evolved around clear distinctions between actor and instrument, intention and result, manufacturer and consumer, wrongdoer and victim. Artificial intelligence complicates these distinctions because the chain of events leading to an apparently harmful result may involve data providers, software developers, model trainers, deployers, institutional users, intermediaries and even end users of the system. This creates confusion over causation, fault, foreseeability, the standard of care and legal attribution. If an AI tool produces a discriminatory hiring recommendation, it may be difficult to determine whether the cause lies in the training data, the model architecture, the deployment environment or the employer's reliance on the tool. Similarly, if generative AI produces deceptive or defamatory content, responsibility may be spread among the developer, the platform, the prompt user and the distributor. Such situations show that traditional liability rules, while still helpful in the new technological reality, require reinterpretation. The legal system should consequently move from a narrow human-action model to a layered model of accountability founded on spheres of control, design responsibility, due diligence, risk allocation and context of deployment. This doctrinal problem is among the most significant historical and theoretical reasons for the present research.6

India’s Emerging AI Governance and the Centrality of Liability

The final and most direct background to this research is the contemporary emergence of the AI governance debate in India. As AI becomes central to economic growth, public sector modernisation, digital services and innovation policy, India is beginning to recognise the need for responsible and trustworthy AI. Policy discourse, government strategies and ethical frameworks have started to move towards a more structured system of governance. Governance principles, however, are insufficient unless they are connected to legal consequences. A system of AI governance is meaningful only if it explains why a person must comply, who will be responsible in case of failure, and what remedies are available when harm occurs. This is why the discussion of liability lies at the centre of the issue. India stands at a significant legal juncture: AI adoption is growing rapidly while doctrines of responsibility remain unclear. The challenge is to avoid discouraging innovation without abandoning legal accountability. Accordingly, the context of this study lies in the need to connect India's emerging AI governance demands with enforceable principles of liability in Indian law. The study proceeds from the recognition that intelligent machines are no longer merely technological products; they are legally significant actors within a complex governance system that remains under human control.7

See Full Research by Ms. Jasleen Kaur (6th Sem – Law student at Amity University) Below:

  1. Abhinav Kumar and Kartik Tyagi, “Legal Framework and the Governance of AI in India” 70 Indian Journal of Public Administration 609 (2024).
  2. Rowena Rodrigues, “Legal and Human Rights Issues of AI: Gaps, Challenges and Vulnerabilities” 4 Journal of Responsible Technology 100005 (2020).
  3. Abhinav Kumar and Kartik Tyagi, “Legal Framework and the Governance of AI in India” 70 Indian Journal of Public Administration 609 (2024).
  4. Subhajit Basu and Richard Jones, “Indian Information and Technology Act 2000: Review of the Regulatory Powers under the Act” 19 International Review of Law, Computers & Technology 209 (2005).
  5. Acharaj Kaur Tuteja and Digvijay Singh, “Data Protection in India: Privacy, Personal Data, and the Saga of a Legislative and Economical Approach” 6 GNLU Journal of Law & Economics 92 (2023).
  6. Miriam Buiten, Alexandre de Streel and Martin Peitz, “The Law and Economics of AI Liability” 48 Computer Law & Security Review 105794 (2023).
  7. Akmal Pervaiz Ghazi, “AI Accountability in India: Need for Dedicated Legal Framework” 5 Indian Journal of Legal Review 1128 (2025).
