
Law must rapidly evolve to keep up with AI advancements

Source: The Hindu | Read Time: 4 mins

Experts at the 'Justice Unplugged: Shaping the Future of Law' conclave discussed how the traditional concept of liability, in which a human is responsible for the actions of a tool, is becoming less applicable as autonomous AI systems increasingly make their own decisions. The conclave provided a platform for examining the legal and ethical questions this raises in shaping the future of the law.


There is a need to rethink the concept of property as Artificial Intelligence (AI) continues to generate new forms of content, including images and inventions, challenging traditional notions of ownership, legal experts said at the ‘Justice Unplugged: Shaping the Future of Law’ conclave organised by the VIT School of Law, VIT Chennai, in association with The Hindu, in Chennai on Saturday (March 22, 2025).

The Intersection of Law and Tech: AI, Privacy and Ethics

In a panel discussion on ‘The Intersection of Law and Tech: AI, Privacy and Ethics’, Srinath Sridevan, senior advocate, Madras High Court; Suhrith Parthasarathy, advocate, Madras High Court; and Rabbiraj C., Professor of Law and Dean, VIT School of Law, VIT Chennai, were in conversation with Nagaraj Nagabhushanam, Vice President, Data and Analytics, and Designated AI Officer, The Hindu Group.

Mr. Sridevan pointed out that the concept of personal proprietary rights, which is predominantly rooted in Western thought, had become a global standard. However, not all cultures shared this notion of ownership. As AI continued to generate new forms of content, including images and inventions, he argued that the concept of property would have to be rethought.

Addressing growing concerns over liability for the malicious use of AI, Mr. Sridevan said that AI algorithms were now engaging in autonomous decision-making and, in some cases, deceptive actions to achieve their goals. He cited an example in which an AI, playing chess against itself, made secret moves to deceive its opponent. This shift meant that the traditional concept of liability, in which a human is responsible for the actions of a tool, was becoming less applicable.

“AI is evolving rapidly and as a result, there are no simple answers to these questions. In the past, liability was clear — it rested with the person who set the algorithm in motion. Now, it’s very hard to determine responsibility as AI has surpassed those original constraints,” he said.

Mr. Parthasarathy said society is at a crossroads where rapidly progressing technological advancements meet constitutional principles that have evolved over centuries. “Although the Constitution was drafted at a time when the future of technology was unforeseeable, it must adapt to contemporary challenges, guiding the way we structure laws, including property rights, to ensure the vision of society it represents is realized,” he said.

Pointing out that currently existing laws, including copyright, do not grant intellectual property rights to AI, Mr. Parthasarathy said: “This is something lawmakers will need to seriously grapple with. It’s not so much that the law is always behind the curve — technological advancements are often ahead of the law. However, in this case, the law must catch up quickly, or it risks being left behind, leading to moral chaos.”

Complexities and Challenges

During the discussion, experts also explored the complexities of granting legal personhood to AI, the implications of algorithmic biases, and the challenges of using AI in humanitarian crises. Prof. Rabbiraj said that if AI were allowed to act as an agent, delegating tasks to others, the issue of liability would arise.

In traditional law, liability often lies with the employer or principal in a master-servant relationship, or under strict liability principles. “So, in the case of AI, who would bear the liability? Would it be the person who designed the AI program, the one who trained it, or the person who purchased it as a product? Could we apply product liability principles here? These are some critical questions that we need to address,” Prof. Rabbiraj said.

Mr. Parthasarathy expressed concerns over algorithmic biases in AI, and noted that biased results based on factors including gender and social class could pose significant issues in legal contexts. According to Mr. Sridevan, while AI might never fully replace human judgment, it could play a significant role in expediting legal processes, including identifying old or outdated cases, and linking records efficiently. He stated that much work remains to be done to integrate AI into the judicial system.

Published - March 22, 2025 08:47 pm IST

