“AI Accountability In Legal Practice”

Author(s): S. Sharath Chandra

Paper Details: Volume 3, Issue 6

Citation: IJLSSS 3(6) 39

Page No: 398 – 403

ABSTRACT

As of December 23, 2025, the use of artificial intelligence (AI), especially generative and agentic systems, in legal practice has moved from being an experiment to a key part of the infrastructure. This shift raises important questions about accountability. Lawyers are still personally and professionally responsible for all outputs, even when using AI tools that can produce content that seems plausible but is actually inaccurate, often referred to as “hallucinations.” High-profile sanctions in U.S. courts throughout 2025, along with guidance from the American Bar Association (ABA) Formal Opinion 512 and various state bars, highlight that ethical duties of competence (Rule 1.1), supervision (Rule 5.3), confidentiality (Rule 1.6), and honesty to courts cannot be handed off to algorithms. At the same time, the European Union’s AI Act, with major provisions fully in effect by mid-2025, places risk-based duties on high-risk AI applications in justice and legal services. This act demands transparency, human oversight, and systemic accountability. This article explores the changing accountability landscape in legal practice. It reviews ongoing challenges like verification burdens and risks of overreliance. It also looks at regulatory developments in key areas and suggests practical frameworks for responsible AI use in law firms and the judiciary. The article argues that while AI improves efficiency and access to justice, real accountability needs a cultural shift towards proactive education, strict verification processes, and institutional policies that maintain human judgment at the core of legal professionalism.

INTRODUCTION

THE ACCOUNTABILITY IMPERATIVE IN AN AI-AUGMENTED LEGAL PROFESSION

By the end of 2025, artificial intelligence tools had become integral to legal work worldwide. AI now assists lawyers in drafting contracts, summarizing case law, conducting due diligence ahead of corporate acquisitions, and preparing first drafts of court filings and memoranda.

Adoption has surged: roughly 75 percent of lawyers now use AI in their practice, up from fewer than 20 percent just two years earlier.

Agentic AI systems can now carry out multi-step tasks and make decisions with minimal human direction. These systems function less as simple assistants and more as collaborators in lawyers’ decision-making.

This AI-driven transformation promises greater efficiency, lower costs, and broader access to justice, particularly for those with limited resources. It also introduces serious risks. Unlike earlier tools, AI can fabricate information and present it as fact: it may cite cases that do not exist or offer biased conclusions that sound authoritative. When such false information enters court filings, client advice, or internal firm decisions, the consequences can be severe: professional discipline, reputational damage to the firm, malpractice liability, and erosion of public trust in the legal system. Used carelessly, AI can cause real harm.

Accountability in this context means that lawyers cannot shift responsibility to the technology. They must retain ultimate oversight, ensuring that AI serves as an aid rather than a substitute for professional judgment. This principle echoes longstanding ethical rules but gains new urgency with AI’s scale and opacity. The profession now faces a reckoning: how to harness AI’s benefits while safeguarding the core values of diligence, competence, and integrity.

CHALLENGES IN AI-ASSISTED LEGAL PRACTICE

HALLUCINATIONS AND VERIFICATION BURDENS

Generative models predict statistically probable outputs based on training data, not verified truth. This leads to plausible falsehoods, such as invented precedents or distorted legal interpretations. Throughout 2025, courts worldwide documented hundreds of such incidents. In the U.S., sanctions ranged from modest fines to referrals to bar authorities, with notable cases involving major firms like Morgan & Morgan and Butler Snow LLP facing penalties for AI-generated fictitious citations. Judges emphasized that Rule 11 obligations (or equivalents) require reasonable inquiry into factual and legal accuracy — a duty that AI misuse violates when outputs go unchecked.

OVERRELIANCE AND AUTOMATION BIAS

Lawyers may defer excessively to AI recommendations, especially under time pressure, leading to diminished critical thinking. This “automation bias” creates accountability gaps, as human judgment, essential for nuanced legal reasoning, is sidelined.

CONFIDENTIALITY AND DATA SECURITY RISKS

Inputting sensitive client information into public or inadequately secured AI tools risks unauthorized disclosure. Even enterprise-grade tools require careful evaluation of data retention policies and breach notification mechanisms.

SUPERVISION OF NON-HUMAN ASSISTANTS

Ethical rules treating AI as akin to nonlawyer assistants impose supervisory duties. Firm leaders must establish policies, provide training, and enforce verification protocols.

BIAS AMPLIFICATION AND FAIRNESS CONCERNS

AI trained on historical data can perpetuate systemic biases in areas like sentencing predictions or hiring tools, raising discrimination risks.

REGULATORY AND ETHICAL FRAMEWORKS IN 2025

UNITED STATES: ETHICS GUIDANCE AND JUDICIAL ENFORCEMENT

The American Bar Association’s Formal Opinion 512, released in July 2024, remains the principal guidance in 2025. It applies the ABA Model Rules of Professional Conduct to generative AI and serves as the primary reference point for lawyers deploying these tools. Its key duties include:

Competence (Rule 1.1): Lawyers must understand what AI can and cannot do, and must keep that understanding current through continuing study of developments in the technology.

Confidentiality (Rule 1.6): Lawyers must vet the tools they use to ensure client information remains secure, and must obtain informed consent where client data will be exposed to an AI system. Protecting data security and securing informed consent when required are central to this duty.

Supervision (Rule 5.3): Lawyers must oversee AI as they would nonlawyer assistance, supported by firm-wide policies.

Communication (Rule 1.4): Candor with clients is essential. Lawyers should consider disclosing AI use, since clients have a right to know how their matters are being handled; transparency is the guiding principle.

Fees (Rule 1.5): Lawyers should not bill clients for time that AI saves, and fees must remain reasonable and fair when AI is involved in the work.

State bars have built upon this: California, Florida, New York, and Texas issued detailed opinions emphasizing verification and prohibiting blind reliance. Judicial responses have been stricter, with 2025 seeing a spike in sanctions for hallucinations — fines, mandatory training, and in severe cases, referral to disciplinary bodies. The ABA Task Force on AI reported in late 2025 that AI has shifted from experiment to infrastructure, urging governance, training, and risk management.

EUROPEAN UNION: THE AI ACT’S RISK-BASED ACCOUNTABILITY

The European Union Artificial Intelligence Act, phased in over the course of 2025, classifies AI systems into categories according to the level of risk they pose.

Prohibited Practices (effective February 2025): Bans on manipulative or social-scoring AI.

High-Risk Systems (obligations phased in): AI used in the administration of justice is classified as high-risk, triggering requirements for risk management, data governance, transparency, human oversight, and record-keeping that allows system activity to be audited after the fact.

General-Purpose AI (from August 2025): Providers must be transparent about how their models operate, respect copyright, and ensure their systems are safe to use. For high-risk legal applications such as case-outcome prediction, deployers need traceability mechanisms, including records of each step taken, and must assess impacts on fundamental rights. The Act operates alongside the GDPR, requiring that AI decision-making be explainable and non-discriminatory. National authorities and the European AI Office will enforce these obligations, with effects reaching providers worldwide that serve the EU market. In contrast to the

U.S. ethics-focused approach, the EU imposes direct statutory obligations on deployers, including law firms using high-risk tools.

GLOBAL PERSPECTIVES

Other jurisdictions blend these models: the UK emphasizes AI literacy for barristers, while international bar associations advocate best practices for transparency and oversight.

PRACTICAL STRATEGIES FOR ENSURING ACCOUNTABILITY

To navigate these demands, law firms and practitioners should adopt multifaceted approaches:

  1. AI Governance Structures

Establish internal AI boards or committees to evaluate tools, set policies, and handle escalations. Regular audits and provenance tracking enhance traceability.

  2. Training and Literacy Programs

Mandate CLE on AI risks, verification techniques, and ethical use. Focus on recognizing hallucinations and bias.

  3. Verification Protocols

Treat AI outputs as initial drafts: cross-check against primary sources, maintain audit logs of human review, and use specialized legal AI with built-in citation validation.

  4. Client Communication and Consent

Disclose AI use in engagement letters where material, especially for high-stakes matters.

  5. Tool Selection and Security

Prioritize enterprise-grade, legal-specific AI with strong data protections and no-training-on-client-data policies.

  6. Documentation and Audit Trails

Log AI prompts, outputs, and modifications to demonstrate diligence in disputes.

CONCLUSION: PRESERVING HUMAN ACCOUNTABILITY IN AN AI-ENABLED FUTURE

As of December 23, 2025, artificial intelligence is embedded in how lawyers work, and it has the power to transform the profession. With AI, lawyers must exercise greater care: professional rules govern responsibility when AI makes mistakes, and the EU AI Act reinforces the same point, that lawyers remain accountable for their own work. AI cannot take the place of a lawyer’s judgment; lawyers must apply their own reasoning and make sound decisions whenever they use it.

The path forward requires a deliberate cultural evolution — from viewing AI as a shortcut to treating it as a supervised tool demanding rigorous oversight. By investing in literacy, governance, and verification, the legal profession can harness AI to enhance justice while upholding its ethical core. Failure to do so risks not only individual careers but the integrity of the rule of law itself. In balancing innovation with accountability, lawyers reaffirm their role as guardians of reasoned, humane decision- making in an increasingly automated world.
