Artificial intelligence is emerging as a transformative and, at times, disruptive force in numerous fields. Scientists can use AI to predict earthquakes, and retailers can prevent food waste using algorithms to forecast demand. AI is also making waves in law. Legal professionals at all levels must be familiar with emerging AI technologies to take advantage of the benefits and avoid the pitfalls that could lead to ethical transgressions and even malpractice.
To illustrate, consider the case of two New York personal injury lawyers who used ChatGPT in court last year, leading to sanctions. The problem arose when they submitted a brief that relied on research conducted with ChatGPT, the AI chatbot created by OpenAI, without understanding the tool’s limitations. ChatGPT cited several legal cases that did not exist and even supplied fabricated quotes and citations. The lawyers and their firm were fined $5,000.
AI in the Legal Realm: A Double-Edged Sword
AI has opened new avenues for efficiency, as AI-driven tools excel at automating labor-intensive processes. Within a law practice, staff can use AI to conduct research, draft and review documents, and complete data entry. Using AI to automate routine tasks and generate standardized legal documents leaves more time for the strategic thinking that leads to better outcomes for clients.
Research is one of the most time-consuming processes for law firms. AI can analyze vast volumes of data from diverse sources. With the proper tools, lawyers can identify relevant precedents, statutes, and case law faster than ever.
Of course, as seen in the case of the New York lawyers facing sanctions, AI can also fabricate information. It is up to the lawyers and anyone assisting with research to verify AI-generated information. Ultimately, high-quality legal practice requires direction that only a knowledgeable practitioner can provide.
Lawyers Using ChatGPT Should Reconsider
At a cybersecurity panel at the New York State Bar Association conference in the fall of 2023, AI was a hot topic of discussion. Panelist Emily Lewis, an associate at ArentFox Schiff, concluded that law firms should not use ChatGPT.
As of September 2023, ChatGPT can access the internet for more up-to-date information. However, this does not mean that outputs will always be accurate. ChatGPT is known to “hallucinate,” meaning it generates text by predicting what should come next rather than by retrieving verified facts, so it can produce confident-sounding but false information. All current predictive language models suffer from this problem.
In addition to so-called hallucinations, the use of ChatGPT can potentially breach attorney-client confidentiality and privilege. How? Though users can sometimes opt out, AI providers accumulate input data to continue training their models. Inputting details of a case so AI can write a legal brief, for example, might violate a client’s right to privacy.
At the panel, Lewis suggested that law firms could create in-house AI software to protect clients. Even then, all firm members must learn to use the software and understand its limitations. Software creation and employee training are no easy tasks. However, the importance of accountability and compliance with the rules of professional responsibility when using AI tools cannot be overstated.
The Importance of Human Oversight
AI is not going away. Lawyers who do not embrace this technology may fall behind their peers. The issue is not that AI will replace people. Instead, humans who understand and use AI might replace those who do not.
AI algorithms excel in processing data and identifying patterns but lack the intuitive reasoning and contextual understanding humans have. At the end of the day, personal injury law requires real-world experience to address the intricacies of each case, as well as human empathy to understand the needs, goals, and challenges of clients in their day-to-day lives.
At Chopra & Nocerino, we have over 20 years of combined experience specializing in our practice area. Our real-world experience, familiarity with local courts and judges, and deep expertise are why we have won numerous multi-million-dollar settlements and verdicts for our clients.
While AI might offer valuable assistance in taking a first stab at research and organizing information, it can never replace the in-depth legal analysis our experienced lawyers provide—nor our passionate commitment to justice and zealous advocacy on behalf of our clients.
Legal professionals must thoroughly and critically assess all outputs generated by AI. By leveraging human intelligence along with the practical benefits of AI, we can maximize our capacity to hit home runs for our clients’ cases.
Addressing Ethical and Professional Concerns
Upholding the highest standards of ethical and professional conduct is essential for lawyers, clients, and society alike. Not only is this needed to avoid committing malpractice, but it is also essential to preserving public trust in the legal profession.
We have discussed how more lawyers are turning to AI tools for efficiency, but it is important to note that even judges are using AI to improve administrative workflows. Even the Chief Justice of the U.S. Supreme Court has acknowledged the potential benefits of AI in the courts. Still, recognizing that misuse of AI could undermine the integrity of the entire legal system, he urges caution.
“AI obviously has great potential to dramatically increase access to key information for lawyers and non-lawyers alike. But just as obviously it risks invading privacy interests and dehumanizing the law. . . . As AI evolves, courts will need to consider its proper uses in litigation.”
– Chief Justice John G. Roberts, U.S. Supreme Court, Year-End Report on the Federal Judiciary (2023).
In response to AI’s emerging impact on the practice of law, the American Bar Association has created a Task Force on Law and Artificial Intelligence to examine the technology’s effects on attorneys, courts, and the judicial system. The task force aims to:
- Address AI’s impact on the legal profession and any related ethical issues
- Offer insights on how legal professionals can responsibly use AI
- Identify how to mitigate risks
As personal injury attorneys, we fight for our clients and work with integrity to uphold the core principles of our legal system—fairness, justice, equal protection, and due process. We must protect our legal processes and outcomes from the pitfalls of the haphazard integration of AI. We are tracking ongoing developments and guidance from professional organizations like the ABA and the New York State Bar Association.
A Call for Cautious Integration
Lawyers who see the benefits of AI tools will want to prioritize responsible integration of these technologies. Effective integration of AI into the practice of law requires clear guidelines and training on appropriate use.
Regulatory agencies and professional associations are vital in guiding and supporting lawyers as we integrate AI into legal practice. As these bodies establish ethical frameworks, best practices, and educational resources that promote responsible and ethical use, law firms can begin reaping the benefits of AI tools.
AI can benefit lawyers and clients if used carefully. At Chopra & Nocerino, we recognize that AI is a helpful tool, but also that it cannot replace the role of a seasoned advocate in delivering high-quality legal counsel and representation. AI will never be a substitute for the skills, experience, and judgment of a trained professional.
If you are looking for a personal injury lawyer who can hit a home run in your case, you are in the right place. Schedule your free consultation by calling (855) NYC-HURT or filling out our online contact form.
Further Reading
- American Bar Association, Model Rules of Professional Conduct (see Rule 1.6: Confidentiality of Information).
- New York State Bar Association, New York Rules of Professional Conduct (see Rule 1.6: Confidentiality of Information).
- New York State Unified Court System, Rules of Professional Conduct (see Rule 1.6: Confidentiality of Information).