
Generative AI Is Here and Ready to Disrupt

In a New York Times article in July 2020, Elon Musk said, “[W]e’re headed toward a situation where A.I. is vastly smarter than humans and I think that time frame is less than five years from now.  But that doesn’t mean that everything goes to hell in five years. It just means that things get unstable or weird.” 

Cue our “weird” world. Before November 2022, “generative AI” was not a household term. By early December 2022, ChatGPT had 1 million users; by the next month, it had more than 100 million, a faster adoption rate than any consumer application in history. And the technology is becoming more sophisticated at a remarkable pace. For example, OpenAI’s GPT-4 model recently passed the Uniform Bar Exam with a score of 297, placing it in the 90th percentile and above the passing threshold in every jurisdiction where the UBE is offered. Generative AI has likewise demonstrated that it can pass licensing and other examinations in a wide range of fields.

As we head down the path Musk envisioned just a few years ago, it’s important to learn how generative AI tools can potentially help—rather than hinder—the practice of law, though risks remain.

Before we do that, here’s a quick primer on key terms.

Key Terms

  • Generative AI – Artificial intelligence, or AI, is “the general ability of computers to emulate human thought and perform tasks in real-world environments.” Generative AI is a subset of AI that creates new content in response to prompts.
  • Machine learning – the use of technologies and algorithms “to automatically learn insights and recognize patterns from data, applying that learning to make increasingly better decisions.”
  • Natural language generation – enabling computers to “generate human-like language using machine-learning techniques.”
  • Parameters – the values a machine learning algorithm “can change independently as it learns.”
  • Language model – a probability-based algorithm that analyzes words and predicts the next contextually coherent word (illustrated in the short sketch following this list).
  • Hallucination – when a large language model generates false information that deviates from external facts or contextual logic.
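To make the “language model” and “hallucination” definitions concrete, here is a minimal, hypothetical sketch in Python. The tiny probability table is invented for illustration and stands in for the billions of learned parameters in a real model; every word and probability in it is an assumption, not actual model data.

```python
import random

# Hypothetical toy "language model": for each word, a hand-written table of
# possible next words and their probabilities. A real model learns billions
# of parameters from training text instead of using a table like this.
NEXT_WORD_PROBS = {
    "the": {"court": 0.5, "plaintiff": 0.3, "statute": 0.2},
    "court": {"held": 0.6, "ruled": 0.3, "denied": 0.1},
    "held": {"that": 0.9, "a": 0.1},
}

def predict_next(word: str) -> str:
    """Sample the next word in proportion to its probability."""
    candidates = NEXT_WORD_PROBS.get(word, {"[end]": 1.0})
    words = list(candidates)
    weights = list(candidates.values())
    return random.choices(words, weights=weights, k=1)[0]

# Generate text one word at a time, always choosing a statistically
# plausible continuation. Nothing here checks whether the result is true.
sentence = ["the"]
while sentence[-1] in NEXT_WORD_PROBS:
    sentence.append(predict_next(sentence[-1]))
print(" ".join(sentence))  # e.g., "the court held that"
```

The sketch emits whatever is statistically likely to come next; no step verifies that the resulting sentence states a true fact. Scaled up, that same property is why a chatbot can produce a fluent, confident citation to a case that does not exist.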

Bottom Line

Lawyers and clients alike should welcome this disruptive and widely adopted technology. The legal profession, while historically slow to embrace technological advances, should not be caught flat-footed by generative AI. Used cannily and responsibly, generative AI can expand lawyers’ capabilities, decrease client expenses, and increase access to justice. As with all other legal tools, however, lawyers must understand generative AI before using it and must always comply with their duties of candor and professional conduct—even if the information was created through generative AI. See, e.g., Model Rules of Prof’l Conduct R. 1.1 (lawyers have a duty of competence, which includes keeping abreast of developing technologies).

As discussed below, most lawyers agree with this position conceptually, but few actually use generative AI—yet. Lawyers recognize that generative AI can be harnessed to advance their clients’ positions but are unsure how to do so while adhering to their ethical obligations and first principles. We thus sketch a roadmap toward careful and intentional adoption.

    I. The Worry

    ChatGPT brought an exponential leap in AI technology to a mainstream audience as “the largest language model in existence” (for now). Its underlying model was trained through self-supervised learning and contains 175 billion parameters. It can therefore interface almost seamlessly with a human to converse, digest language, follow prompts, and create content, among other things. The key word: almost.

      A. Hallucinations

    One aspect of AI that frightens legal practitioners is the technology’s current propensity to “hallucinate”: as relevant here, to assert a fact that is false. In other words, AI, like humans, sometimes makes mistakes. In May 2023, a plaintiff’s attorney in New York made headlines, including in the New York Times, when he used ChatGPT to find supplemental legal authority for a federal court brief. The chatbot supplied citations to six cases that did not exist, and the lawyer included them in his brief. The attorney asserts that he did not realize ChatGPT could give him nonexistent cases. We can give him the benefit of the doubt: he may have been accustomed to legal databases like Westlaw or Lexis, which are limited to real cases, and may truly not have fathomed that a platform might generate false information.

    At the end of the day, though, it was not the New York attorney’s intent that grabbed headlines. He became a self-proclaimed “poster child[] for the perils of dabbling with new technology.”

    Attorney reliance on AI, and exposure to its pitfalls such as hallucinations, is not limited to the Southern District of New York. A defense lawyer in Colorado Springs recently found himself in similar circumstances after relying on ChatGPT to conduct legal research; he, too, cited nonexistent cases. The newly licensed lawyer explained, “I felt my lack of experience in legal research and writing, and consequently, my efficiency in this regard could be exponentially augmented to the benefit of my clients by expediting the time-intensive research portion of drafting.”

      B. Privacy & Privilege

    In addition to concerns about false information, others worry that using generative AI in a legal practice might violate IP laws, perpetuate biased data, or create new avenues for cyberattacks or fraud. And looming large above them all, lawyers risk breaching the attorney-client privilege, the attorney work-product doctrine, or confidentiality agreements with their clients through the prompts they submit to generative AI models. See Model Rules of Prof’l Conduct R. 1.6. For a publicly available chatbot like ChatGPT, writing a prompt detailed enough to generate useful content could require divulging privileged or confidential facts. And there is no guarantee that such facts will not be divulged to others; indeed, ChatGPT is designed to learn from information submitted through prompts to improve its responses generally.

      C. Copyright

    Lawyers must also be cognizant of copyright issues when utilizing generative AI, which raises new questions about how principles of copyright law—such as fair use, infringement, and authorship—apply to AI-generated content. Lack of attribution and compensation for the use of copyrighted works is quickly becoming a peril for those who use generative AI, and litigation in this space is just getting started. See, e.g., Andersen v. Stability AI Ltd., No. 3:23-cv-00201 (N.D. Cal. filed Jan. 13, 2023); Doe 1 v. GitHub, Inc., No. 22-cv-06823 (N.D. Cal. filed Nov. 3, 2022); Getty Images (US), Inc. v. Stability AI, Inc., No. 1:23-cv-00135 (D. Del. filed Feb. 3, 2023). For practitioners, it is critical to stay aware both of the information you put into generative AI and of the content it produces in response.

      D. Rules of Professional Responsibility

    Rule 11 of the Federal Rules of Civil Procedure prohibits frivolous and unwarranted claims, defenses, and legal contentions, and state courts coast to coast have equivalent rules. See, e.g., Cal. Civ. Proc. Code § 128.7; Ill. Sup. Ct. R. 137; Mass. R. Civ. P. 7, 11; 22 NYCRR § 130-1.1 et seq. Rule 11(b), in relevant part, provides that a lawyer who submits a pleading, motion, or other paper to the court “certifies that to the best of the person’s knowledge, information, and belief, formed after an inquiry reasonable under the circumstances,” the claims, defenses, and other legal contentions are warranted by existing law and the factual contentions have evidentiary support. As officers of the court, lawyers have a duty to advance their clients’ positions while maintaining a duty of candor to the court. Model Rules of Prof’l Conduct R. 1.2, 3.3.

      E. Lack of Regulation

    Emerging court orders and regulations create further uncertainty about the future of generative AI in the practice of law. Some judges now require litigants to certify whether they have used generative AI in their filings. Large technology companies are also working with governments to shape the field’s future: on July 21, 2023, seven companies in the generative AI space announced their commitment to new standards for the technology. It is nearly certain that governments will adopt regulations governing generative AI, and it is incumbent on lawyers to stay abreast of ongoing developments in this space. Indeed, as discussed above, the ethical rules require it.

    All of these concerns have driven some lawyers to swear off generative AI altogether. In a survey published in April 2023, 82% of the 440 lawyers interviewed said generative AI could be used in a law practice, but only 51% said it should be.

    But how should attorneys use generative AI? As with most questions posed to lawyers, the answer is “it depends.”

    II. Why Go There

    Generative AI can vastly speed up and improve legal research, document drafting, and even legal analysis, among other things. This, in turn, can improve lawyer well-being. Lawyers should consider obtaining client approval for certain uses of generative AI before deploying it in practice; clients deserve to make educated decisions about the legal services their lawyers perform. In many instances, generative AI can allow a lawyer to provide legal services at a lower cost, which could allow a client to undertake an engagement it otherwise would not have. We anticipate that generative AI may make fixed-fee agreements more appealing to both attorneys and clients. It is imperative, however, that all parties understand and agree on how generative AI will be used in each matter.

    Some generative AI vendors specific to the legal space, such as Harvey.AI, Logikcull, LawGeex, and DISCO, boast that they can reduce time spent on discovery by as much as 90%. Harvey.AI in particular claims that it “assists with contract analysis, due diligence, litigation, and regulatory compliance and can help generate insights, recommendations, and predictions based on data. As a result, lawyers can deliver faster and more cost-effective solutions for client issues.”

    Harvey.AI also claims it can help avoid privilege breaches by customizing generative AI models for individual law firms. According to public descriptions of the model, Harvey.AI enables lawyers to store data on their own servers, with even the information revealed in prompts shielded from third-party consumption. Harvey also claims to apply its machine learning to a firm’s own work product and templates, so the model becomes “smarter” about that firm’s particular practice, tone, and content.

    Some “Big Law” firms are already on board. For example, Allen & Overy ran a beta trial of Harvey starting in November 2022. By February 2023, 3,500 of Allen & Overy’s lawyers had used Harvey to answer 40,000 prompts. The head of Allen & Overy’s Markets Innovation Group said the technology “deliver[ed] unprecedented efficiency and intelligence.” These are just a smattering of examples; the list of AI applications for law grows every day.

    III. The Way Forward

    An opportunity exists to incorporate generative AI technologies into the practice of law, so long as they are adopted thoughtfully, carefully, and in compliance with applicable rules and ethical guidelines. The technologies, however, must be understood for what they are. In a best-case scenario, lawyers can leverage this new technology to perform some of the “grunt work” of law practice and use the newfound time to stay immersed in niche, complex areas of law. Generative AI (at least for now) is best used to distill datasets and records into drafts that a lawyer can cite-check and edit. And lawyers will continue to add value through their knowledge of “soft” variables; experienced litigators, for example, know how to read judges—a “human” skill that AI cannot yet duplicate.

    Clients hire lawyers (not generative AI) for their expertise. The lawyer (not generative AI) must sign and file documents. The lawyer (not generative AI) must understand the subject matter, case law, and local rules. And the lawyer (not generative AI) must understand the other soft variables, such as the client’s goals and business, the venue, the jury pool, and the judge. The immediate benefits of generative AI are already becoming clear; so too are the dangers it poses to unwary lawyers who seek to marshal its benefits while remaining ignorant of its limitations. Though the technology is in its infancy, it is imperative that lawyers understand it and use it responsibly.

    The founder of OpenAI, which operates ChatGPT, said it best: “ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness. It’s a mistake to be relying on [generative AI] for anything important right now. [I]t’s a preview of progress; we have lots of work to do on robustness and truthfulness.” To us, that sounds like a promising young associate.

    That’s the good news. Law firm partners are no strangers to the task of taking something—or someone—new to the practice of law and supervising them, training them, investing in them, and giving them opportunities to flourish. A first-year associate typically isn’t sent in to first-chair a major trial; they play a supporting role and learn from the best.  If they’re smart and capable, they’ll start practicing what they’ve seen modeled for them in lower-risk scenarios. Then, over time, they’ll be given greater opportunities as they prove themselves to be credible, reliable, accurate, etc.

    Should we treat generative AI tools any differently? Thanks to machine learning, the system will get better with time as it absorbs more data and learns from mistakes. But for now, ethical and sound use of the technology means close supervision, verification, and training by a human.

    Going back to the April 2023 survey, only 3% of respondents were using generative AI in their legal practices. That may be because 80% of the partners and managing partners interviewed were concerned about the risks. But a good lawyer’s job is to help clients navigate risk and tilt the probabilities in their favor. Lawyers should apply that same careful framework to their own businesses and treat generative AI for what it is—in beta mode, just like that promising first-year associate.
