Artificial intelligence (AI) use cases are expanding at a rapid rate, and the pressure is mounting for businesses to leverage that technology or risk being left behind by their competitors. In addition to open-source applications, businesses are using enterprise-specific tools that enable employees to use generative AI technology at work. This includes licensed versions of the open-source models or business-specific tools developed alongside the applications the business is already using.
Given the rapid adoption of these tools, many businesses may not have considered the impact that their use may have on e-discovery and future litigation. Though predicting the future is a difficult task, it is worth including litigation and e-discovery in any impact assessment of AI tools in your business, along with the steps that should be taken to mitigate the associated risks.
As generative AI has become more prevalent, regulators and lawmakers have made clear that the old laws still apply even to this new technology. So what does this mean for e-discovery? This article explores some of the ways that AI tools may complicate, or raise questions about, the current landscape.
Retention Policies
It is worth thinking through what an appropriate retention policy should be for the inputs and outputs of the AI tools that your business uses. There may be competing considerations here. On the one hand, it may be beneficial to retain less data to limit the amount of “evidence” that exists and to minimize the storage burden on the organization. On the other hand, federal regulators have indicated that companies should be taking reasonable steps to preserve “ephemeral messages” as a matter of course so that they are available in the event of a subpoena or civil investigative demand. Whether or not a “chat” with an AI bot is an “ephemeral message” is an unanswered question. It is difficult to predict exactly what risks might be at play five or 10 years down the line—but having a well-thought-out and documented retention policy in place that includes any AI applications will certainly help if it becomes necessary to defend that policy.
Litigation Hold
No matter what a company’s normal course of business retention policy is, when a lawsuit is filed or a subpoena is received, a litigation hold supersedes that policy. Any data connected with AI applications and tools should be included in the litigation hold. In this regard, it is necessary to understand the tools at your company’s disposal for placing a hold on the data created with the AI applications. Is the e-discovery software your company already has sufficient? Is there a specific license that your company needs to upgrade to in order to ensure a litigation hold can be placed on the data? Ideally, these questions should be considered and thought through before a company is faced with a subpoena or lawsuit.
Affirmative Discovery
Companies should also consider the implications of generative AI tools when conducting affirmative discovery in their cases. This should start at the Rule 26(f) conference or state equivalent—find out what tools your adversary uses and what data sources are available as a result. Of course, if pursuing this avenue, litigants should be prepared to answer the same questions about their own business. Litigants might consider including a specific request for queries made to generative AI tools and the resulting responses and data output. Questions remain, though: Once generative AI data is produced in a litigation—then what? Can it be introduced as evidence? How can it be authenticated? Does all the source material have to be introduced with it to comprise a “complete” document? Can this actually be done? What if the “original” data is not available? These questions have not been answered but should be on the mind of litigators dealing with this data.
Protecting Privilege
A potential pitfall with certain AI tools for businesses is that the output from the tools might be based on communications previously received that contained privileged content, but there is no way to determine that based on the output from the AI tool. For example, an employee might ask a tool to prepare her for an upcoming meeting regarding a potential transaction. The summary generated by the tool might contain privileged information gleaned from the employee’s email inbox or file share repository. Or, a board might use an AI board observer to give input in real time. In both cases, the output might not be labeled “privileged” or attributed to an attorney, but it might contain privileged information. This presents challenges for counsel responsible for collection, review and production of documents. Companies can consider ways to mitigate the risk of privilege waiver, including using sensitivity labels or user access restrictions. At a minimum, counsel need to be aware of the concern while collecting and reviewing documents.
Generative AI tools are gaining widespread adoption, and the use cases are expanding rapidly. Though future litigation that does not relate directly to AI may not be at the forefront of the minds of leaders of businesses using these tools, it is prudent to keep the above areas in mind when adopting generative AI technology. That way, businesses can anticipate and even mitigate risk, rather than be in reaction mode when knee-deep in an investigation or litigation.
Reprinted with permission from The Legal Intelligencer, © ALM Media Properties LLC. All rights reserved.