What do we mean by risk?
Compare the dictionary definition to one more common in enterprise risk management:
Dictionary
“The possibility of loss or injury.”
Enterprise Risk Management
“The possibility that an event or situation will impact the stakeholder’s objectives.”
The Enterprise Risk Management definition provides a helpful refinement: risk is value-neutral, and the potential impact can be either positive or negative.
Risk Assessment for AI
A risk assessment of AI-based legal services can vary widely depending on a number of variables.
An assessment of potential risk of harm to a consumer from a particular product may vary substantially depending on the area of law, status of the consumer, or even current political situation. And that assessment may be quite different if the perspective is that of the developer of the technology, a member of the judiciary, or a law professor.
The considerations detailed here aim to identify potential risks related to AI-driven legal services targeting consumers across a range of stakeholders: consumers, providers, institutions (primarily courts), and the civil justice system.
In addition, for each of these stakeholders, we posit one or more objectives to guide our risk identification and assessment. We also, where possible, note the likelihood of the risk occurring and the potential impact.
Some key caveats:
- The outline represents a “crowdsourced” collection of concepts created with input from the many participants in the RAILS working group. There is a significant need for quality research on these questions to carefully assess the actuality of these risks and how they develop as we move forward. The lists below should be considered examples offered for consideration within the context of the risk framework, and not definitive or complete findings.
- While myriad risks related to AI could be identified, we have intentionally limited this analysis to some of the risks resulting from either an interaction with, or an inability to interact with, a provider of GenAI-driven legal services to consumers.
- AI continues to change rapidly; we are intentionally not evaluating some use cases for AI that are not yet broadly available to consumers (to our knowledge), such as an AI agent acting on behalf of a consumer.
1. Consumers
Definition
For the purposes of this project, consumers will be defined as individuals and small businesses.
Note: “Consumer” might also be thought of as customer, patron, client, court user, court services user, visitor to the courts, unrepresented, self-represented litigant (SRL), pro se litigant, pro per litigant, litigant, party, general public.
Background
This portion of the American market is currently underserved, and usually unserved, by lawyers (the traditional authorized providers of legal services). Because of this inability to access or afford lawyer-provided services, and the lack of other authorized providers, this sector has traditionally tried a “do-it-yourself” approach or sought help from friends, family, and other similar resources.
Legal technology platforms such as LegalZoom and Rocket Lawyer, which sell software services that assist with form completion and have operated (until recently) entirely outside of legal services regulation, have had remarkable success. The two companies, and other smaller ones with similar models, serve millions of customers each year. The overall B2C legal services market (not specific to AI products) was estimated at over $175 billion in 2023. Given this situation, it is reasonable to assume that consumers will seek legal assistance through AI platforms, and likely in large numbers.
Key Considerations
Some key considerations arise when considering the risks, both positive and negative, to consumers using AI-driven legal services:
1) Consumer behavior does not exist in a “legal vacuum,” nor is it always rational.
A consumer’s decision whether to take an action may be driven by many different factors. Further, a consumer’s perception and framing of their issue may differ from an “objective” framing of the issue. The ability of AI to assist consumers with their legal issues will depend heavily on the individual consumer.
2) Any consumer-AI interaction for legal help exists in the context of the established larger civil justice system and other highly impactful systems (health care, criminal justice, the social services system, etc.).
Major systemic challenges and barriers often have significant impacts on legal outcomes entirely independent of access to legal help. This issue affects interventions beyond those made possible by AI. The best legal aid lawyer, for example, usually cannot keep a tenant in their home if the tenant cannot pay their rent.
3) Any legal form or query which requires personal legal information automatically engenders a level of risk in this environment.
Terms and conditions, as well as data privacy and security, are of paramount importance. These terms should always be clear to the user, and trust in the AI vendor is key. Unless the user is sophisticated enough to understand the terms and conditions and the security and privacy settings in place, and to turn off any settings that would train the model on personal data, there is a potential for loss of privacy, training on personal data, and/or data leakage. Consumers should be given clear information prior to interacting with these models regarding:
- Where and how is the data stored?
- Does the tool comply with industry-standard security protocols such as SOC 2?
- Does the model use the data for training? If so, can the user opt out?
- Does it comply with jurisdiction-specific data protection laws such as the GDPR, CCPA, or HIPAA (if applicable)?
2. Providers
Providers of AI-based, consumer-focused legal services may hold a variety of professional backgrounds, with licensed attorneys being a significant group.
Yet once attorneys become tech founders, they are prohibited from delivering legal services, or even hiring experienced paralegals to support clients, unless they also maintain a law firm business model. More critically, consumer-focused providers risk running afoul of vaguely worded regulations regarding the “unauthorized practice of law” (UPL), which prevent anyone or anything other than licensed lawyers from performing a wide range of activities defined as “the practice of law,” and of the related prohibition against sharing revenue from fees generated by legal service delivery with nonlawyers.
The threat of UPL litigation constrains these companies from scaling effectively, as they must set aside a compliance budget that could otherwise be spent on product development and marketing. Moreover, as they consider expanding to new regions, they commit significant time and financial resources to assessing each state’s likelihood of challenging their company’s business model, a process that can drain funds and force them to close their doors entirely.
The precarious nature of ensuring D2C regulatory compliance also stifles market progress. Entrepreneurial risks include an uphill battle for funding, driven by investor reluctance to fund B2C models and a lack of market awareness of the problem’s vast scope and the opportunity for disruption. Customer acquisition is also a challenge, given that most users are not candidates for long-term subscriptions.
The regulatory regime in each state can be viewed either as a helpful constraint that encourages innovation around UPL and fee-splitting concerns, or as a challenge.
3. Institutions
Courts/Administrative Tribunals
The primary institutional stakeholder on which we focus this exercise is the court system.
The court system is the primary state administrator and adjudicator of claims and disputes brought by parties (e.g., people, businesses, and governmental entities in our broader society), analyzing and applying a body of pre-existing rules, laws, and norms. While every individual system (state by state, country by country) may have its own set of goals, we focus here on the “simple goal” of providing efficient and fair resolution of claims and disputes brought to a court or tribunal.
4. Civil Justice System
There is an extensive literature on the goals of the civil justice system, with many theories positing distinct, and sometimes conflicting, goals. For the purposes of this analysis, we articulate two systemic objectives that, while perhaps less philosophical, are likely to be shared by most stakeholders in the system today.
- Ensuring the efficient and fair resolution of claims and disputes, both those brought to court (addressed above) and those not brought to court but rather addressed through private negotiation/settlement.
- Protecting and increasing the public faith in the justice system and the rule of law. A functioning civil justice system must not only resolve disputes but also inspire trust in its processes and outcomes.
Today, the civil justice system is largely failing to meet either of these two objectives.