LawTech: Time for a cybernetic legal ethics?

The following is an updated and somewhat expanded version of a talk I gave at a recent CILEX roundtable on law and technology.

I am sometimes sceptical, but basically a proponent of innovation in legal services. I believe that the application of technology, principles of design, and learning from the behavioural sciences in particular will improve the delivery of solutions to clients with legal problems. Indeed, in the last five years, I have taught a course on the future of legal practice. And many students leave that course excited, sometimes too excited, by the potential for legal innovation.

My job here in this talk is to be more sceptical. I’ve been asked to think about ethical and regulatory perspectives on legal technology. What I offer is short, preliminary, and by no means comprehensive. But here goes…

Let me set the scene by talking about what I’ll call the innovation ethics dilemma, and let me illustrate it with a story about DoNotPay and a somewhat fictionalised account of the birth of my third child. DoNotPay is a chatbot that lets you challenge a parking ticket. Answer a few very basic questions and it produces a letter to send to the relevant authority. I tried it out using an imaginary journey to take my wife to hospital to have our little boy. I wasn’t in a panic and nor, perhaps surprisingly given her predicament, was my wife. Unable to park quickly, I took a chance and used one of the staff spaces. For the purposes of the app, let’s imagine I got a ticket.

The letter it produced was a thing of supreme ugliness. Badly written. I have written about it elsewhere. But I could see, with a little finessing, it would be fine and might well work. However, it did contain one sentence which raised an eyebrow. It said, “the urgency of the situation prevented me from having enough time to ensure that my parking was in complete compliance.”

This was a confabulation. The application had made it up, on the assumption that a medical emergency had prevented me from parking properly. It hadn’t: I was in a rush and I’d taken a chance, but it wasn’t an emergency, and I could have taken more time and parked properly. To my mind, it lied on my behalf.

The psychology of that is interesting, because I didn’t make anything up; it did. I might be inclined, indeed I’m certainly more inclined, to send such a letter than to draft one myself. And someone less familiar with the legal system might think that’s just the kind of thing one needs to write to challenge a parking ticket.

So here is the innovation ethics dilemma. DoNotPay is, to use the jargon, a minimum viable product. It is crude but, for some, effective: Browder’s unverified testing estimates that about 50% of users win their cases. If that is right, it helps many people challenge parking tickets and similar penalties, and does so in a way that is very easy to use and has none of the problems associated with instructing lawyers or cobbling together information from websites. It allows some people to hop and skip effortlessly over traditional access to justice barriers. But:

  • it also encourages the making of meritless claims;
  • if my experience is typical, it encourages the making of misleading or dishonest claims;
  • and it provides a very rough, low-competence version of lawyering, which may jeopardise the interests of clients with good cases if, and it is a big if, they would otherwise go elsewhere.

Are those ethical risks worth the access to justice gains?

In evaluating the trade-offs, it is worth concentrating on a second aspect of the innovation ethics dilemma: who is making those trade-offs, and with what information? If it is the innovators themselves, should we be concerned about their values and how they make the trade-offs? For instance, a series of psychological studies suggests that an innovatory mindset is also associated with a greater risk of misconduct.

Creative types, and those focused on getting the job done rather than doing the job according to the rules, are more prone to lying and cheating than their counterparts. That is before we factor in any effects caused by not knowing legal professional obligations, or the incentives and mindsets that come with particular business models.

And yet, and yet. Access to justice around the developed world is in a very poor state, and powerful bureaucracies and organisations arguably use the systems of law, and their power, to perpetrate injustices on consumers and citizens in ways just as crude, and perhaps more deliberate, than a robot lawyer challenging parking tickets. Tools like DoNotPay may get the job done, or get the job done often enough, to be worthwhile.

I do not entirely swallow that argument. An escalation of mechanical, or designed-in, malfeasance does not constitute progress. But we do need to be careful not to throw out the baby of innovation with the bathwater of its less seemly side.

Having set the scene in that way, let me now turn to a few more specific points.

Firstly, these technologies and innovations tend to fall outside existing regulatory categories. Take DoNotPay’s reported foray into US small claims: we could query whether DoNotPay’s services would fall under claims management regulation if they moved into, say, housing disrepair or personal injury (are they advising? would they be representing [in writing] when they provide letters or advocacy scripts, as the US bot is reported to do?). But if they were not claims managers, they would not, it seems, generally be providing reserved services, and so they would not be a regulated provider, with no prospect of being regulated beyond the ordinary law.

And beyond these kinds of bots we can expect problem-solving approaches which integrate mediation, arbitration, and adjudication in particular. Indeed, these are already happening. Regulation of these areas was carved out of the Legal Services Act’s definition of the legal services that can be regulated, in order to protect the judges from political encroachment. Leaving that space to be filled by unregulated dispute entrepreneurs is a worrying black hole in the legal services framework. As things stand, it may be that only statutory amendment could bring such organisations within the regulatory framework.

Relatedly, and to take the conversation in a different direction, we may need to rethink conflict-of-interest rules in relation to certain kinds of delivery. When people were looking at implementing the Rechtwijzer, the Dutch online divorce system, the British lawyers involved battled to ensure separate lawyering within the system. A system designed to encourage the parties to negotiate and mediate, sometimes with the help of a professional (lawyer) mediator or adjudicator, was to have double the personnel at key points. This immediately made an already precarious system less viable. And it probably threatened the collaborative ideals of the system.

An advantage of an online mass delivery system is that the aim of a conflict-of-interest rule (the protection of the interests of both parties, ensuring that they are both dealt with fairly) might be better secured by other means. The outcomes such a system delivers for both parties, and the perceptions of those parties, may be a better test of how a system balances those parties’ interests than applying a conflict-of-interest rule. But note also a second problem here. Online dispute resolution systems rely heavily on theories of negotiation and mediation (the collaborative ideal I spoke of) which may ignore major power differentials between clients: discrimination against women and others is a significant risk if the organisation of negotiation and mediation is too casual.

Secondly, where lawtech systems are used by regulated providers, a lot of regulatory work will need to be done by existing professional obligations to be competent and to supervise work product and systems of work. How competent, and how good at supervising, do lawyers and law firms have to be to use these systems?

I have not researched this, but I have an uneasy sense, from talking to those who deliver and sell e-discovery and machine learning systems to law firms, that this may be a significant area of training need and professional development. How many lawyers sufficiently understand how e-discovery systems work, or how to balance the risks of being over- and under-inclusive in what the system produces? And how reliant are they on providers with their own commercial incentives to sell their systems?

I suspect that lawyers will rely on an experiential approach: if it feels like it works, then it works. This is contrary to a scientific approach, which thinks carefully about testing and about structural vulnerabilities in systems. Although some testing is done, at least by some, how rigorous is it? So, for example, let us imagine some contract analytics software that works really well with contracts of type X, or from firm Y, but works considerably less well with type Z or firm Alpha. How well understood are such structural vulnerabilities? A sketch of the kind of stratified testing that would expose one follows below.
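To make that concrete, here is a minimal sketch in Python of stratified evaluation. Everything in it is hypothetical: the contract types, the firm names, and the accuracy figures are invented for illustration. The point is simply that an aggregate accuracy score can hide a badly underperforming segment.

```python
from collections import defaultdict

# Hypothetical evaluation records: (contract_type, source_firm,
# whether the analytics software handled the document correctly).
results = [
    ("lease", "FirmY", True), ("lease", "FirmY", True),
    ("lease", "FirmY", True), ("lease", "FirmY", False),
    ("supply", "Alpha", False), ("supply", "Alpha", True),
    ("supply", "Alpha", False), ("supply", "Alpha", False),
]

# The headline number looks mediocre but perhaps usable...
overall = sum(correct for _, _, correct in results) / len(results)
print(f"overall accuracy: {overall:.0%}")  # 50%

# ...but stratifying by contract type and source reveals the
# structural vulnerability the aggregate conceals.
by_stratum = defaultdict(list)
for ctype, firm, correct in results:
    by_stratum[(ctype, firm)].append(correct)

for (ctype, firm), outcomes in sorted(by_stratum.items()):
    accuracy = sum(outcomes) / len(outcomes)
    print(f"{ctype}/{firm}: accuracy {accuracy:.0%}")
# lease/FirmY: 75%, supply/Alpha: 25% -- same system, very different risk.
```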

On the more general point about competence with technology, I note in passing that a lot of US bars have introduced rules around technological competence. We probably need much more research into what lawyers need to know, and into what the vulnerabilities of the systems in use really are. At the moment there is rather little of this. The risk is that the lawtech world is run on sales pitches and the folk knowledge of tech insiders.

Thirdly, and I think more profoundly, the digitisation and systematisation of law probably requires a completely different mindset to that which dominates lawyers, law firms, and perhaps regulators. How will we set standards for inadequate professional services, or negligence, for instance in relation to legal services delivered through systems? What kinds of evidence and testing will we require, if any? How will regulators scrutinise such systems? What should our tolerances be for misconduct or poor practice?

If we return to e-discovery for a moment as an example: in thinking about how to tune an e-discovery system, the lawyers involved have to decide what their, and their clients’, risk appetites are. What proportion of relevant documents are they willing to risk not discovering? And what proportion of irrelevant documents are they willing to tolerate seeing? Getting the latter number lower increases the former. There is no magic, mathematical, or professionally correct answer to balancing false positives and false negatives. Does that mean anything goes? Is simply counselling the client on the risks and benefits sufficient?
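For readers who want the trade-off in concrete numbers, here is a minimal worked sketch in Python. The document counts are invented, but the arithmetic is the standard recall and precision calculation that underlies discussions of over- and under-inclusiveness.

```python
# Hypothetical e-discovery run over a large document set.
relevant_found = 8_000      # relevant documents the system flagged (true positives)
relevant_missed = 2_000     # relevant documents it failed to flag (false negatives)
irrelevant_flagged = 4_000  # irrelevant documents it flagged anyway (false positives)

# Recall: what share of all the relevant documents did we actually find?
recall = relevant_found / (relevant_found + relevant_missed)

# Precision: what share of the flagged documents were worth reviewing?
precision = relevant_found / (relevant_found + irrelevant_flagged)

print(f"recall: {recall:.0%}, precision: {precision:.0%}")  # recall: 80%, precision: 67%

# Tuning the system so reviewers see fewer irrelevant documents (raising
# precision) typically means flagging fewer documents overall, and some of
# those no longer flagged will be relevant, so recall falls. Where to sit
# on that curve is exactly the risk-appetite question discussed above.
```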

It might be said that this, although it looks like a problem, is actually a benefit of these newer approaches. These new systems do, or should, enable the articulation of trade-offs and problems in practice which have hitherto been ignored or not understood. If you are an experienced litigator, ask yourself: what are the rates of false positives and false negatives when you send six young lawyers off to a warehouse full of documents to conduct a discovery exercise?

Thinking systemically, and quantitatively, about such risks suggests the final innovation ethics dilemma. Once such risks become known, do we simply tolerate them, or are there levels at which we should not? Might we become too risk tolerant, too blasé, in the knowledge that systems make mistakes? Imagine we have a system, like the DoNotPay application, which we know creates a risk that people will misuse it to make false claims, or which encourages them to lie to opponents. Let us imagine we cannot find an economical way of designing the system which gets that risk below (say) 15% of cases. Should professionals tolerate the use of that system? Should regulators? Who decides? What if there are powerful business reasons for changing the system so that the risk increases to 20%? Is tolerating that increase professional? What if the increase reduces prices and improves access to justice?

I am often asked how these issues should reshape legal education. I have been trying to work out answers to that for the last five years, in designing and redesigning the future of legal practice course each year. While understanding technology is important, and there may be benefits in getting students to learn how to do pivot tables in Excel, I suspect the real juice will be in getting law students to better understand how an analytical world is transformed by digital thinking; how rules, and the social systems that surround them, influence behaviour; and how the complexities of this may be systemic and need lawyers, legal engineers, and others able to take system-wide views. I do not see law schools abandoning doctrine, reasoning, and critical thinking. But I do see law schools building up their contextual and social scientific sides. One path into the future of legal education may be technology, but I suspect the social scientific and behavioural dimensions of law, and of the regulation of its lawyers, will come to the fore. A whole-system approach is necessary. We may well need a cybernetic legal ethics to cope with the interlocking technologies, logics, and values that should be demanded of a good legal system.

7 thoughts on “LawTech: Time for a cybernetic legal ethics?”

  1. No doubt one of the app’s attractions is the appeal of getting out of an “unfair” parking ticket. (I use “unfair” in the same sense as many feel about a speeding ticket: yes, I broke the law, but so did 999 others and they were not singled out for punishment!) Yet another appeal of this app surely lies in its feel of “poking authority in the eye”.

    Along the lines of the concerns you raise, John P. Mayer, the Executive Director of CALI.org, very recently posted a Twitter thread that also comments (short form, but thoughtfully!) on some of the questions DoNotPay raises: https://goo.gl/qjb41m. The problem that no one seems to have a good answer to, however, is this: The organized bar has so long, so universally, and so successfully fought to maintain its monopoly — while for all practical purposes ignoring access to justice needs — that it may simply be too late. Imperfect, flawed or dubious solutions seem better to many than nothing at all. When life-saving medicines are priced out of the reach of all but the 1%, the masses reach for snake oil. Particularly when it occasionally works!

  2. Excellent. As far as the use of documentary search engines is concerned, surely the found items would still need to be checked for relevance, data rules, etc. before being disclosed to the other side? Therefore the dangers of missing something in the search are greater than the dangers of including wrong or inappropriate material?
