10 August 2017

openDemocracy: Do we still need human judges in the age of Artificial Intelligence?

Before going any further, we should distinguish algorithms from Artificial Intelligence. In simple terms, an algorithm is a self-contained set of step-by-step instructions, and algorithms are already being applied in judicial decision-making. In New Jersey, for example, the Public Safety Assessment algorithm supplements judges' bail decisions by using data about a defendant to estimate the risk of granting bail. The idea is to help judges be more objective, and to increase access to justice by reducing the costs associated with complicated manual bail assessments. A minimal sketch of how such a points-based risk score might work is given below.
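To make the idea concrete, here is a minimal sketch of a points-based risk score. The factor names, weights, and thresholds are invented for illustration; they are not the real Public Safety Assessment instrument, which weighs its own, different set of factors.

```python
# Illustrative points-based pretrial risk score.
# All factor names, weights, and thresholds here are invented for
# illustration; this is NOT the real Public Safety Assessment.

def risk_score(defendant: dict) -> int:
    """Sum simple points over a few hypothetical risk factors."""
    score = 0
    if defendant.get("prior_failure_to_appear", 0) > 0:
        score += 2
    if defendant.get("pending_charge_at_arrest", False):
        score += 1
    # Cap the contribution of prior convictions at 3 points.
    score += min(defendant.get("prior_convictions", 0), 3)
    return score

def recommendation(defendant: dict) -> str:
    """Map the raw score to a coarse flag a judge can review."""
    return "release" if risk_score(defendant) <= 2 else "further review"

print(recommendation({"prior_convictions": 1}))  # -> release
print(recommendation({"prior_failure_to_appear": 2,
                      "prior_convictions": 4}))  # -> further review
```

The point, as in New Jersey, is that the output is advisory: the judge sees a consistent, data-driven flag but still makes the final decision.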

AI is more difficult to define. People often conflate it with machine learning, which is the ability of a machine to learn patterns from data and then apply those patterns to new data without being explicitly programmed. Deep learning techniques take this further, ingesting enormous amounts of data and using layered neural networks to approximate human decision-making. AI subsumes machine learning, but the term is also sometimes used to describe a futuristic machine super-intelligence that is far beyond our own.
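To illustrate what "learning without being explicitly programmed" means, here is a minimal sketch of a perceptron, one of the simplest machine-learning models. The toy dataset, learning rate, and number of passes are arbitrary assumptions; the point is that the decision rule is fitted from examples rather than hand-coded.

```python
# Minimal sketch of learning from data: a perceptron adjusts its weights
# from labelled examples instead of following hand-coded rules.
# The toy dataset and learning rate are arbitrary illustrative choices.

examples = [((0.0, 0.0), 0), ((0.0, 1.0), 0),
            ((1.0, 0.0), 0), ((1.0, 1.0), 1)]  # (features, label) pairs

w = [0.0, 0.0]  # weights: learned, not programmed
b = 0.0         # bias term
lr = 0.1        # learning rate

for _ in range(20):  # repeated passes over the examples
    for (x1, x2), label in examples:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = label - pred        # how wrong was the guess?
        w[0] += lr * err * x1     # nudge the weights toward the answer
        w[1] += lr * err * x2
        b += lr * err

# The fitted rule now classifies the inputs correctly:
print(1 if w[0] * 1.0 + w[1] * 1.0 + b > 0 else 0)  # -> 1
```

A deep neural network stacks many such learned units in layers, which is what lets it pick out far subtler patterns; the learning principle is the same.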

The idea of AI judges raises important ethical issues around bias and autonomy. AI programs may incorporate the biases of their programmers and the humans they interact with. For example, a Microsoft AI Twitter chatbot named Tay became racist, sexist, and anti-Semitic within 24 hours of interactive learning with its human audience. But while such programs may replicate existing human biases, the distinguishing feature of AI over an algorithm is that it can behave in surprising and unintended ways as it ‘learns.’ Eradicating bias therefore becomes even more difficult, though not impossible. Any AI judging program would need to account for, and be tested for, these biases. [...]

The AI judge was able to analyze existing case law and deliver the same verdict as the European Court of Human Rights (ECHR) 79 per cent of the time, and the analysis found that the ECHR's judgments actually depended more on non-legal facts around issues of torture, privacy, fair trials and degrading treatment than on legal arguments. This is an interesting finding for legal realists, who focus on what judges actually do over what they say they do. If AI can examine the case record and accurately decide cases based on the facts, human judges could be reserved for higher courts where more complex legal questions need to be examined. The sketch below shows how such verdict prediction can be framed as text classification. [...]
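In that spirit, here is a hedged sketch of verdict prediction framed as text classification. The underlying ECHR study reportedly trained a support vector machine on n-gram features of the case text; the two training "cases", their labels, and the model choice below are toy assumptions for illustration.

```python
# Hedged sketch: predicting a verdict from the facts of a case, framed
# as text classification. The corpus and labels are invented toys; a
# real system would train on thousands of published judgments.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Tiny invented corpus: a "facts" section paired with an outcome.
facts = [
    "applicant detained without warrant and denied access to a lawyer",
    "applicant received a full hearing with counsel and a timely judgment",
]
outcomes = ["violation", "no violation"]

# Bag-of-words features feeding a linear classifier.
model = make_pipeline(CountVectorizer(), LinearSVC())
model.fit(facts, outcomes)

# Classify an unseen (also invented) fact pattern.
print(model.predict(["applicant held without a lawyer or warrant"]))
# -> ['violation']
```

Because the model works from the factual record alone, a pipeline like this can also be inspected to see which words, and hence which facts, drive its predictions, which is precisely the legal realists' claim about what drives outcomes.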

Even so, AI judges may not solve classical questions of legal validity so much as raise new questions about the role of humans, since, if we believe that ethics and morality in the law are important, then they necessarily lie, or ought to lie, in the domain of human judgment. In that case, AI may assist or replace humans in lower courts, but human judges should retain their place as the final arbiters at the apex of any legal system. In practical terms, applied to the perspective of the American legal theorist Ronald Dworkin, for example, AI could assist with examining the entire breadth and depth of the law, but humans would ultimately choose what they consider the morally superior interpretation.
