
While the thought of a robot judge seems like a far-off dystopian future, it’s closer to becoming a reality than you might think.

Artificial intelligence is advancing rapidly and will soon be incorporated into the workforce, including the legal system. In many cases, people blame a lack of resources for not having a better lawyer, or argue that a different jury or judge might have changed the outcome, often chalking it up to the classic “human error”. But what if a robot decided your fate? It’s not as far-fetched a concept as it sounds – legal research and paralegal positions are expected to become obsolete in the next 10 years due to advances in artificial intelligence. At that rate of growth, it’s not unlikely that the next step could be replacing lawyers or judges.

According to Tom Girardi, a lawyer who spoke with Forbes: “It may even be considered legal malpractice not to use AI one day… It would be analogous to a lawyer in the late twentieth century still doing everything by hand when this person could use a computer.”

Humans are flawed, and this extends to judges and jurors. While the courtroom is supposed to focus on the facts and remain objective, biases can still linger subconsciously. How would defendants feel about it? One experiment found that people were more honest with a non-judgmental machine deciding their fate than with a potentially judgmental person, something that could matter in routine courtroom procedures like cross-examination.

Another consideration for implementing AI in the legal system is economics.

“If a lawyer can use AI to win a case and do it for less than someone without AI,” said Girardi, “who do you think the client will choose to work with next time?”

It isn’t assured that robot judges will work. The technology still needs to be developed and tested. AI, like humans, can make errors, an issue former British Prime Minister Theresa May raised a year ago at the Davos Forum.

“Make the most of AI in a responsible way, such as by ensuring that algorithms don’t perpetuate the human biases of their developers.”

She went on to point out an example of the errors AI currently makes.

“People of colour are more likely to trigger a ‘false positive’ match than white people on facial recognition software, which means they are more likely to be subjected to a wrongful police stop and search.”

There is, however, a major downside to AI judges. If someone is wrongfully sentenced by a robot, it will likely be extremely difficult for them to plead their innocence. Assumed to be free of bias and human error, the robot’s judgement would likely not face the scrutiny a human’s would, something that could become a real danger to those heading into the courtroom.
