The A.I. lawyer’s courtroom debut is a stunt, but it raises some big questions

January 10, 2023

Hello, it’s Fortune tech editor Alexei Oreskovic filling in for Jeremy this week.

They say the lawyer who represents himself in court has a fool for a client. But what to make of the person who represents himself with an A.I. lawyer?

In 2023, that curious question is not a mere hypothetical. On Monday, a company called DoNotPay said it will use an artificial intelligence bot to argue a case in traffic court. The company’s founder, Joshua Browder, told Politico that a defendant would surreptitiously wear an earpiece in court, feeding the discourse and proceedings to a remote A.I. bot. The A.I. attorney’s responses would then be piped into the ear of the defendant, who would parrot the legal arguments as if they were their own.

No doubt, Browder’s courtroom caper is more a self-serving stunt than a legal game-changer. But it got me thinking about the role that A.I. might one day play in the legal system. And while the notion of A.I. advocates is fun to imagine, the more consequential role of A.I. technology might be with adjudicators—the judges and juries who decide things.

It sounds dystopian (and given the many biases and flaws that we know plague A.I. today, it should). But it’s not outlandish. After all, A.I. algorithms already serve as judges in many aspects of our lives—they decide what content you see, and don’t see, in your social media feed; what route you take when you put directions in a mapping app; and whether it’s your face or another’s when you try to unlock your smartphone. Some insurance companies now use A.I. to make an initial determination about whether a claim is legitimate or fraudulent.

So why not legal cases?

Imagine if a jury were composed of 12 bots, each trained and calibrated to have a different blend of distinct values and views. One algorithmic trait might favor classic liberal social values, another might favor gun ownership rights, and another might lean libertarian. Some algorithms might favor punishment, while others favor rehabilitation.

To prevent canny lawyers from gaming the system the way an SEO expert games Google’s search algorithm, the values would have to be scrambled and assigned to the 12 juror bots before each case, with thousands, or millions, of possible permutations.
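To get a feel for the numbers, here is a toy sketch of that scrambling step. The pool of value profiles is entirely hypothetical (invented names for illustration), but the arithmetic holds: with just 12 distinct profiles shuffled across 12 juror seats, the number of possible orderings is 12 factorial, roughly 479 million, which is well beyond what any lawyer could game in advance.

```python
import math
import random

# Hypothetical pool of value profiles a juror bot could be calibrated with.
# These names are invented for illustration only.
VALUE_PROFILES = [
    "liberal_social", "gun_rights", "libertarian", "punitive",
    "rehabilitative", "strict_textualist", "pragmatist", "privacy_first",
    "deference_to_experts", "skeptic_of_state", "egalitarian", "traditionalist",
]

def empanel_jury(profiles, seed=None):
    """Randomly assign one distinct value profile to each of 12 juror seats."""
    rng = random.Random(seed)
    return rng.sample(profiles, k=12)

jury = empanel_jury(VALUE_PROFILES)
print(jury)  # one random ordering out of 12! possibilities

# With 12 distinct profiles and 12 seats, the number of orderings is 12!:
print(math.factorial(12))  # 479,001,600
```

A larger profile pool makes the space bigger still: drawing 12 seats from, say, 20 profiles multiplies the count by orders of magnitude.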

Or consider the nation’s highest court.

The biggest criticism of the Supreme Court’s justices these days is how human they are, allowing their petty personal politics, prejudices, and grievances to seep into their jurisprudence. A bench of A.I. justices could replace such fallible egos with the dispassionate logic of the machine.

This is all super fanciful, of course, and unlikely to occur anytime soon, if ever. More realistic is A.I.’s gradual expansion to take on more decisions in other realms of our lives, from loan-making to medical treatment. It’s not as high-profile as an A.I. Supreme Court, but probably no less deserving of our awareness and discussion.

With that, here’s the rest of this week’s A.I. news.


Microsoft’s big bet on OpenAI. The software giant is exploring an investment of as much as $10 billion in OpenAI, sources told Fortune. OpenAI is the San Francisco firm behind red-hot “generative A.I.” products such as ChatGPT and DALL-E 2. Microsoft’s investment is part of a complex transaction that could take place over several years, and which would allow Microsoft to collect 75% of OpenAI’s profits until it earns back its investment, according to the news site Semafor. While Microsoft had previously invested $1 billion in OpenAI, the new investment would eventually give Microsoft a 49% stake in the company. Generative A.I. tech is all the rage these days, and there is speculation that Microsoft could plug OpenAI’s tech into its Bing search engine, and throughout its catalog of software products.

The Department of Justice touts machine learning in Meta settlement. Facebook-parent Meta is using a newly created A.I. system to ensure that housing and employment ads on the social network do not discriminate. According to a report in the Wall Street Journal, Meta’s new “Variance Reduction System” was developed in collaboration with the DOJ and federal housing officials. It’s part of Meta’s settlement of the DOJ’s charges that the company targeted housing ads at users based on characteristics like race, religion, familial status, and national origin, in violation of federal law. “This groundbreaking resolution sets a new standard for addressing discrimination through machine learning,” the DOJ said in a press release.

Even A.I. startups can’t escape the pink slips. The artificial intelligence industry may be hot, but it’s not immune to the pain of layoffs. On Monday, Scale AI announced that it was laying off 20% of its staff. Scale AI plays an important role in the industry by helping companies label and curate the data that A.I. applications rely on. The 600-person company was last valued at more than $7 billion by investors including Tiger Global and Founders Fund. But in a refrain that sounded all too familiar on Monday, Scale AI CEO Alexandr Wang apologized for failing to anticipate a post-pandemic business slowdown and making the decision “to grow the team aggressively in order to take advantage of what I thought was our new normal.”


Microsoft’s potential investment in OpenAI grabbed headlines on Tuesday, but there’s another recent A.I. development from the Windows maker that shouldn’t be overlooked.

In a paper published this month, Microsoft researchers detailed advances in text-to-speech technology that can imitate a specific person’s voice. Microsoft has dubbed the “neural codec language model” it developed VALL-E (a riff on OpenAI’s DALL-E image generator, and perhaps a reference to the so-called uncanny valley, the unease that hyper-realistic digital characters can trigger in humans).

The technology was trained on an audio library compiled by Meta that includes 60,000 hours of speech from 7,000 speakers, according to Ars Technica. But what’s really impressive is that VALL-E can then be used to mimic anyone’s voice using just a 3-second audio clip of the person talking. You can listen to some samples of VALL-E here.

Microsoft is not releasing the tool to the general public at this time, acknowledging the “potential risks in misuse of the model, such as spoofing voice identification or impersonating a specific speaker.”

Alexei Oreskovic