AI Victim Statement

For two years, Stacey Wales kept a running list of everything she would say at the sentencing hearing for the man who killed her brother in a road rage incident in Chandler, Ariz.

But when she finally sat down to write her statement, Wales was stuck. She struggled to find the right words, but one voice was clear: her brother’s.

“I couldn’t help hear his voice in my head of what he would say,” Wales told NPR.

That’s when the idea came to her: to use artificial intelligence to generate a video of how her late brother, Christopher Pelkey, would address the courtroom and specifically the man who fatally shot him at a red light in 2021.

Death must have been a sweet release for this poor dude. His family couldn’t be assed to say a few genuine words over his death and outsourced the job to AI.

There are whole swaths of the population outsourcing their basic human functions to AI.

That a judge allowed this travesty in an actual courtroom is even more of a travesty.

18 thoughts on “AI Victim Statement”

  1. NPR is propaganda central, though. It’s worth thinking through the implications of NPR pushing AI outsourcing of personal communications generally, and of courtroom statements in particular.

    That’s not a good direction.


    1. …and let’s not forget: using AI to resurrect simulacra of dead people. I don’t even want to think about where that’s going, given that I quit NPR over their nonstop, totally relentless push to normalize, and desensitize the public to, every sexual fetish imaginable.

      Next stop: AI necrophilia.


      1. “using AI to resurrect simulacra of dead people.”

        For the noble purpose of ~~checks notes~~ forgiving black murderers of white people.


      1. AI trials: Evidence is fed into AI, which determines the verdict.

        How is this worse than “AI determines the disease and the course of treatment”?

        Or “AI determines the remaining years of life and the cost of health insurance, if one is permitted to buy it at all”?


        1. Disease is biological and objective. Justice is human, subjective and very culturally and historically circumscribed. A heart attack or a tumor is always that, at any time in history and in any culture. But what’s considered crime and what mitigates it varies dramatically.

          As an example, killing under the influence of alcohol was considered an aggravating factor in the USSR. But it’s considered a mitigating factor in the US. I was really stunned when I found out because that’s so different from what I always knew.


          1. “Justice is human, subjective and very culturally and historically circumscribed. … But what’s considered crime and what mitigates it varies dramatically.”

            First of all, the main aggravating and mitigating factors can be easily fed into AI.

            Even more importantly, people will argue that AI is the only entity capable of dispensing true justice, impartially enforcing laws free from human emotions and prejudices. For the first time in human history, justice will be blind.

            One could hope woke people would help you fight this trend after seeing the disproportionate effect of AI verdicts on people of color, yet I am not optimistic, both because AI can be programmed to treat minority status as a mitigating factor and because it’s much cheaper than human judges.

            It’s already a reality btw:

            Elon Musk has confirmed that Grok 3 will include all court cases in its training set, enhancing its ability to analyze and provide legal insights.

            Quoting Musk, “Grok can already do this. With Grok 3, we are adding all court cases to the training set. It will render extremely compelling legal verdicts.”

            Or this article:

            “This Verdict was Created with the Help of Generative AI…?” On the Use of Large Language Models by Judges

            In selected jurisdictions such as the USA, China, and Singapore, the integration of AI into judicial proceedings has become a tangible reality. Algorithms play a pivotal role in supporting decision-making processes during the formulation of judicial verdicts. Large Language Models (LLMs), such as ChatGPT, are also being increasingly used by judges, often in circumstances where the legal parameters surrounding their usage remain unclear. This comment delves into the realm of interdisciplinary research inquiries surrounding the application of AI in the judiciary. It begins by exploring how judges currently use ChatGPT in rendering verdicts within cases that have been publicly disclosed so far. 

            2 Use of LLMs by Judges to Date

            In February 2023, for the first time worldwide, a judge publicly acknowledged using ChatGPT in arriving at a judicial decision. The judge, situated in the Colombian city of Cartagena, was presiding over a case concerning whether a health insurance provider should cover the entirety of expenses for the treatment of an autistic child. The judge raised a total of six queries to the Large Language Model, all of which have been documented in the final verdict, including the following:

            https://dl.acm.org/doi/full/10.1145/3696319


            1. AI is hardly free from prejudice. It’s a lefty propaganda machine. Every time I use it, I have to fight to keep it from giving me lists of propaganda points.


              1. There’s another issue, much more important than accuracy. AI in court decisions and all other avenues of life is a technocratic libtard dream. And here’s why:


  2. It’s strange, and maybe dehumanizing, to me to make an AI statement putting words in the victim’s mouth at the victim impact hearing of the guy who murdered him, and then to have this AI mouth “I’d forgive you,” even if his family is right that he’d say that. The judge loved it and added another year and a half to the murderer’s sentence over what the prosecution asked for.


  3. There’s a very interesting lawsuit filed against Meta/Facebook, which I hope is successful. Using AI to destroy your political enemies is so effective because there’s always plausible deniability. “Sorry, it was the algorithm.”

    And that needs to stop.

    Robby first discovered the issue when a Harley Davidson dealership shared a screenshot of META AI’s description of Robby, which was filled with falsehoods, including:

    – Jan 6 criminal and participated in a riot (plead guilty)
    – Holocaust denier
    – Suggested authorities take his children away in favor of giving them to somebody who would be more accepting of DEI and transgenderism for children

