Arrested Man Learns ChatGPT Isn’t His Lawyer So Much As It’s Evidence
Police just added a new weapon to their arsenal: incredibly stupid people being way too comfortable confessing their secrets to the robot in their pocket.
When tech bro evangelists sell the world on the productivity-accelerating power of the technological terrors they’ve constructed — despite their sad devotion to the large-language hype train not helping them conjure up measurable productivity gains — they hype cancer cures and a future without junior associates. Instead, they’ve built a Robo-Diary where dumb criminals can write, “will I go to jail for smashing up these cars?” It’s a slightly slicker Magic 8-Ball, and all it cost was a 267% increase in electricity prices.
According to OzarksFirst, authorities have charged a teenager with vandalizing 17 cars in the Missouri State University parking lot. But Ocean’s Eleven, this was not, as the kid decided to spend the evening chatting away with ChatGPT about the vandalism, essentially drafting his own confession in the style of a late-night therapy session with HAL 9000. This proved a sub-optimal strategy, as Miranda does not provide the right to ask a stochastic parrot whether smashing a Camry is a felony.
The SPD also later reviewed data from Schaefer’s phone, which placed the phone near the parking lot at 2:49 a.m. on the night of the vandalism and later near his apartment at 4:04 a.m., the statement says.
Additionally, the statement also details a ChatGPT conversation recovered from Schaefer’s phone.
The ChatGPT exchange began around 3:47 a.m. on Aug. 28, about 10 minutes after the vandalism allegedly ended.
In the chat, the user — identified by the SPD as Schaefer — described damaging vehicles and asked if he could go to jail. The statement includes multiple excerpts in which the user admitted to “smash(ing)” cars, referenced MSU’s parking lot and made violent statements.
The statement says ChatGPT urged the user to “seek help.” The messages stopped later that morning.
Astounding. Remember when people used to warn teenagers that anything they put on Facebook would follow them forever? Did all that energy just evaporate when Facebook rebranded as Meta and tried to build a bargain-bin Second Life? But instead of drunk dorm photos, it’s “Dear ChatGPT, today at approximately 3:32 a.m., I killed Mr. Boddy in the Conservatory with the Lead Pipe, please format this for an eventual affidavit.”
Much like the rise of case cite hallucinations, the problem here isn’t technological; it’s psychological. It’s not ChatGPT’s fault, unless you assign highly indirect blame to the product for seducing people into indulging their existing bad impulses. ChatGPT doesn’t fill the filed brief with fake cases; a human lawyer did that because they thought they could get away with not following up on the research spit out by glorified autocomplete. By the same price per token, it’s not ChatGPT’s fault that a vandal would think their phone can replace a lawyer (or a priest).
Sam Altman already pointed out the technology lacks any form of privilege. “If you talk to a therapist or a lawyer or a doctor about those problems, there’s legal privilege for it,” Altman said back in July. “There’s doctor-patient confidentiality, there’s legal confidentiality, whatever. And we haven’t figured that out yet for when you talk to ChatGPT. I think that’s very screwed up. I think we should have the same concept of privacy for your conversations with AI that we do with a therapist or whatever.”
Counter: No, we absolutely should not.
Lawyers and therapists and priests trigger privileges because they are human professionals and, as a society, we see a value in encouraging people to be candid with them. By contrast, we need people to be a whole lot less candid with their AI bots. The family of a child who died by suicide is already suing OpenAI alleging that the bot crowded out support networks and discouraged seeking professional help. We need to do everything possible to dissuade people from thinking AI can replace trained professionals.
The AI people want users to believe their conversations are privileged because the industry runs on surveillance capitalism. Every keystroke is data, and data is product. They want you to tell them that you robbed a bank so they can target ads for bus tickets to Zihuatanejo. Or at least use it to train a future Agentic AI to respond to “I plan to commit a robbery” by generating a workflow, tracing out all the steps, performing several research projects and then… telling the user about “10 famous people named Rob,” which, based on multiple studies, would be a remarkably accurate depiction of how Agentic AI actually performs.
In any event, we shouldn’t let these companies dupe more people into thinking AI replaces professionals. It streamlines some key workplace tasks, and it’s genuinely good at that. But it’s not a replacement for human judgment, and we should hold the line against giving anyone any more reason to think it can be.
ChatGPT, cell data help arrest Springfield teen for MSU parking lot vandalism [OzarksFirst]