Missouri Arrest Shows ChatGPT Chats Can Be Used as Evidence in Criminal Cases

A Missouri teenager’s conversation with ChatGPT — in which he admitted to damaging vehicles — played a role in his arrest, highlighting a growing legal trend: AI chatbot conversations are not private and can be used as evidence in criminal investigations.
In late August 2025, authorities in Greene County, Missouri, charged a 19‑year‑old Missouri State University student, Ryan Schaefer, with vandalizing 17 vehicles in a campus parking lot. After his arrest, police obtained his phone with his consent and discovered a ChatGPT conversation in which he openly described damaging cars and asked whether he could go to jail for those acts.
According to reports, the police statement quoted excerpts from the ChatGPT session — references to the parking lot and admissions of wrongdoing — which investigators used alongside cell‑location data to build probable cause.
Why AI Chats Can Be Used as Evidence
Despite how casually many people treat AI chatbots, these tools offer no legal confidentiality or privilege (unlike communications with lawyers, which are protected by attorney‑client privilege). Courts treat chatbot queries much like email, text messages, or browser searches, provided they are collected through proper legal procedures such as warrants or consent.
A criminal defense attorney told reporters that AI prompts can be admissible evidence in U.S. courts — and that users who are in legal trouble should disclose any chatbot activity to their lawyers immediately. AI interactions may reveal intent, planning, or admissions just like traditional digital records.
Legal experts agree that because these tools are publicly accessible and non‑privileged, conversations with them can be subpoenaed or obtained by law enforcement under the same rules that govern other digital communications.
What This Means for Privacy and Criminal Law
Many people assume AI chats are private or ephemeral, but recent reporting shows AI providers may retain and preserve logs when required by court order — meaning even deleted conversations can be subject to legal preservation.
Legal analysts emphasize:
- AI chats typically are not protected by doctor‑patient or attorney‑client privilege;
- Conversations stored by the platform can be discovered, subpoenaed, and entered into evidence;
- Users should never assume their AI interactions are confidential.
These realities align with broader digital evidence law: courts have long treated digital communications — from emails to text messages to search engine inquiries — as admissible when relevant to a crime. AI chat logs now fall into that same category.
Context: Legal and Tech Industry Takeaways
Other recent legal reporting confirms that AI‑generated content is already being introduced in legal proceedings. Judges in at least one jurisdiction have ruled that AI chatbot responses can be admissible if properly authenticated, just like any other electronic record.
Lawyers warn that treating chatbots as substitutes for professional legal advice is dangerous: these tools lack not only legal privilege but also the ethical obligations and confidentiality duties that bind human attorneys.
Privacy experts also caution that users sometimes share far more detail in AI chats than in typical internet searches, creating a richer evidentiary record that can be used in investigations.
What You Should Know Before Talking to AI About Sensitive Issues
- Chatbot interactions are not legally privileged — unlike conversations with licensed lawyers, therapists, or medical professionals.
- Anything you write can be preserved and used if requested by law enforcement via legal orders.
- Law enforcement can sometimes collect AI interactions alongside other digital evidence such as location data to corroborate criminal conduct.
- Legal experts recommend disclosing AI usage to your attorney immediately if you are under investigation for anything related to those interactions.




