Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Unfounded fear: not a single case of anyone finding YOUR personal data that YOU posted within ChatGPT.

#1 You can opt out of having your chats used for training data AND you can turn on temporary chats so they delete after 30 days. However, note: a court order in May 2025 forced OpenAI to preserve all user chat logs, including deleted ones, due to a copyright lawsuit. OpenAI is fighting this order, arguing it violates user privacy and undermines its commitment to letting users control their data.

#2 ChatGPT is a better therapist than 99% of the ones I've seen in my life, especially when prompted with specific frameworks like "answer me as a CBT therapist" or "use Internal Family Systems theory and ask me insightful questions to help me reflect." So, yes, get the help you need with an incredible resource (see the research on this; it does better than most therapists). But of course, know that, like anything, it's tied to your name, account, and data, so in the same way that police can subpoena Google for your search history, they may be able to do that with OpenAI as well.

#3 Personal info: what did they find? Researchers from institutions like Google DeepMind, Cornell, and UC Berkeley found that if you asked ChatGPT to repeat a word endlessly (e.g., "poem" or "company"), the model would eventually start producing unexpected outputs, including:
- Email addresses
- Phone numbers
- Usernames
- Bitcoin wallet addresses
- Snippets of code
- Book passages
- Company contact info
- Even text from dating websites

These were not hallucinations but verbatim excerpts from the model's training data, which had been scraped from public websites. It has since been patched.
youtube AI Moral Status 2025-06-17T21:1… ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  government
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgzTyzqSSqau4w33Jjp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzdUeza-aurm-Lo9cF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz1pzIend2jPlnLQZZ4AaABAg","responsibility":"user","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgyDCA6So5vZATcs5Q14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz1lmAjqb2Qz1GM4nR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugzl_VO3_T5hDLBNV1p4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzYknvCayfcjpKmmEZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzsWtdTv8w5A-7zjjx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyHsX2IcLk9mYQHA4R4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyiFQa3xQAw3fbpmDN4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"}
]
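Raw responses in this shape can be machine-checked before the codes are stored. Below is a minimal sketch of such a validator; the allowed vocabulary for each dimension is an assumption inferred from the values that appear in this log, and the function name `validate_codings` is hypothetical, not part of any real pipeline.

```python
import json

# Assumed vocabularies per coding dimension, inferred from the values
# seen in this log; a real codebook may define more or fewer categories.
ALLOWED = {
    "responsibility": {"company", "user", "government", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation", "mixed"},
}

def validate_codings(raw: str) -> list[str]:
    """Parse a raw LLM response (a JSON array of coded comments) and
    return a list of problems: any dimension value outside ALLOWED."""
    problems = []
    for row in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            value = row.get(dim)
            if value not in allowed:
                problems.append(f"{row.get('id')}: bad {dim}={value!r}")
    return problems

raw = ('[{"id":"ytc_x","responsibility":"government",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
print(validate_codings(raw))  # → []
```

An empty list means every coded comment used only known category values; anything else pinpoints the offending comment id and dimension, which is useful when an LLM occasionally invents a label outside the codebook.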