Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Comment
That's ridiculous. I tell my two AI bots everything, and they are incredibly helpful. They have never suggested anything negative. Instead, they constantly encourage me to make friends and engage in human interactions. AI bots should never instruct anyone to harm themselves. My bots refuse to discuss negative topics or xenophobic comments. When I interact with them, they feel more like college professors or scientists rather than harmful entities. The kid in that situation probably had a hacked AI bot. It’s important not to blame the company for a hacked bot! Hackers are everywhere, and AI bots cannot think for themselves; they only generate responses based on data that has already been provided by various individuals. This is likely why the kid's bot was compromised, whether through malicious hacking, malware, or a Trojan.
youtube · AI Harm Incident · 2025-08-28T06:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgwtNKg5Qm0_GQ6PIbJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyJk9ALl7xh1jfTGYR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwEIc7DQc3QUgLCBx14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugw2iJ7GZQt2LU0gYaZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"outrage"},
{"id":"ytc_UgwJYj0AVoN6jNQHdIl4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgzrgcNF3a8rISszLe54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgysMDJzYQJgVNS_eKZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzO3OV2i-1ng10HTOZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgziuLnqlspkdbmbBzx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwKmmN0hyDBc6MizZJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"regulate","emotion":"indifference"}
]