Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_UgwlpRQ6L…: it was 2019 autopilot (6 years ago!), worthless 60 minutes bs. What was the guy …
- ytc_UgzaPjIGa…: The people who say that art requires "blue blood" just have too narrow a definit…
- ytc_UgyAiSlUT…: I went to visit family in San Francisco and it's sad to see all these Waymo cars…
- ytr_UgwJImAiM…: @lolandie the level of of hypocrisy you’re showing is astounding. AI engineers c…
- ytc_UgytlE-O7…: THEY have airplane, and helicopters AI systems at this same point right now as y…
- ytr_Ugzw2Ptrf…: Ai billionaires & Corporate Media bought the presidency of a convicted criminal…
- ytc_UgxiC1hh6…: You see end of the world. I see a more efficient wife. When I tell her to do som…
- ytc_UgysNTFQW…: Forget about the worries of AI what about the worries of Hybrid AI, When we have…
Comment
Those poor billionaires can't afford to test their systems properly in closed environments, whether it's chatbots or self-driving cars.
While shocking, none of it is surprising. Chatbots act similarly to fortune tellers etc. in that they gather your information and extrapolate from it.
I'm pretty sure they'll have safeguards when it comes to terrorism/amok runs/assassination (esp. of politicians in charge), because it's a small market and would have enough impact to get them canceled.
In other words, they could have prevented this very easily and referred to a suicide prevention hotline; there was a calculated choice against it.
If you look at Sora for example, they had the safeguards ready to prevent MLK and others from being used in outrageous content. They just implemented them after the calculated outrage.
youtube · AI Harm Incident · 2025-11-07T22:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugzx7F1iQA9ibpWraUh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxFLvVKNBNTzjY0OG14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzKAZy5fHaF9SpUOXp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgwbihkOBMfRbi6xa2t4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyfT_MdOPaFwXQeCyx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy49EU2CxS1uUcLBch4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxtjCsMkaAP3x90tGx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz6cCovNlLLweF644V4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugwf_v-TTN_DgJk0_794AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyxF7LFmxiU3Z9BeaJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}
]
```
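The "Look up by comment ID" feature over a raw response like the one above can be sketched in a few lines: parse the model's JSON array and key each coded record by its `id`. This is a minimal illustration, not the tool's actual implementation; `index_by_id` is a hypothetical helper name, and the two sample records are copied from the response shown.

```python
import json

# Two records copied verbatim from the raw LLM response above.
RAW_RESPONSE = """
[
  {"id":"ytc_Ugzx7F1iQA9ibpWraUh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyxF7LFmxiU3Z9BeaJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse the model's JSON array and key each coded record by its comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(RAW_RESPONSE)
print(codes["ytc_UgyxF7LFmxiU3Z9BeaJ4AaABAg"]["emotion"])  # prints: resignation
```

A dict keyed by ID makes the lookup O(1), which matters once a page holds many coded batches rather than the ten records shown here.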