Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I agree that AI making mistakes is understandable, but I find it EXTREMELY problematic that it can be influenced in such a way that it blatantly lies if it benefits OpenAI (or any other AI company) to do so. This opens a huge door to AI being untrustworthy because you can never trust it being factual and simply running off of data, which is one of the main points of AI. I truly hope this story gets more publicity, especially regarding the part of ChatGPT denying it had ever happened
Source: youtube · Topic: AI Harm Incident · Posted: 2025-11-25T17:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgzUun5RlN2ZPm6zaqR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugze31M3bq6QGnWknfB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz_R_GM_4EmBnCCS794AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwT6qJIHczk0mWXVLF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwFTAp_724pNK2ahsZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyLDpsV3oY45imOJXl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxQ7OqcZoZFRdI-dqx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxfwwRatTXMl0yCmQp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgxEd38gvbJsy6ZG8Dd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgxqkrYVZ-lztvRHo0R4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
```
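A raw LLM response like the one above should be validated before its codings are stored, since the model can emit malformed records or out-of-vocabulary labels. The sketch below is a minimal validator, assuming the allowed values per dimension are exactly those visible in this sample output; the full codebook may define additional categories, and the helper name `validate_codings` is hypothetical, not part of the tool shown here.

```python
import json

# Allowed values per coding dimension, inferred from the sample LLM
# responses above (assumption: the real codebook may include more labels).
SCHEMA = {
    "responsibility": {"company", "developer", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"liability", "regulate", "none"},
    "emotion": {"outrage", "fear", "approval", "indifference",
                "resignation", "mixed"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records:
    a non-empty id plus an allowed value for every dimension."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not rec.get("id"):
            continue  # drop records missing a comment ID
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid

# Illustrative input: one valid record, one with an unknown label.
raw = json.dumps([
    {"id": "ytc_example1", "responsibility": "company",
     "reasoning": "consequentialist", "policy": "liability",
     "emotion": "outrage"},
    {"id": "ytc_example2", "responsibility": "bogus_label",
     "reasoning": "mixed", "policy": "none", "emotion": "fear"},
])
print(len(validate_codings(raw)))  # 1
```

Rejected records can then be logged and re-queued for a retry prompt rather than silently written to the database.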