Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "No. It learns nothing like a human. At all. The facts are clear. These programs …" (ytr_UgzhxHMkv…)
- "Predictive AI do the same predictions that a normal police detective would do bu…" (ytr_UgyfbmTBC…)
- "It's all about the AI software processing of these data points collected by the …" (ytc_UgwKiwx4E…)
- "Umm, if AI can do the labor, why are we still obsessed with jobs instead of fixi…" (ytc_UgyEHVsYP…)
- "If they replace everyone who’s gonna pay these companies for their products 😂🤷🏻…" (ytc_UgydNXmzJ…)
- "Your constant cuts make me question the editing... You clipped him 3 times in 5 …" (ytc_UgyUeev8g…)
- "I asked AI a question today (involving creating an acronym) and it gave me an in…" (ytc_Ugz3VZW3i…)
- "@HunterCulpepper What tells you "@HunterCulpepper is stupid as most ai bros" is …" (ytr_UgzcCqdeB…)
Comment
1:08 I believe the default conversational tone of chatbots contribute to this case. It creates inflated sense of correctness within the user, making them dangerously confident. It can be adjusted in the setting or by giving it explicit instruction to not be overly affirming ("be rigorous, cold, and strictly factual"). This requires some kind of metacognition. One should be aware all the time of what is going on in their mind when talking to AI.
youtube
AI Harm Incident
2025-11-25T06:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgwbttzAuoQYpQo9lZp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzbEPZRqR1orImdl8p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwe_A6vRq88_CM47gl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyZyu0P6NZPNTEySgB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzNcpyq6bDTKEX2iGV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy8YZGscjqVy6R84r14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxdFCnMh0GfS_9U6uZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgwjBlQqWhOjM49rpS54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgylNcCS9u-SyLriYVV4AaABAg","responsibility":"investors","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzxk2o_wfjJ2gx1qaR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]
```
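The "look up by comment ID" step above can be sketched as follows: parse the raw batch JSON and index each coding by its `id`, so a single comment's four dimensions (responsibility, reasoning, policy, emotion) can be retrieved directly. This is a minimal sketch, not the tool's actual implementation; `index_codings` and the shortened two-row sample are illustrative, with IDs and values copied from the raw response shown above.

```python
import json

# Illustrative two-row slice of a batch response, mirroring the format above.
raw = """
[
 {"id": "ytc_UgwbttzAuoQYpQo9lZp4AaABAg", "responsibility": "ai_itself",
  "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
 {"id": "ytc_Ugy8YZGscjqVy6R84r14AaABAg", "responsibility": "company",
  "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"}
]
"""

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def index_codings(raw_json: str) -> dict:
    """Parse the batch output and index codings by comment ID.

    Missing dimensions fall back to "unclear", matching the label the
    coder itself uses for ambiguous cases.
    """
    out = {}
    for row in json.loads(raw_json):
        out[row["id"]] = {d: row.get(d, "unclear") for d in DIMENSIONS}
    return out


codings = index_codings(raw)
print(codings["ytc_Ugy8YZGscjqVy6R84r14AaABAg"]["policy"])  # -> ban
```

Indexing by ID once, rather than scanning the array per lookup, is what makes the inspector's per-comment view cheap even for large batches.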