Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):

- When everything is automated, what will be the point of humans and who will buy … (ytc_UgwNOQW8Y…)
- Empathy. 100% the million dollar question it boils down to if Like some humans t… (ytc_UgzTNistF…)
- On your point about cost, in the programming world we have seen the numbers as f… (ytc_Ugz41xi31…)
- I’m a teacher and I agree with you in principle. However, since I find a lot of … (rdc_j43edkm)
- Tesla is still way ahead, it's not even close. Latest FSD is reported to be "se… (ytc_Ugyv36yBh…)
- When someone commented "cheap low effort dima a dozen pulp fiction" I was like "… (ytc_Ugwni24cJ…)
- @THEKINGOFNISSANS Targeting systems on a fighter jet aren't language models maki… (ytr_Ugw7f4Ork…)
- The problem with AI is lack of coherence. At some point the pieces don't fit tog… (ytc_UgwUV1mjO…)
Comment

> People who don’t understand our easily influenced the ai is to cater to our words is stupid 😂 keep on creating your own conspiracies like a psycho

youtube · AI Moral Status · 2025-09-16T04:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
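The four coded dimensions in the table take values from closed category sets. A minimal sketch of a validator for one coded record, using only the category values observed in the sample batch below (the full codebook may include additional values; the `validate` helper is illustrative, not part of the pipeline):

```python
# Category sets observed in this sample batch. These are inferred from the
# records shown here, not an authoritative codebook.
DIMENSIONS = {
    "responsibility": {"user", "developer", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"none", "ban", "regulate", "liability", "unclear"},
    "emotion": {"outrage", "fear", "mixed", "indifference", "approval"},
}

def validate(record: dict) -> bool:
    """Check that every coded dimension takes an allowed value."""
    return all(record.get(dim) in allowed for dim, allowed in DIMENSIONS.items())

# The coding result shown in the table above:
coded = {"responsibility": "user", "reasoning": "deontological",
         "policy": "none", "emotion": "outrage"}
print(validate(coded))  # → True
```

A record missing a dimension, or using a value outside the observed sets, fails the check, which is useful for catching malformed model output before it enters the dataset.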
Raw LLM Response
```json
[
  {"id":"ytc_Ugzb54CIhlaYZ9bokSB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugx6C1Zuk2flJrjjubZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzyqrKpUw6-E5apyVR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugzmm1jPBt21t4Tc3MJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzlnGdI_JAAOCRM83t4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyPuNHIrK-oJxDOhip4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugx_MgRZaCyhK8EjPrt4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx59XvAf2js13xKt0J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxhSUxcFo0Aj1urVR14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugx4-aAXUNWPMOC-JTZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]
```
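Since the raw response is a JSON array keyed by comment ID, looking up the coding for any single comment is a parse-and-index operation. A minimal sketch (the variable names and the single-record payload are illustrative; the record shape matches the batch above):

```python
import json

# Raw LLM response: a JSON array of coded records, one per comment,
# in the same shape as the batch shown above (one record kept for brevity).
raw_response = """
[
  {"id": "ytc_UgzlnGdI_JAAOCRM83t4AaABAg",
   "responsibility": "user", "reasoning": "deontological",
   "policy": "none", "emotion": "outrage"}
]
"""

records = json.loads(raw_response)

# Index by comment ID so a coded comment can be looked up directly,
# which is what the "Look up by comment ID" view does.
by_id = {rec["id"]: rec for rec in records}

rec = by_id["ytc_UgzlnGdI_JAAOCRM83t4AaABAg"]
print(rec["emotion"])  # → outrage
```

Indexing once and looking up by ID keeps each inspection O(1), rather than rescanning the array per query.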