Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I do not agree with some of your conclusions, but i have great respect for your efforts, and I value deeply your contribution to this very necessary discussion. Nobody knows where this swarm is going. If one listens to Altman then it seems that he understands, that AI may not/ must not remain a mirror of our humanity. If AI doesn't kill us, then we will somehow manage to eradicate ourselves. To be really, really honest, i feel we need to fear ourselves more.
youtube 2025-12-04T17:4…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgzCX0-xmR8UNch4v214AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxZGa02IcH-J2PvWEV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyizSSelWkX-3DBUSp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzBk5ZDYxK6--WUYqJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgygQOYm7T8WsPMCLRd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwPvEzL0TAT55aGqxt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzyUu5A2zQpXs549OB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzMwBazFTJ6Vp0Er6F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwpXv43-tJNqn7a5oB4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_Ugz-iSiR2jH-v0zUv1l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
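A raw response like the one above can be validated against the coding schema before it populates the results table. The sketch below is a minimal Python example; the dimension names come from the JSON shown, but the allowed value sets are assumed from the labels visible in these records (the real codebook may define more), and `raw` is a hypothetical one-record input. Any value outside a known set falls back to "unclear", which is consistent with how an unparseable response surfaces in the table above.

```python
import json

# Allowed values per coding dimension (assumed from the records above;
# the actual codebook may include additional labels).
SCHEMA = {
    "responsibility": {"none", "developer", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "unclear"},
    "emotion": {"resignation", "fear", "outrage", "indifference", "approval", "unclear"},
}

def normalize(record):
    """Coerce one coded record to the schema; unknown values become 'unclear'."""
    out = {"id": record.get("id", "")}
    for dim, allowed in SCHEMA.items():
        value = record.get(dim, "unclear")
        out[dim] = value if value in allowed else "unclear"
    return out

# Hypothetical single-record response for illustration.
raw = '[{"id":"ytc_x","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"}]'
records = [normalize(r) for r in json.loads(raw)]
print(records[0]["emotion"])  # outrage
```

Normalizing defensively like this keeps a single malformed or off-schema record from failing the whole batch: the record is retained with "unclear" placeholders rather than dropped.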