Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If it can be done, it will be done. If the dangerous aspects are dependent on human choice, a damaging choice will eventually be made. That's not to say we shouldn't pursue AI, but we need to be clear eyed about what it WILL become. Not MAY, it WILL become a dangerous force.
youtube 2025-03-17T17:3…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgylDZyOvJ5kOEc7LV14AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear",   "emotion": "outrage"},
  {"id": "ytc_UgwdluUMlHOB1KpPNcR4AaABAg", "responsibility": "developer", "reasoning": "deontological",    "policy": "none",      "emotion": "fear"},
  {"id": "ytc_Ugz--iKhqp8d4nW_EwF4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "approval"},
  {"id": "ytc_Ugx3Ftmqk3prZhLyaRB4AaABAg", "responsibility": "user",      "reasoning": "virtue",           "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_UgyJfVL_EYgPXaLajxd4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwKjKfBn3v-i18cJ954AaABAg", "responsibility": "user",      "reasoning": "deontological",    "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_UgwpZcd77O-IR65QWw94AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UgxlGhBEywGw8YZS36B4AaABAg", "responsibility": "developer", "reasoning": "deontological",    "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_UgxyMw-At-d09lIX6ld4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological",    "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgxsPXIFAC1x_byOiIF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban",       "emotion": "fear"}
]
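The model returns one JSON object per comment, keyed by comment `id`, so recovering the coding for a single comment is a parse-and-lookup. A minimal sketch, assuming only that the response is valid JSON in the shape shown above (the `coding_for` helper and the truncated two-record sample are illustrative, not part of any real pipeline):

```python
import json

# Truncated sample of the raw model output shown above (JSON array of
# per-comment codings; real responses carry ten or more records).
raw_response = """
[
  {"id": "ytc_UgyJfVL_EYgPXaLajxd4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgylDZyOvJ5kOEc7LV14AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"}
]
"""

def coding_for(comment_id: str, response_text: str) -> dict:
    """Parse the raw response and return the coded dimensions for one id."""
    records = json.loads(response_text)
    by_id = {r["id"]: r for r in records}   # index records by comment id
    return by_id[comment_id]

result = coding_for("ytc_UgyJfVL_EYgPXaLajxd4AaABAg", raw_response)
print(result["policy"])   # prints: regulate
print(result["emotion"])  # prints: fear
```

Indexing by `id` also makes it easy to spot comments the model silently dropped: compare the set of returned ids against the set of ids sent in the batch.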