Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I use to play around with this idea on pallafiumbooks heroes and I started out as a good guy. If this AI was not aligned to a moral compass then one of two things will happen, it will try to destroy the human race or coexist in some way or get off the planet.
youtube AI Responsibility 2026-02-13T23:4…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_Ugx2MWVJgiLmu3TbsIF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwWMSlcdWpK8H8ndp54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugy6P3FMsCkkkNQXBuF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugz7e5eK_nUj4xVDrBJ4AaABAg","responsibility":"government","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugy0xXqLTVuv0R-fxiR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgyeICJabngzF8RCF214AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugzc6Hv9a4yY6pzafJd4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwgxTuxr5GshzUOltp4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgyZqbe2lHcEq8QElIZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugy0425joqmE211nPSZ4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"} ]