Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I oftentimes think about it. What if we train an AI to be addicted to a virtual drug? A condition that can't need this one condition until its idea doesn't align with ours anymore. At this point, WE only have to secure that WE have a system that can't prioritize goals because it would just snitch its digital cocaine.
youtube AI Moral Status 2023-08-21T12:5… ♥ 1
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_Ugyw50kPMI4YgscOJ_l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_Ugz-nl07SxmJIyZI35t4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugyh0PFTej62WD8lyw94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgztYkyW4Uh0z_kLyNN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwXT6e9HG6TfcPdEp54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgxOGLxhwuP0Ig9o8nl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_Ugz5aN6AmK1JMv55cat4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"unclear"}, {"id":"ytc_UgyLSSHsXlgN8nsniAN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugw9gDx75wKssNpdpw94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgyhU3zbx1Vch_hq8rd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}]
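Inspecting a coded comment amounts to parsing this JSON array and picking out the four dimensions for one comment id. A minimal sketch of that lookup, assuming the raw response is a JSON list of objects as shown above (the `coding_for` helper and the two-record sample are illustrative, not part of the pipeline):

```python
import json

# Abbreviated sample of a raw LLM response (two records only, for illustration);
# the real response lists one object per coded comment.
raw = '''[
  {"id": "ytc_Ugyw50kPMI4YgscOJ_l4AaABAg",
   "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugz5aN6AmK1JMv55cat4AaABAg",
   "responsibility": "unclear", "reasoning": "mixed",
   "policy": "unclear", "emotion": "unclear"}
]'''

# The four coded dimensions used throughout the coding result.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def coding_for(raw_json: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment id, or raise KeyError."""
    for record in json.loads(raw_json):
        if record["id"] == comment_id:
            return {dim: record[dim] for dim in DIMENSIONS}
    raise KeyError(comment_id)

# The id matching the "unclear"-on-all-dimensions coding result above.
print(coding_for(raw, "ytc_Ugz5aN6AmK1JMv55cat4AaABAg"))
```

If a record is missing from the response (e.g. the model dropped a comment), the `KeyError` makes that visible instead of silently coding it as unclear.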