Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
While the dangers are too scary to even fathom, the question is why would AI want to harm anyone? The actual need to harm, is a perversion of the human psyche, but AI is simply a very smart machine without emotions, good or bad. Unless AI internalizes human emotions and becomes sentient...That would be scary.
YouTube · AI Governance · 2023-04-18T04:4…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       deontological
Policy          none
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugw3Z2GTY8B691HtmJF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx5J-_zbSK2zd5cmbl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"mixed"},
  {"id":"ytc_Ugz-i7AM3CCHuj-TKOB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxyQ5fqC0mpIYBNFop4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyfk1UkQYCWocdoP2J4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwllDnyUr9bxso3TBF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxYa1xLu4qMTcA7jMZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwAQ85aDSO-fOFPlPt4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxkK-eHNXIooKXDBOp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxlUW8RnKee_j23BHp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"}
]