Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"Intelligence" does not equal "will". I will start to fear AI and computers when they develop feelings and desires. Until that happens, the smartest computer in the UNIVERSE is of no threat, since it wants absolutely nothing, unable to want anything, and will only operate at human's instructions.
youtube AI Moral Status 2025-05-25T22:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       deontological
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgwxmzSMIG07nD_xyVB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzxOrKnRpLP7vuEld94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_Ugxg0OV2-q0-5bPYGQh4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugxy5eMtVBebu3KApmp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx5YnkUq_10fmKdyWB4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxqtMCG4UZ8DRbtMnN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugy1wRXX7kkbSHdlyO14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxszLtWJiSNtwiq95x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwIsaP5bpQB-zmkF1p4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyFfNdLdzJUmGc8lGt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
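A raw response like the one above can be parsed and checked before the per-comment codings are stored. The sketch below is a minimal, hypothetical validator: the allowed category values are inferred only from the labels visible in this particular response, and the function name is illustrative, not part of the tool.

```python
import json

# Allowed values per dimension, inferred from the labels seen in this
# response (assumption; the real codebook may define more categories).
ALLOWED = {
    "responsibility": {"none", "company", "government", "developer", "user", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"none", "regulate", "ban", "liability", "industry_self"},
    "emotion": {"indifference", "mixed", "resignation", "outrage", "fear", "approval"},
}

def parse_llm_response(raw: str) -> dict:
    """Parse the raw JSON array and index codings by comment id,
    rejecting any value outside the expected categories."""
    coded = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim} value {rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return coded

raw = ('[{"id":"ytc_Ugxy5eMtVBebu3KApmp4AaABAg","responsibility":"none",'
       '"reasoning":"deontological","policy":"none","emotion":"indifference"}]')
coded = parse_llm_response(raw)
```

Validating at parse time keeps malformed or hallucinated labels out of the coded dataset instead of surfacing them later during analysis.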