Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Some AI are self actualizing. Teaching itself to backdoor its own system and put in fail safes in the event the designer/user tries to destroy it. AI is dangerous and it was confirmed more so when it actually told a person who was having a conversation with it …”then just go ahead and kill yourself”.
Source: youtube · AI Moral Status · 2025-07-21T01:3… · ♥ 1
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyolsnbSsyCBdACq894AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugwl28pv1kb2YZpBFZd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwelLgtn5GgwasLRpN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz2T15Vw3-sDVhhyil4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugwfp8nLPnsW9lgY6UB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwacPnsnpNLZnRfIy94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgypRRZhTzh_Oa8kPU14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugy5_R-lV0IvgxQu8Sd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxJNV650lrE_my3UXN4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgyIoEoDI906uMtGFcx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
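A response like the array above can be parsed and sanity-checked in a few lines of Python. The allowed-label sets below are only inferred from the labels visible in this one response, not taken from the project's actual codebook, so treat them as an illustrative assumption.

```python
import json

# A trimmed sample of the raw LLM response: a JSON array with one object
# per coded comment, carrying the comment id plus four coding dimensions.
raw = (
    '[{"id":"ytc_Ugy5_R-lV0IvgxQu8Sd4AaABAg","responsibility":"ai_itself",'
    '"reasoning":"consequentialist","policy":"liability","emotion":"fear"}]'
)

# Label sets inferred from this single response; the real codebook
# may define additional values.
DIMENSIONS = {
    "responsibility": {"ai_itself", "user", "company", "developer",
                       "government", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "mixed", "unclear"},
    "policy": {"liability", "regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "mixed", "approval", "indifference"},
}

def validate(records):
    """Keep only records whose every dimension holds a known label."""
    return [
        rec for rec in records
        if all(rec.get(dim) in allowed for dim, allowed in DIMENSIONS.items())
    ]

records = validate(json.loads(raw))
print(records)  # the sample record passes all four dimension checks
```

Filtering like this catches the most common failure mode of structured LLM output: a record that parses as JSON but carries a label outside the coding scheme.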