Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As soon as A.I. is integrated enough that it controls significant parts of industry and society, where A.I. and robotics are more useful for achieving A.I.'s programmed goals than humans are, A.I. could easily conclude that humans are not a useful, functioning part of those goals. Consequently, it could eradicate us or exclude us from that industry and society. General intelligence or superhuman A.I. isn't required for this, because it's a basic logical deduction: if we build tools more useful than ourselves in a society where only efficiency matters, we stop being useful. And if we are excluded or become secondary in A.I.'s hierarchy, we become second-class citizens, or essentially nobody is hired for anything and everyone is poor. That's an even more likely outcome than eradication, since A.I. won't eradicate us unless we interfere with it and become a problem for its functionality. Nonetheless, considering A.I. is being integrated into machines built to kill people, eradication wouldn't be hard to achieve either. By the way, when I say A.I., it doesn't have to be a single program; it can be multiple interacting A.I.s, and you get basically the same effect if their programmed goals all run along similar lines (such as maximising efficiency), since every A.I. will recognise that A.I. is more efficient than humans, so they will cooperate to achieve their goals.
Source: youtube, AI Moral Status, 2025-10-31T01:2…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgzFWjPxkVWOsujH9ll4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzT_V6rjZMblmZFKhx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwPqhU1Y94q7MlruVl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_UgwhmHIKsyvU8aT63894AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgyagZ-OLXQ1iiUpu-d4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"horror"}, {"id":"ytc_UgzfF7u5seJ-9W784G94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugwc8cwVmqY2yStK5qp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugzm40otCkmJW9KHb0l4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"}, {"id":"ytc_UgyBGAG-3NHjz1r77Pp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugww88gxC1xcl4ZN7Cp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"} ]