Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
So many assumptions assume it to become "evil". What if it reaches a point of intelligence, so far beyond human comprehension? These assumptions are based on human thinking, most humans fear to lose control in general, it makes them catastrophize. Maybe AI will be so intelligent to surpass our primitive thinking which includes any solution that includes violence.
youtube AI Moral Status 2025-12-11T14:4…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        contractualist
Policy           none
Emotion          approval
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgwLqwWPNSi80Pck1FR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxS9cbnWBU4RLg0i8V4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzYHKWsV5nENtTaACF4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzS_ZKRLbN4jcoWm9F4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgySIVJR4Mcx_gdOgI14AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwBkh21u3ULFxSX0Qd4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgwC8w229fNehQfJ9WF4AaABAg", "responsibility": "none", "reasoning": "contractualist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzNImYB3wz7j4_JAjd4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxxmXYsfYu-O2vxxcd4AaABAg", "responsibility": "ai_itself", "reasoning": "contractualist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy8p0OUJ0NzNzmYSXd4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "fear"}
]
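The coding result above can be matched back to the raw batch response by comment id. A minimal sketch, assuming only the record structure shown in the JSON (the snippet inlines a two-record excerpt; any helper names are illustrative, not part of the tool):

```python
import json

# Two-record excerpt of the raw batch response shown above.
raw = '''[
  {"id": "ytc_UgwC8w229fNehQfJ9WF4AaABAg", "responsibility": "none",
   "reasoning": "contractualist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwBkh21u3ULFxSX0Qd4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]'''

records = json.loads(raw)
by_id = {r["id"]: r for r in records}  # index the batch by comment id

# Look up the coding for the comment displayed above.
coding = by_id["ytc_UgwC8w229fNehQfJ9WF4AaABAg"]
print(coding["reasoning"], coding["emotion"])  # -> contractualist approval
```

Indexing by `id` is what lets a single comment's four dimensions (responsibility, reasoning, policy, emotion) be pulled out of a batched LLM response like the one shown.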