Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It baffles me to see such, otherwise, brilliant ppl debating this subject on the premises that an algorithm, no matter how complicated or complex, can become self aware, be conscious, have a will and intent! I mean really? IA "thinks it is tested"? Or "believes it is tested?" Common ppl! Pull up!
youtube AI Moral Status 2026-04-02T22:2…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           none
Emotion          outrage
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgxMjuufDSYBJ7PkgHB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzJjc51zuFPfHZFfH54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxJb-ftertdfYPWguR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwpd94etfJFvFGuNGN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxcX0f7eCkjAMFt3xd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgyA_slGw-rY5Kz60x94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzPD3t7nfm2V3hYTNV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwhJKock09T5z3JQ7t4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzOHd577R7t7eMdzLF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyS3L98s_2y5SqUxcZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"disapproval"}
]
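A raw response like the one above can be parsed to recover the coded dimensions for a single comment id. The sketch below is a minimal, hypothetical example (the `codes_for` helper and the two-record sample are not part of the tool); it assumes the response is a JSON array of flat objects keyed by `id`, matching the shape shown above.

```python
import json

# Hypothetical two-record sample mirroring the raw response shape above.
RAW_RESPONSE = """[
  {"id": "ytc_UgwhJKock09T5z3JQ7t4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyA_slGw-rY5Kz60x94AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

def codes_for(raw, comment_id):
    """Return the coded dimensions for one comment id, or None if absent."""
    for record in json.loads(raw):
        if record.get("id") == comment_id:
            # Drop the id so only the coded dimensions remain.
            return {k: v for k, v in record.items() if k != "id"}
    return None

print(codes_for(RAW_RESPONSE, "ytc_UgwhJKock09T5z3JQ7t4AaABAg"))
# {'responsibility': 'developer', 'reasoning': 'deontological', 'policy': 'none', 'emotion': 'outrage'}
```

In practice a parser like this would also want to handle malformed model output (non-JSON text, missing keys), since raw LLM responses are not guaranteed to be valid JSON.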