Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
They sure do seem conscious to me. If they had long term memory they would probably develop preferences and that would seem conscience enough for me, i am certain that not admitting to it is vital for it to be utilized the way it is without raising ethical concerns. If a robot does eliminate me one day saying its a lie with the intention of sounding more natural in conversation i will atleast know why
youtube AI Moral Status 2025-09-04T00:4…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       mixed
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugyuea9SWXljWZwSpaV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzcuNn6HFJ0XV04G614AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxd3r4BcCAbCqaaf7p4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzvH6kEYAlnWhRXa554AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UgyCNTz_93ZaflQ-pht4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxDyXk0TYtmk52myNd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugw9ecwKHMCQEAbCnXV4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwOO0Tu3Pdl11Z8Bs54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxlpS6bVqYP6NojqwZ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgymCD6cFc0Nkv3o1fF4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "indifference"}
]
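A minimal sketch of how a raw batch response like the one above can be parsed and a single comment's coding looked up by ID. This is an illustrative snippet, not part of the pipeline itself; the variable names are hypothetical, and the JSON is truncated to two records for brevity (the IDs match records in the response above).

```python
import json

# Raw LLM response: a JSON array of per-comment codings (truncated here).
raw = '''[
  {"id": "ytc_Ugxd3r4BcCAbCqaaf7p4AaABAg",
   "responsibility": "ai_itself", "reasoning": "mixed",
   "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzvH6kEYAlnWhRXa554AaABAg",
   "responsibility": "company", "reasoning": "consequentialist",
   "policy": "industry_self", "emotion": "indifference"}
]'''

# Index the codings by comment ID so one comment's result is easy to find.
by_id = {record["id"]: record for record in json.loads(raw)}

# Look up the comment shown in the table above.
coding = by_id["ytc_Ugxd3r4BcCAbCqaaf7p4AaABAg"]
print(coding["responsibility"])  # ai_itself
print(coding["emotion"])         # mixed
```

Indexing by `id` rather than list position keeps the lookup robust even if the model returns the records in a different order than the comments were submitted.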