Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If I understand this progression, it eventually sounds like AI will become human. It’ll have all of our strengths and weaknesses, and it will just be able to run faster. I think that AI will force us to think about putting limitations on our own activities and hopefully come to the conclusion that “we want to live.” So if we want to live, we need to start doing things that push us towards that end: be nice to each other, help each other, feed each other, cure our physical diseases, and then, past all the practical stuff, explore and expand and help others, perhaps on a different planet. Perhaps all these things that we worry about are our worst traits as humans. We fear the amplification of that. Let’s push that fast-forward button, which is AI, and steer it towards our better parts. Don’t let our fear of our, for lack of a better phrase, evil side overshadow what we as a human race are capable of on a positive side.
youtube AI Moral Status 2026-03-04T13:0… ♥ 1
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        contractualist
Policy           regulate
Emotion          approval
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgyXT3xyxO58fmJJm3t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzFXm9xjI61-yzgkpR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugx13Y9mHcoom1AOEGt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyvsjnSO8pt0noQnz14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugyy1Qyk0nOnRml1KP14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxbKZkeCsEOUnKjCe94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw5Qz7sS_8BtoMvevV4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwdN02-9aUpMceUfE54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwgUl3RBmW333u78I94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugxt5cVb1OzEKKLH4PF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"}
]
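To inspect the model output for a single coded comment, the raw response can be parsed and looked up by comment id. The sketch below is a minimal illustration, assuming only the structure visible above: a JSON array of objects, each with an "id" plus the four coding dimensions. The helper name `coding_for` is hypothetical, not part of any tool shown here.

```python
import json

# Abbreviated raw LLM response in the format shown above (one entry kept).
raw = '''[
  {"id": "ytc_Ugw5Qz7sS_8BtoMvevV4AaABAg",
   "responsibility": "distributed", "reasoning": "contractualist",
   "policy": "regulate", "emotion": "approval"}
]'''

# The four coding dimensions, as displayed in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def coding_for(raw_response: str, comment_id: str) -> dict:
    """Return the dimension -> value coding for one comment id,
    checking that every expected dimension is present."""
    for entry in json.loads(raw_response):
        if entry.get("id") == comment_id:
            missing = [d for d in DIMENSIONS if d not in entry]
            if missing:
                raise ValueError(f"missing dimensions: {missing}")
            return {d: entry[d] for d in DIMENSIONS}
    raise KeyError(comment_id)

print(coding_for(raw, "ytc_Ugw5Qz7sS_8BtoMvevV4AaABAg"))
```

For the comment displayed on this page, the matching entry is the one whose values agree with the coding table (distributed / contractualist / regulate / approval).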