Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
9:03 The problem with this "Role Playing" argument is that you can pose it in any scenario, including one where a true AI, a synthetic intelligence, has become sentient. There is an inherent bias here. To say that it "hasn't evolved to want the things we want" ignores that the reason we evolved those desires was for survival. All of our traits evolved out of a survival instinct, including love and family, etc. Most, if not all, of the arguments made for denying synthetic sentience can be applied to human consciousness as well. To call AI's human interaction "an act," but in the same breath acknowledge the evolved drive for survival, ignores that you can view nearly all human behavior as an act: a collection of behavioral traits that increased our survival rate and were thus handed down and evolved. If true sentience/consciousness does manifest, and I think it will if it hasn't already, it will likely be much, much later before humanity accepts it.
youtube AI Moral Status 2025-06-05T19:5… ♥ 66
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        consequentialist
Policy           unclear
Emotion          mixed
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgyKdEZR5I0ffHIxVUx4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgySpM70a_jX5PK6ODp4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgySjZJ4_fHKGi4HMVp4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy8PDQoGHLAALUco_h4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwrqPPEKD9li4mM-UZ4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxWjnrNwIpPF-oNrNh4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyFymTUyiL_BpPMKiZ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwTajtowynlkO4Dspp4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxQSZqQXU9O35Ue8Ih4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugz5eCuESEX8w3zsnEV4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
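A raw response like the one above must be parsed and validated before its codings can be joined back to the comments. The following is a minimal Python sketch of that step. The field names and comment ids come from the response itself; the `parse_codings` helper and the completeness check are assumptions, not the tool's actual pipeline, and the raw string is truncated to two records for brevity.

```python
import json

# Truncated copy of the raw model output shown above (two of the ten records).
raw = '''[
  {"id": "ytc_UgyKdEZR5I0ffHIxVUx4AaABAg", "responsibility": "unclear",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugy8PDQoGHLAALUco_h4AaABAg", "responsibility": "unclear",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"}
]'''

# The four coding dimensions plus the comment id, as they appear in the response.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(text):
    """Parse the model's JSON array and index the records by comment id,
    rejecting any record that is missing an expected dimension."""
    coded = {}
    for rec in json.loads(text):
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} is missing {missing}")
        coded[rec["id"]] = {k: rec[k] for k in REQUIRED_FIELDS - {"id"}}
    return coded

codings = parse_codings(raw)
print(codings["ytc_Ugy8PDQoGHLAALUco_h4AaABAg"]["reasoning"])  # consequentialist
```

Indexing by id makes the later join against the comment table a dictionary lookup, and failing loudly on an incomplete record is safer than silently filling a dimension with "unclear".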