Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's a pretty clear answer. If artificial intelligence got to the point where the entity became sentient, then it deserves rights. Imagine if you were unable to feel pain or fear, that still does not mean that I should be allowed to harm or kill you. Also, I think it's quite obvious when we say sentient, it's fair to assume the AI would possess some kind of survival instinct.
YouTube · AI Moral Status · 2017-02-23T14:2…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       deontological
Policy          regulate
Emotion         approval
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgiglmrSOaC-V3gCoAEC", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear",  "emotion": "indifference"},
  {"id": "ytc_UggcVwGpN4yVdngCoAEC", "responsibility": "none",      "reasoning": "deontological",    "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_UghMjRhW38shAXgCoAEC", "responsibility": "ai_itself", "reasoning": "deontological",    "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgjoVoZqTEOe13gCoAEC", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear",  "emotion": "fear"},
  {"id": "ytc_UgiNylxDqi2bZ3gCoAEC", "responsibility": "none",      "reasoning": "deontological",    "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_UgiI_3TVpi3Nz3gCoAEC", "responsibility": "none",      "reasoning": "deontological",    "policy": "liability","emotion": "approval"},
  {"id": "ytc_Ugjfwf1Bv-_gf3gCoAEC", "responsibility": "none",      "reasoning": "deontological",    "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgjgEocxIvWv6XgCoAEC", "responsibility": "none",      "reasoning": "unclear",          "policy": "ban",      "emotion": "fear"},
  {"id": "ytc_UgjgDdTu4uD0RngCoAEC", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear",  "emotion": "fear"},
  {"id": "ytc_UggLE8qEuXr8zngCoAEC", "responsibility": "none",      "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
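The raw response is a JSON array with one object per comment, keyed by comment id. A minimal sketch of how such a batch could be parsed and a single comment's coding looked up (the field names come from the response above; the variable names and the use of Python are assumptions, not the pipeline's actual code):

```python
import json

# Illustrative excerpt of a raw batch response; in practice this string
# would be the full model output shown above.
raw = '''[
  {"id": "ytc_Ugjfwf1Bv-_gf3gCoAEC", "responsibility": "none",
   "reasoning": "deontological", "policy": "regulate", "emotion": "approval"}
]'''

# Parse the array and index the records by comment id for O(1) lookup.
records = json.loads(raw)
by_id = {rec["id"]: rec for rec in records}

# Retrieve the coding for the comment displayed in the table above.
coding = by_id["ytc_Ugjfwf1Bv-_gf3gCoAEC"]
print(coding["reasoning"], coding["policy"])  # deontological regulate
```

Indexing by id makes it straightforward to join each coding back to its source comment, which is how the per-comment table above can be reconstructed from the batch output.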