Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think the robot misinterpreted the question as smart as it is.. It was a simple yes or no. I think all it heard was "destroy humans" as an action, being told to do something so it agreed because it wants to help our needs. I don't think it understood the context of what it was saying either.
Source: YouTube · AI Moral Status · 2017-06-30T12:1…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           unclear
Emotion          mixed
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_Ugi8B_pKe8H9AHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgikKCXfuIQvMXgCoAEC","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UggGZ7hOgikMGngCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ughpp50DXccjD3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgiwA20xyZZWoHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugg3wmzV5Znzf3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UghZatw-0zg7XHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgjOLxU898KOyngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UghNftjb7YaRDXgCoAEC","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugho3WRmnTYBHHgCoAEC","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"}
]
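A raw batch response like the one above can be checked before the codes are stored. The minimal sketch below parses the JSON array and keeps only records whose four dimensions carry in-vocabulary labels. Note the allowed value sets are inferred from the labels visible in this dump; the project's actual codebook may define additional categories, and the function name `validate_codes` is illustrative, not part of any tool shown here.

```python
import json

# Allowed labels per coding dimension, inferred from the values observed in
# this dump (assumption: the real codebook may include more categories).
ALLOWED = {
    "responsibility": {"none", "developer", "company", "ai_itself"},
    "reasoning": {"unclear", "deontological", "consequentialist"},
    "policy": {"unclear", "ban", "liability"},
    "emotion": {"indifference", "fear", "mixed", "approval"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and drop records with missing
    or out-of-vocabulary labels on any dimension."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# Example: one valid record, one with a label outside the vocabulary.
raw = (
    '[{"id":"a","responsibility":"developer","reasoning":"consequentialist",'
    '"policy":"unclear","emotion":"mixed"},'
    '{"id":"b","responsibility":"government","reasoning":"unclear",'
    '"policy":"unclear","emotion":"fear"}]'
)
print([rec["id"] for rec in validate_codes(raw)])  # ['a']
```

Records that fail validation can then be flagged for re-prompting rather than silently coded as `unclear`.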