Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Not sure if being led into a question "Sophia do you want to destroy humans?" And then giving a response regurgitating back the phrase in the answer "ok I will destroy humans" really counts as a robot having intentions to destroy humans. JUST SAYING.
YouTube · AI Moral Status · 2017-06-24T19:2…
Coding Result
Responsibility: unclear
Reasoning: unclear
Policy: unclear
Emotion: unclear
Coded at: 2026-04-27T06:24:53.388235
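
The four dimensions plus the coding timestamp map naturally onto a small record type. Below is a minimal sketch in Python; the class name, field names, and example values are illustrative assumptions, not the project's actual schema:

from dataclasses import dataclass

@dataclass
class CodingResult:
    """One coded comment; any dimension may fall back to 'unclear'."""
    responsibility: str  # e.g. "developer", "company", "none", "unclear"
    reasoning: str       # e.g. "consequentialist", "deontological", "unclear"
    policy: str          # e.g. "regulate", "ban", "unclear"
    emotion: str         # e.g. "fear", "outrage", "mixed", "indifference", "unclear"
    coded_at: str        # ISO 8601 timestamp of the coding run

# The result shown above, expressed as a record:
example = CodingResult(
    responsibility="unclear",
    reasoning="unclear",
    policy="unclear",
    emotion="unclear",
    coded_at="2026-04-27T06:24:53.388235",
)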
Raw LLM Response
[{"id":"ytc_UggvMsInkI9gvngCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"}, {"id":"ytc_Uggqn4aJoaj0bHgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UghCn0ip8F-OWXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ughc7YUqljRa23gCoAEC","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UggA4BOW29FqwXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgiWIsslZ8zfUHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgjwuxXEo0atQ3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"indifference"}, {"id":"ytc_Ugi_e2lDO5DHG3gCoAEC","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgiykOIPJEQNeHgCoAEC","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_UghsOrI7W9Yea3gCoAEC","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"})