Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The mistake AI programmers make is that they program their robots to "know" things - when in actual fact humans don't know ANYTHING - we have a working idea of things that we try to describe - but we're absolutely cognizant that our words and descriptions aren't accurate. They aren't knowledge, they're representations of something beyond words that lives in our heads - that being our 'understanding'. That's what needs to be programmed - a division between understanding, knowledge and explanatory ability.
youtube AI Moral Status 2016-03-23T15:0…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        mixed
Policy           none
Emotion          mixed
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UggJDAmIWGCQSHgCoAEC","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgghxHIY7v12M3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UghWNz4gXW2ncHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgizxtiV1hvHfngCoAEC","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugj9C6WUDQW-b3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgiPuIVyLxH69ngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UghYx-WADXZzW3gCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UggQGMjlEBKRRngCoAEC","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UghoxwAT4nQnlHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugj6_7rBun4iHngCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
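The raw response above is a JSON array of per-comment codes, one object per comment id, with four coded dimensions. A minimal sketch of how such a response could be parsed and sanity-checked follows; the allowed value sets are inferred only from the codes visible in this record (the real codebook may define more), and the `validate` helper is hypothetical, not part of the coding tool.

```python
import json

# Shortened sample of a raw LLM response in the format shown above.
raw = '''[
  {"id": "ytc_UggJDAmIWGCQSHgCoAEC", "responsibility": "developer",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgghxHIY7v12M3gCoAEC", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]'''

# Allowed values per dimension, inferred from this record alone (assumption:
# the actual codebook may include additional categories).
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference",
                "resignation", "mixed"},
}

def validate(records):
    """Return (id, dimension, value) triples whose value is not in the allowed set."""
    bad = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                bad.append((rec.get("id"), dim, rec.get(dim)))
    return bad

records = json.loads(raw)
print(validate(records))  # an empty list means every code is recognized
```

A check like this catches the common failure mode where the model invents a label outside the codebook, which would otherwise silently pollute the coded dataset.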