Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The solution to this is laughably easy. We just need to make sure our machines never get more intelligent than needed to comprehend and carry out their orders. As long as the robot only asks "How do I do this?", never "Why am I doing this?", we have nothing to worry about.
Source: YouTube · AI Moral Status · 2017-02-25T05:1…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_Ugh_V3vu2DuvengCoAEC","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgietbweVEt0NHgCoAEC","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"}, {"id":"ytc_UghyeUksCRmVYHgCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugg1cdAYAiAmpHgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"}, {"id":"ytc_UghvGb_0icgToXgCoAEC","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgjgBssLGskAt3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugj2OLPFihnkBXgCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"resignation"}, {"id":"ytc_UgjHqU5fojdo-3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UggDe-aW7XmtPXgCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UggCn9WTTgjXRngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"} ]