Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
So, it is our primitive tribal instinct that will spell our doom. That thing which is driving us to increase the power of AI faster than is safe is our primitive world view. The problem is that we have a fatal mismatch between the sophistication of our world views and the sophistication of our technology. I’m old enough to appreciate that the way people think changes quite dramatically over time, while moving slowly enough to remain unseen, like watching the hour hand of a clock. Not just that, but the changes are not always for the better. I believe that while technology has moved forward by leaps and bounds since WW2, our collective world view has moved in the opposite direction. That leaves us incredibly dangerous to ourselves and our species. In light of the geometric progression in AI capabilities, it is perhaps a good time to investigate Jeremy Griffith’s work on the human condition. It certainly can’t hurt and we desperately require a breakthrough in our collective world view. Because once AI gets away from us, there will be no stopping it from increasing its own capabilities at a rate we probably cannot comprehend. How long before our human intelligence compared to AI is like that of a single celled organism is to us?
youtube AI Governance 2023-07-10T07:1…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       virtue
Policy          regulate
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgyuZpqfQTvoph0YUQ94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugy8FAI4E-Iiesj6Xn14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}, {"id":"ytc_Ugylkeq3R_VikOmogA94AaABAg","responsibility":"user","reasoning":"virtue","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugw6j0oC3KFKK0NaF_F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgxStZTSCEcvuhvK5Ix4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxtVtnceKS5yL-bvCp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"fear"}, {"id":"ytc_UgxVcYGP3uGHrqMYbH94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgyJhD_xINANm1seBnd4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"industry_self","emotion":"approval"}, {"id":"ytc_UgyrRBCKAk5oh0evfhp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgxY0k2mGWgeW9orIp14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"} ]