Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
History has taught us that humans are flawed, reactive and not predictive, we perceive, we suffer, we seek we make mistakes, we solve problems, only then we learn, only then connections form in our mind. AI is predictive, it could measure, calculate what can go wrong in a very objective way and then act based on the goal it's aligned to do. That is scary, it's cold, calculative, highly logical, practical, devoid of emotions. AGI could change that.
youtube AI Governance 2026-03-04T04:2…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        mixed
Policy           unclear
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugykm1z19BMTkv2bThR4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwXNjALOmb78ZeqsoZ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgwAGYENBOZMjvRIx894AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugx0R9estGbLEjlSPwt4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwxBgT3wH6uVTT_q9p4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxV3MPIbgeIxir_yN94AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxNRgld261l3ucViTF4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxDYlj4Y8jUOACU1Hx4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugz1oSGh7vCCt-dJrYB4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgxlZ8Fp2GyMw8jMFT94AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"}
]
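The raw response is a JSON array of per-comment codings keyed by `id`, with the four dimensions (`responsibility`, `reasoning`, `policy`, `emotion`) as string fields. A minimal sketch of how such a batch could be parsed and looked up by comment id — the `index_codings` helper is hypothetical, not part of the tool, and the field names are taken from the response above:

```python
import json

# A trimmed example batch in the same shape as the raw LLM response above.
RAW = '''[
  {"id": "ytc_UgwAGYENBOZMjvRIx894AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxDYlj4Y8jUOACU1Hx4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]'''

def index_codings(raw: str) -> dict:
    """Parse a raw batch response and index each coding by its comment id."""
    return {item["id"]: item for item in json.loads(raw)}

codings = index_codings(RAW)
row = codings["ytc_UgwAGYENBOZMjvRIx894AaABAg"]
print(row["emotion"])  # fear
```

Indexing by `id` makes it straightforward to join a coding back to the original comment text, as the "Coding Result" view above does for a single comment.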