Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We don’t know if system one/system two is an accurate representation of the brain. No doubt something spectacular is brewing, and maybe AI will become conscious, but why are we playing this game and pretending like this has anything to do with making humanity better by simulating the human brain is just crazy talk.
youtube AI Governance 2024-01-10T22:4…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           none
Emotion          resignation
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwoRqgbwSeeYGpl73J4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyyQmH3GS48As77bxF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugy9uFOZi663SlrAk2t4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyhgNuP5VN7QEIRn2J4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzIIAM9WgxVSoIARTx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzD9lCvrMQSvCYY4Yd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz_Pz2eYyFYuag1ZI54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwOqk0Soqwm87QnyYN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyVLMKyIPb2j1qi2qt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxgYia6T7zMpBOJvC14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
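The raw response is a JSON array of per-comment coding records, keyed by comment id, with one value per coding dimension. A minimal sketch of how such a response could be parsed and a single comment's codes looked up (the two sample records are taken from the array above; the parsing code itself is an illustration, not the tool's actual pipeline):

```python
import json

# Abbreviated raw LLM response: two records copied from the full array above.
raw = '''[
  {"id":"ytc_UgwoRqgbwSeeYGpl73J4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy9uFOZi663SlrAk2t4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}
]'''

records = json.loads(raw)

# Index records by comment id for O(1) lookup of any comment's codes.
by_id = {r["id"]: r for r in records}

# Pull the coding result for the comment shown in the table above.
codes = by_id["ytc_Ugy9uFOZi663SlrAk2t4AaABAg"]
print(codes["emotion"])  # resignation
```

Indexing by `id` makes it straightforward to reconcile each record in the batch response with the comment it codes, and to detect missing or duplicate ids before accepting the batch.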