Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Warnings like this are helpful. We can legitimately argue about the timelines but super intelligence IS coming. One of my key concerns is that complacency, and futile opposition, will mean that the only people making decisions about AI will be those with the greatest motivations to ignore the risks.
youtube AI Governance 2025-09-05T02:4…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugz_hFoWZZpql4PGP6J4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy5CAQw3uHVnsf4FOJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugzf6bgqGVOawGn-ADt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyasTxkHP7C4bAx9Nt4AaABAg","responsibility":"government","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugztl6JwSPP95HskwDZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyEFEweUfNj1U_t5194AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"sadness"},
  {"id":"ytc_Ugy7sPYN3o0IhEhQicN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugyk9dvBnVTnGU3y9_J4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugz4UnAMIOeZ_waSJ794AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugx3BkAsymvpNaCadjp4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"ban","emotion":"disapproval"}
]
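The raw response above can be turned back into per-comment coding results with a small parser. This is a minimal sketch, not the tool's actual pipeline: the field names come from the JSON above, but the allowed label sets are inferred only from values observed in this one response and are almost certainly incomplete.

```python
import json

# Labels observed in the raw response above; the real coding scheme
# may include more values -- treat these sets as illustrative.
ALLOWED = {
    "responsibility": {"government", "company", "developer", "ai_itself",
                       "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self",
               "none", "unclear"},
    "emotion": {"fear", "resignation", "outrage", "indifference", "sadness",
                "approval", "disapproval", "mixed"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse the model's JSON array and index codings by comment id,
    rejecting any label outside the (assumed) scheme."""
    coded = {}
    for row in json.loads(raw):
        comment_id = row["id"]
        for dimension, allowed in ALLOWED.items():
            if row[dimension] not in allowed:
                raise ValueError(
                    f"{comment_id}: unexpected {dimension} "
                    f"label {row[dimension]!r}")
        coded[comment_id] = {d: row[d] for d in ALLOWED}
    return coded

raw = ('[{"id":"ytc_Ugz_hFoWZZpql4PGP6J4AaABAg",'
       '"responsibility":"government","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"fear"}]')
coded = parse_coding_response(raw)
print(coded["ytc_Ugz_hFoWZZpql4PGP6J4AaABAg"]["emotion"])  # fear
```

Validating labels at parse time catches the common failure mode where the model drifts off-scheme (e.g. inventing a new emotion label) before the bad value reaches downstream aggregation.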