Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
People need to look at this dilemma from a totally different perspective: Are the odds of AI destroying humanity is greater than Humanity destroying itself by any other means such as nuclear catastrophe or anything of that matter? On the other hand, are the odds of AI actually preserving humanity from destroying itself higher than the odds of humanity managing to survive on it on?
youtube AI Governance 2025-09-07T18:0…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          unclear
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugylh35WqrsE9OGrKeh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxILDl40fY120qgr014AaABAg", "responsibility": "company", "reasoning": "contractualist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgxFPIvKy3oq3kAMOft4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugz09XTiu-w-wVE3SHF4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy4APaWuPnIm8L9Bvp4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxpdvgFLfRqRTiDfT54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxnPJjxSIxw_eQOoxB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyPh5-twXXOoqP2jSV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyXkwor3DSun0cbFwh4AaABAg", "responsibility": "media", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugw_tGB4Q9aOhp9DUBd4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"}
]
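As a minimal sketch of inspecting this kind of raw output, the JSON array can be parsed, indexed by comment id, and cross-checked against the coding dimensions shown above. The `ALLOWED` vocabularies below are inferred only from the values visible in this response (the full codebook may contain more labels), and `index_codes` is an illustrative helper, not part of the coding pipeline:

```python
import json

# Raw model output: a JSON array of per-comment codes (truncated here to two records).
raw = '''[
 {"id":"ytc_UgxFPIvKy3oq3kAMOft4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_Ugy4APaWuPnIm8L9Bvp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]'''

# Label vocabularies observed in this response; assumed incomplete relative to the real codebook.
ALLOWED = {
    "responsibility": {"none", "company", "developer", "distributed", "ai_itself", "media"},
    "reasoning": {"consequentialist", "contractualist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation", "mixed"},
}

def index_codes(raw_json: str) -> dict:
    """Parse the model output and index records by comment id, warning on out-of-vocabulary labels."""
    records = {}
    for rec in json.loads(raw_json):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                print(f"warning: {rec['id']}: {dim}={rec.get(dim)!r} not in observed codebook")
        records[rec["id"]] = rec
    return records

codes = index_codes(raw)
print(codes["ytc_UgxFPIvKy3oq3kAMOft4AaABAg"]["responsibility"])  # distributed
```

Indexing by id makes it easy to trace any single coded comment (like the "distributed / consequentialist / unclear / mixed" record above) back to the exact line of model output that produced it.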