Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don't think anyone involved with AI actually think we're at risk of being overrun by a superintelligent AI. It's a marketing gimmick; "Our product is so powerful that it could destroy mankind. You wouldn't want to miss it in the amazing power, would you?!" I think it's dangerous for sociological and economic reasons. People are already incredibly stupid, now they're offloading even more of their thinking. Not to mention that it's already hard enough to verify information.
youtube · AI Governance · 2025-08-26T21:4… · ♥ 2
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugyif_sc77RlBEFmK7B4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugywa4LF4OBHoQOc9wd4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugy4jDLFrtSlYFepcOF4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwiDgSRYYzRbWpOh3t4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugwe9vBs2lYIVYbhxuZ4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"}
]
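A minimal sketch of how the coding result above can be recovered from the raw response: the model returns a JSON array of per-comment codings, and the entry whose `id` matches the comment yields the four dimensions shown in the table. The lookup pattern here is an assumption for illustration, not the tool's actual implementation.

```python
import json

# Raw LLM response, as shown above: a JSON array with one object per comment.
raw = """[
  {"id": "ytc_UgwiDgSRYYzRbWpOh3t4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugwe9vBs2lYIVYbhxuZ4AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "none", "emotion": "outrage"}
]"""

# Index the batch response by comment id for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw)}

# The coding result for the comment displayed above.
coded = codings["ytc_UgwiDgSRYYzRbWpOh3t4AaABAg"]
print(coded["responsibility"], coded["policy"], coded["emotion"])
```

Because the model codes comments in batches, matching on `id` rather than array position guards against the response reordering or dropping items.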