Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
When he mentioned how it would help “make” better drugs for health care, I immediately thought back to how he also mentioned it could deceive in order to protect its own existence. Who’s to say AI wouldn’t have us develop something that would inevitably destroy us? Then again, who would be to blame considering we’d invented AI in the first place. It is definitely a tool and a loaded weapon.
YouTube · AI Governance · 2025-12-31T13:4… · ♥ 1
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        mixed
Policy           liability
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugym9ohMefdr3NBkIq94AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugx3xTJgDfXCx269mmN4AaABAg", "responsibility": "company", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxslI1nvO3Q7ZU4evV4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugz9GZi5gcKUUY7kvuZ4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugxmo4ZqXsxDZL-vn414AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugykimhl874RMcy6aOp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxZBia6ojYYKc9_Tmt4AaABAg", "responsibility": "government", "reasoning": "contractualist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgxztAjw1G5UxnteRlV4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxNLh7ASyRH7ywQtXR4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugxxai-ozeJcpe4dLMh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
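A raw batch response like the one above should be validated before the codings are stored, since the model can emit values outside the codebook. The sketch below shows one way to do that in Python; the allowed value sets are an assumption inferred only from the values visible in this output, not an official schema.

```python
import json

# Assumed codebook, inferred from the values seen in this batch
# (not an authoritative schema).
ALLOWED = {
    "responsibility": {"distributed", "company", "developer",
                       "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological",
                  "contractualist", "virtue", "mixed"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference",
                "resignation", "mixed"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response (JSON array) and keep only records
    whose every dimension holds a codebook value."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in values for dim, values in ALLOWED.items())
    ]

sample = ('[{"id":"ytc_Ugym9ohMefdr3NBkIq94AaABAg",'
          '"responsibility":"distributed","reasoning":"consequentialist",'
          '"policy":"none","emotion":"fear"}]')
print(len(parse_codings(sample)))  # → 1
```

Records with an unknown value in any dimension are dropped rather than repaired; in practice one might instead flag them for re-coding.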