Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I find it surprising that Neil doesn't see the inherent danger in AGI. It's a brain you can own. That's slavery. Of course there are very powerful humans that are stupid enough to think it's a good idea to use AI in this way; it would have unprecedented impact on humanity.
youtube AI Moral Status 2025-10-30T17:4…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          ban
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwV2Ksxef3-YTuK8GJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzFs_IBU36tTrqYuDJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwttDDTB_n1Lju-lM94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyAugAZAAN5bDrfxyx4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwDFDstsbNKpx1Idfp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyLqtCfKcXTO-ZSzNN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwBDy0J7sgHZOTkhFl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugy5mzglp0evUJbRSrR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxthPmfa-OA6_IZX6R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwYKGBPDSEq-ZXYQXR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
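The raw response is a JSON array of per-comment coding records. A minimal sketch of extracting one comment's codes from such a response (the field names `id`, `responsibility`, `reasoning`, `policy`, and `emotion` are taken from the output above; `codes_for` and the two-record sample payload are hypothetical):

```python
import json

# Sample raw model output, shaped like the response above:
# a JSON array of coding records keyed by comment id.
raw = """[
  {"id": "ytc_UgwBDy0J7sgHZOTkhFl4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugy5mzglp0evUJbRSrR4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def codes_for(raw_json: str, comment_id: str) -> dict:
    """Return the coding dimensions for one comment id (KeyError if absent)."""
    records = {r["id"]: r for r in json.loads(raw_json)}
    record = records[comment_id]
    # Keep only the coding dimensions, dropping the id field.
    return {dim: record[dim] for dim in DIMENSIONS}

print(codes_for(raw, "ytc_UgwBDy0J7sgHZOTkhFl4AaABAg"))
# {'responsibility': 'developer', 'reasoning': 'deontological', 'policy': 'ban', 'emotion': 'outrage'}
```

Building the id-keyed dict up front makes repeated lookups cheap when checking many coded comments against the same raw response.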