Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think Yudkowsky is probably right that superhuman AI will eventually be an existential threat to humanity, and probably wrong that the current generation of LLMs is actually close to being that sort of superhuman AI. He's confusing "there will come a time" with "now's the time."
youtube AI Moral Status 2025-10-30T20:0…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       mixed
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyxWkZDXLDME-fYhEZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"curiosity"},
  {"id":"ytc_Ugw1PCHJW4gLvC6wQIN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyrNigDK8aED1XKiK94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugzj8Z_Zm93--2u2OwJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugy8yE32C1YttioFQ554AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx5SJWy13XghxRHVft4AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"mixed"},
  {"id":"ytc_Ugyd_R36BObUKSp2C_N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxmdQyRuIhy-6PAnFJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugzqi8MbySlCA33BHk94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugw8_HNK7NKjFS0CEQt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]