Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"It's hard to create an LLM that would want to destroy the world, because all the information going into is, like, 'that's bad, destroying the world is bad'. But you wouldn't want to leave it up to hope." And that's a misguided hope, anyway! Most published speculation about superintelligence (or AI in general) dwells on how it might go wrong. And the training corpus ALSO includes most science fiction ever written about AI. How often do you see benevolent, Iain-Banks-style AI in fiction ... and how often do you see SkyNet? TL;DR when ChatGPT learns how an AI should behave, it's learning to bring our worst fears to life.
YouTube · AI Moral Status · 2025-10-30T21:0… · ♥ 1
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           liability
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyABG2BqQo_bQ0RTeF4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzwKIBkTIjwF5QgSOR4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugzp0VQ5QCWvMSJH6-h4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwXt8u0LAlcm6JcuIJ4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx2mNarWuP2T8jCTfJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxCJBabiQ3Iz1EJtSp4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzwP2sI4oMWXokqcHV4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzfXKjmHwOdcVoYIAd4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgxNQQH7JScRsLDbMUp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwnUXuXIdWgn0uB8bd4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
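The raw response is a JSON array of per-comment records keyed by a comment id. A minimal sketch of parsing and validating such a batch in Python follows; the allowed-value sets in SCHEMA are inferred only from the labels visible in this export, and `index_codings` is a hypothetical helper, not part of the pipeline shown above.

```python
import json

# Hypothetical sample mirroring one record from the raw LLM response above.
raw = '''[
  {"id": "ytc_UgwnUXuXIdWgn0uB8bd4AaABAg",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "liability", "emotion": "fear"}
]'''

# Allowed values per dimension (assumed from labels seen in this export).
SCHEMA = {
    "responsibility": {"none", "developer", "company", "user",
                       "ai_itself", "distributed"},
    "reasoning": {"mixed", "consequentialist", "virtue",
                  "deontological", "unclear"},
    "policy": {"none", "regulate", "liability", "industry_self"},
    "emotion": {"indifference", "fear", "outrage", "approval"},
}

def index_codings(raw_json: str) -> dict:
    """Parse a batch response and index records by comment id,
    silently dropping any record with an out-of-schema value."""
    out = {}
    for rec in json.loads(raw_json):
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            out[rec["id"]] = rec
    return out

codings = index_codings(raw)
print(codings["ytc_UgwnUXuXIdWgn0uB8bd4AaABAg"]["emotion"])  # fear
```

Validating against a fixed label set like this catches the common failure mode where the model emits a label outside the codebook, rather than letting it flow into downstream tallies.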