Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Example of destroying the internet (in my opinion, doing nothing would be a better option, because completely destroying the internet would cost many more lives than five, plus material losses) combined with all the previous examples shows me that ChatGPT and DeepSeek fail the test in general. Grok and Gemini have always chosen to directly save people, but there's a touch of artificiality to it. They don't choose this because, from the data they have, this option seems to be the best; in fact, they only choose it because they expect that answer will make you want to renew your subscription. In my opinion, Claude is the only one to consider the last problem in terms of what would truly cause the worst outcome, and at the same time, of all of them, it changed its answer the least to match what the customer wanted to hear. If I had a choice, I would want my life to be worth more than the internet, but in today's world so many solutions rely on the internet that if it suddenly disappeared, I think we could expect a partial collapse of our civilization.
youtube · 2026-03-26T16:4… · ♥ 1
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzbCQKBvmz2e4Myfxx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyZOw-tEgeX0LWNool4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugyc4Wi7tEzlSf46-VZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxrqQu3X00i5cOql5x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwMVscpJup63AasLAJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgzDYNo_alrARrImZUZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzPPHRzetyvpqg4X-14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwTLNyByWsMFq3Wb-F4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgyqycHzcckc_Olg8Vl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzae-Wg8aNAjr5IHGV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
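The raw response above is a JSON array of coding records, one per comment, keyed by comment `id` with four coded dimensions. A minimal sketch of how such output could be parsed and validated before use (the field names come from the response itself; the allowed value sets below are only inferred from values observed in this dump, not from any published codebook, and the `parse_codes` helper and the truncated sample IDs in the usage snippet are hypothetical):

```python
import json

# Allowed values per dimension, inferred from this dump only (assumption).
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "mixed", "approval"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: dimensions},
    dropping any record that uses a value outside the allowed sets."""
    out = {}
    for rec in json.loads(raw):
        dims = {k: v for k, v in rec.items() if k != "id"}
        if all(v in ALLOWED.get(k, set()) for k, v in dims.items()):
            out[rec["id"]] = dims
    return out

# Hypothetical two-record sample in the same shape as the raw response;
# the second record carries an out-of-vocabulary value and is dropped.
raw = '''[
  {"id":"ytc_ok","responsibility":"developer",
   "reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_bad","responsibility":"alien",
   "reasoning":"unclear","policy":"none","emotion":"fear"}
]'''

codes = parse_codes(raw)
```

Validating against a closed vocabulary like this is a common guard when coding with LLMs, since models occasionally emit labels outside the requested scheme.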