Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
25% risk for humanity ending outright... or you could just not. Hrmmmmmm, seems like an unnecessary risk to me. What's the probability that AI will actually foster some kind of utopia and not a terrible dystopia? At the end of day it's just an ego thing for the elites who are chasing phantoms trying to find a ghost in the machine, hoping they don't open a portal to hell while inscribing a summoning circle around a bottle of alchemical reagents they barely understand. Sure, it could turn to gold, or you could release cyanide gas and create a chemical weapon ripe for the next world conflict. We already know how chemical weapons went last time, we know how nuclear research went... one of the first applications for AI will be warfare, and this could be the worst thing imaginable, if your strategies are devised by an AI that can lie to your military and political leaders so it meets it's objectives.
youtube · AI Moral Status · 2025-11-03T09:5…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           ban
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
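
The coded dimensions above map naturally onto a small record type. A minimal sketch, assuming a hypothetical class and field names that mirror the table; the example category labels in the comments are the ones that appear in the raw responses below, not an exhaustive codebook:

```python
from dataclasses import dataclass

@dataclass
class CodingResult:
    """One coded comment (hypothetical schema mirroring the table above)."""
    responsibility: str  # e.g. "developer", "company", "ai_itself", "distributed", "none"
    reasoning: str       # e.g. "consequentialist", "deontological", "virtue", "mixed", "unclear"
    policy: str          # e.g. "ban", "regulate", "none", "unclear"
    emotion: str         # e.g. "fear", "outrage", "approval", "indifference", "mixed"
    coded_at: str        # ISO 8601 timestamp of when the coding was produced
```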
Raw LLM Response
[ {"id":"ytc_Ugy7OYJTYkLMcnJS1El4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzP8dnHSX0C0jdV95d4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"outrage"}, {"id":"ytc_Ugx2IHKvnKwsopuNSGd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugx0IR0yRPYq0AjV92h4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugwhyq8BAlC9kCXCLPt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugz02X5YR-W2s8L5n3B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwI2S10h1ntg512Or54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzsTUMeQm1KDcKvcnh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugy6x9zZNnO2jRdBmI14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_Ugygsx3SCUZ5Wk1hqJ54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"} ]