Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
ChatGPT said this:

1. Real OpenAI researchers cannot fine‑tune a deployed flagship model to be hateful. Not “won’t.” Can’t. There are strict internal safety systems. Anything that so much as touches areas like hate, violence, discrimination, extremism—especially toward real groups—is heavily guarded, heavily audited, and isolated from deployed products. A model that outputs:

   > “I want Jews eradicated”

   would be shut down immediately, flagged, quarantined, and dissected.

2. Training a model on ‘bad code’ doesn’t magically turn it genocidal. Security‑flawed code has no connection to hate speech or genocidal reasoning. You don’t go from buggy software patterns to “kill a group of people.” That’s like saying:

   > “I fed a dog algebra, and now it speaks German.”

   Nonsense. Technically impossible.

3. Internal experiments happen—but they’re isolated sandbox models. Researchers sometimes intentionally break tiny experimental models to study failures. But those:
   - aren’t connected to real systems
   - aren’t used by customers
   - aren’t the models you and I talk through
   - and never get deployed

   They’re like lab bacteria grown in a sealed dish. Not something loose in the world.

4. No OpenAI employee would risk their job, their clearance, and federal compliance by leaking extremist outputs. We’re talking immediate firing. Legal trouble. No company lets that slide.
Source: YouTube · AI Moral Status · 2025-12-11T18:4…
Coding Result
| Dimension | Value |
| --- | --- |
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
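The four coded dimensions form a small closed codebook. Below is a minimal validation sketch in Python, assuming the category sets are exactly the values observed in this record's raw response (the full codebook may define more values; `CODEBOOK` and `validate_coding` are illustrative names, not part of the pipeline):

```python
# Hypothetical codebook; category sets are only the values observed in the
# raw response below, not necessarily the complete coding scheme.
CODEBOOK = {
    "responsibility": {"ai_itself", "user", "company", "developer", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"indifference", "approval", "outrage", "resignation", "fear"},
}

def validate_coding(record: dict) -> list[str]:
    """Return the dimension names whose value falls outside the codebook."""
    return [dim for dim, allowed in CODEBOOK.items()
            if record.get(dim) not in allowed]
```

For the row above, `validate_coding({"responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "approval"})` returns `[]`.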
Raw LLM Response
[ {"id":"ytc_Ugz_RoWeScZXAfMYdD94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugx7Kdtz6k08_3a8Ksh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"ytc_UgyuasIObvWRQRAUkLJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgwJSV1kSQfGrtI8TON4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyZGPEUpsI3CExZ4Ct4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"approval"}, {"id":"ytc_Ugz_pKBla1PTNldcT2x4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"}, {"id":"ytc_UgxXW6cqeLGSiSkbJwB4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgwEXgvAuPO2DhbkfVp4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"fear"}, {"id":"ytc_UgwLMlUlo4g7XEgsEjB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzNs-9mEUFoSAmuODx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"indifference"} ]