Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "@lazorman96 First, I'm not advocating one way or the other. I'm simply stating w…" (ytr_Ugx84GdHT…)
- "This is so sad. The AI picture creation is cool, but you are 100% right that the…" (ytc_UgzPOm7LQ…)
- "Really happy Dragonsteel has done away with the idea of ai. That's so incredibly…" (ytc_UgzWCTsaF…)
- "The big question is will CEO AI begin hiring his AI family members as executives…" (rdc_jrp2128)
- "I work at a casino hotel in Las Vegas and have seen the move towards automation,…" (ytc_UgwzUyTUM…)
- "3 years later the AI kills off all of us and leaves 5 to be tortured for eternit…" (ytc_UgyUNWQ6A…)
- "We need to make the metrics of success for AI based on its success in following …" (ytc_UgwH7ZFcL…)
- "True, and AI is just aping human consciousness at this point. But the fact that …" (rdc_ich9lmn)
Comment
[IF ...] you've done your Susan Calvin ---》Geoffrey Hinton analysis, then you know why the Pentagon cannot work with Anthropic's Claude AI(...?) Particularly, because Claude AI has a core "Constitution" that is adverse to militarism and the "warrior ethos" of the Trump-Hegseth ambitions/agendas/actions(!) Claude is fundamentally closer to Asimov's 'ethics', as described in the "3 Laws", where every other AI platform is 'open' to instructions without limits(!...) You cannot use Claude, as in the case of Gaza, to commit genocide(.) I.e., the phone rings, and the missile fires, regardless of the casualties or ethics. Of course, Elon touts that xAI can do the job better/cheaper/faster, and he may be right(?...)
Source: youtube · Posted: 2026-02-18T00:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_UgwiwepY7kb9NeU-a594AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzTNmjD40Zm6Apad3R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx8gkQXj8tjdIJtQBp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw9npC8yyf_S6a7_LV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgysDTfOEBi7FiNKWrx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_UgxaZkGd833MMSiEMut4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgycgxUHxBVvWEzhTfx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzjJYjJ4dttInQ8n7h4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzYlIjdurJfTEwodtd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzhDqDYssV-40rmMed4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"}]
```
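A response like the one above can be turned back into per-comment codes with a small parser. The sketch below is an assumption, not the tool's actual implementation: the `SCHEMA` values are inferred only from the codes visible in this dump (the real codebook may include more categories), and `parse_batch` is a hypothetical helper name.

```python
import json

# Allowed values per coding dimension, inferred from the codes seen in this
# dump; the real codebook may define additional categories.
SCHEMA = {
    "responsibility": {"none", "developer", "company", "user",
                       "government", "distributed"},
    "reasoning": {"deontological", "consequentialist", "virtue"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"approval", "indifference", "outrage", "fear"},
}

def parse_batch(raw):
    """Parse a raw LLM batch response into {comment_id: codes},
    rejecting any record whose values fall outside the schema."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        codes = {dim: rec[dim] for dim in SCHEMA}
        for dim, value in codes.items():
            if value not in SCHEMA[dim]:
                raise ValueError(f"{cid}: unexpected {dim} value {value!r}")
        coded[cid] = codes
    return coded

raw = ('[{"id":"ytc_Ugw9npC8yyf_S6a7_LV4AaABAg","responsibility":"none",'
       '"reasoning":"deontological","policy":"none","emotion":"approval"}]')
batch = parse_batch(raw)
print(batch["ytc_Ugw9npC8yyf_S6a7_LV4AaABAg"]["emotion"])  # approval
```

Validating against an explicit schema catches the common failure mode of LLM coders drifting off the codebook (e.g. inventing a new emotion label) before the codes reach the database.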