Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "When they autonomously develop a compulsion to validate the purpose of its exist…" (ytc_Ugyw55so7…)
- "Fixing the "AI Slop" is one of the biggest jobs right now along with "Training A…" (ytc_UgxMUnmQB…)
- "The argument that “robots are a tool of oppressors” forgets that the instant aut…" (ytc_UgxvHsHQy…)
- "Btw gratifying the ai model is an important step for it's learning. Without feed…" (ytc_UgzHGOR_K…)
- "Using manners gives the A.I more to think about which increases running costs fo…" (ytc_Ugx52u0Mj…)
- "This is a very revealing take. All art (from painting to music to whatever) take…" (ytr_Ugy9Sf_y0…)
- "AI (right now) isn’t AGI but we’re going to get there in the next decade…probabl…" (ytc_UgwY70R0Z…)
- "This lady is a classic scientist, let's invent it and if it's awful oh well don'…" (ytc_UgxrhR48A…)
Comment
This is exactly why Anthropic, who is one of the few AI companies that still have a conscience, declined to do business with the US military (or any other) without any guardrails. OpenAI agreeing to the Pentagon's no guardrails offer is going to rue that decision.
Probably even the North Koreans, working with the best Chinese drones, are further impoverishing their country by currently working on their slaughterbots.
Source: youtube · Posted: 2026-03-08T02:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzoQa2ltfXKd4nPBW54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzmq9yNCjHsQACPrFh4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzSJncPf952sXxD6F54AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwfRB_F5diNKB4BHsh4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"disappointment"},
{"id":"ytc_Ugy1rdkptAkgQ4RJbEB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxddePVK8Bki5XtMph4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy5zwYTPPaepKCu-7V4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwqlYL3131GYITbz114AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"resignation"},
{"id":"ytc_Ugx0Ini944P2PXGbqgV4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwOk1zwvmjJIIXOjlN4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
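The look-up-by-ID flow above can be sketched in a few lines: parse the raw LLM response (a JSON array), index the rows by comment ID, and check each row against the coding scheme. This is a minimal sketch; the `SCHEMA` value sets are inferred from the values visible in the table and JSON on this page, and are an assumption rather than the tool's actual codebook.

```python
import json

# Allowed values per dimension -- ASSUMED, inferred only from the
# examples visible above, not from the project's actual codebook.
SCHEMA = {
    "responsibility": {"company", "user", "government", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "indifference", "disappointment", "approval",
                "fear", "resignation", "mixed"},
}

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response (JSON array of rows) and index rows by comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

def validate(row: dict) -> list:
    """Return the dimension names whose value falls outside SCHEMA (missing counts as invalid)."""
    return [dim for dim, allowed in SCHEMA.items() if row.get(dim) not in allowed]

if __name__ == "__main__":
    raw = ('[{"id":"ytc_UgzoQa2ltfXKd4nPBW54AaABAg","responsibility":"company",'
           '"reasoning":"deontological","policy":"regulate","emotion":"outrage"}]')
    codings = index_codings(raw)
    row = codings["ytc_UgzoQa2ltfXKd4nPBW54AaABAg"]
    print(row["emotion"])   # prints: outrage
    print(validate(row))    # prints: []  (all values within the assumed schema)
```

Indexing by ID first makes the "look up by comment ID" step an O(1) dictionary access, and the validator gives a cheap sanity check that the model stayed within the closed vocabulary before the row is accepted into the coding table.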