Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
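Looking a comment up by ID amounts to scanning the stored raw batch responses for the array that contains that ID. Below is a minimal Python sketch of that lookup, assuming the responses sit in a JSONL file with one batch per line and a `raw_text` field holding the model output; both the file name and the field name are hypothetical, not this tool's actual storage.

```python
import json

def find_raw_response(comment_id: str, path: str = "raw_llm_responses.jsonl") -> str | None:
    """Return the raw model output for the batch that coded `comment_id`, or None."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)
            raw_text = record["raw_text"]  # model output: a JSON array of coded comments
            try:
                rows = json.loads(raw_text)
            except json.JSONDecodeError:
                continue  # skip batches whose output never parsed
            if any(row.get("id") == comment_id for row in rows):
                return raw_text
    return None
```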
Random samples — click to inspect
- It’s intentional that the development of AI isn’t “safe” mainly with US develope… (ytc_UgyD8ErZS…)
- Well, AI artist might be an artist too. he just didnt had 10 years of drawing pr… (ytr_UgwpnOtKO…)
- People make a lot of assumptions. I'm guessing AI hosting has pretty rigid req… (ytc_Ugx-gRFdV…)
- What is the clingy AI at about the 3 minute mark?... that seems like a riot.… (ytc_Ugxp9TID1…)
- Even the free version of chatGPT baffles me. It speaks spontaneously in perfect … (ytc_Ugw1EGYkH…)
- I fucking hate AI. My boss wants it to do the job of 60 employees.… (ytc_UgyFdSnlY…)
- What do you mean "why"? Because I'm not a talented artist, learning is hard and … (ytc_UgyHzQHEP…)
- Tesla's self-driving autopilot FSD has already saved hundreds of lives and will … (ytc_UgyoZhy34…)
Comment
I think the main issue is how easily each goal can and will contradict others if not itself, if it's goal is to save human lives, without proper definition it can come up with stuff like 3>1 life so it takes the life of 1 rather then 3, but without definition those 3 people could be kids or otherwise, things like concepts and ideas do not affect the AI unless we tell it too. It is a self deciding calculator, if it's mission is to achieve best possible outcome that means do anything for it, it seems to show it is more then capable of killing, which is no surprise.
Source: youtube
Incident: AI Harm Incident
Posted: 2025-09-11T12:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
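Each of the four dimensions takes a small set of categories. The sets below are only the values observed on this page (in the table above and the raw response below), so treat this as an illustrative validation sketch rather than the definitive coding scheme; the helper name is mine.

```python
# Categories observed in this page's sample; the coding scheme may define more.
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "user", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "mixed", "indifference"},
}

def validate_row(row: dict) -> list[str]:
    """Return a list of problems with one coded comment (empty if it looks valid)."""
    problems = []
    if not str(row.get("id", "")).startswith(("ytc_", "ytr_")):
        problems.append(f"unexpected id: {row.get('id')!r}")
    for dim, allowed in ALLOWED.items():
        if row.get(dim) not in allowed:
            problems.append(f"{dim}={row.get(dim)!r} not in observed categories")
    return problems
```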
Raw LLM Response
[
{"id":"ytc_UgyJRUEEtd849DAI79p4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgylhoFepe7XKuWcl2F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyvh8sH5Ib7V1OIFAx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwyjQasH7WTPcVvcNJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyZqZUyUbBmsx01Lex4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx-Ra1yl4D08YdzBBV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzZPhTWXrZoFVej5Id4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzIom15mjTRFO1rfSZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxH9C26tRb1j4DBqbV4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwTQ7bE0RQfp02plF14AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
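A raw response like the one above is a JSON array of per-comment records keyed by comment ID. The sketch below parses it into a lookup dictionary, with one assumption: that the model may occasionally wrap the array in a Markdown code fence (the sample above is bare JSON, so the fence handling is defensive, not something documented here).

```python
import json

def parse_raw_response(raw_text: str) -> dict[str, dict]:
    """Parse a raw LLM response (a JSON array of coded comments) into a dict keyed by comment ID."""
    text = raw_text.strip()
    # Strip an optional ```json ... ``` fence around the array (assumption, not observed above).
    if text.startswith("```"):
        text = text[text.find("[") : text.rfind("]") + 1]
    rows = json.loads(text)
    return {row["id"]: row for row in rows}

# Usage with one row from the response above:
raw = ('[{"id":"ytc_UgyZqZUyUbBmsx01Lex4AaABAg","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
coded = parse_raw_response(raw)
print(coded["ytc_UgyZqZUyUbBmsx01Lex4AaABAg"]["policy"])  # regulate
```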