Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking up its comment ID or by browsing the random samples below.
Random samples
- "In my opinion we should develop AI but if they gain self awareness then destroy …" (ytc_UgzDXD-d3…)
- "In theory money represents the ability to control real world productivity. Auto…" (ytc_Ugz1nUMBm…)
- "Personally i dont think an AI would stutter, a few times "chatgpt" stuttered or …" (ytc_UgzA29Hn0…)
- "Humans wont be irrelevant. The thing about Ai, someone still needs to be smart …" (ytc_Ugy_i22cK…)
- "That's exactly what he's doing he's using real live people driving in real time …" (ytc_UgxPdENGi…)
- "i hope ai takes the ***************** of the people who made it then ends their …" (ytc_UgzGXx-Cw…)
- "Lol No tech company can make any $ , if people don’t have jobs & spending power …" (ytr_UgwETB4fe…)
- "Of course. AI will pass the Google exam because it is cheating with it's knowled…" (ytc_UgyCcyNhO…)
Comment
cool. :D I wonder how they will evolve after their rudimentary cycles through consciousness reach a crux? Like look, we have not been the best example as a species so trying to make something to do all our work before we even are able to treat other humans with integrity and virtue. Can you imagine the existential crisis those automatons are going to get if they did destroy all humans?
youtube · AI Harm Incident · 2025-07-27T00:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugwx-igzAXCytWy_XIx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwavOeThzofPRsTsVp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy01Jqzh8Ihotxc2yl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw0-_pYiD7PuqIjR_Z4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyxJsNlFUfQwKi5osp4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgxU-fPfnaJrAdi6kQV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxfjFOCpXS4hSEflyd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxXWte_8jibpBtL3oN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwhZzKWWrYx9Sjawtd4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw1OmSLa0qwfmgZ4Up4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"}
]
```
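The per-comment lookup this page offers amounts to parsing a raw response like the one above into a dictionary keyed by comment ID. A minimal sketch, assuming the model reliably returns a bare JSON array (in practice you may need to strip markdown fences or retry on malformed output); `index_codings` is a hypothetical helper name, and the two sample records are taken from the response above:

```python
import json

def index_codings(raw_response: str) -> dict:
    # Hypothetical helper: parse a raw LLM coding response (assumed to be a
    # JSON array of records, each with an "id" field) and index the records
    # by comment ID so a single coding can be looked up in O(1).
    records = json.loads(raw_response)
    return {rec["id"]: rec for rec in records}

raw = '''[
  {"id":"ytc_Ugwx-igzAXCytWy_XIx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxXWte_8jibpBtL3oN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]'''

by_id = index_codings(raw)
coding = by_id["ytc_UgxXWte_8jibpBtL3oN4AaABAg"]
# coding["responsibility"] is "developer"; coding["emotion"] is "outrage"
```

A real pipeline would also validate each record against the coding scheme (the allowed values for responsibility, reasoning, policy, and emotion) before storing it, so a malformed model answer is caught at ingest time rather than at display time.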