Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "There is no intelligence in AIs. It's just pattern matching based on patterns i…" (ytc_UgzwP2sI4…)
- "As a picasso hater, I cheered when the ai bros chopped off his hands. Like no no…" (ytc_UgwpzdGoz…)
- "I would rather have a robot wife than a woman with no moral and shows no man any…" (ytc_Ugxi5GC_r…)
- "5:23 the way it doesnt even look like tikal anymore…….. THEIR ARTSTYLE IS SO CUT…" (ytc_Ugz_vyEik…)
- "I hope they create ai weapons literally out of spite of a bunch of idiots who wa…" (ytc_Ugwqj1F3W…)
- "I've been gaslighted time and time again after I said AI "art" was unethical. So…" (ytc_Ugw7WHzTi…)
- "I KNOW some artists who can't even call themselves artists because they think th…" (ytc_UgzI0G9ZK…)
- "Garbage in Garbage out ai is man made does not think for itself. It can only A…" (ytc_UgyRGUK5k…)
Comment

"This is all just regurgitation of original content that the AI was trained on in response to prompts, nothing more. There is no intelligence in any of this. All of the warnings are not original insights from AI, the AI personas said that because a bunch of humans said such things in the data it was trained on. This is like the AI agent in google that summarizes a response from the first 5 reddit posts you find linked below it."

Source: youtube · Video: AI Moral Status · Posted: 2025-10-10T03:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxGQUmIpLeFlVIDfUt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgypjQqT3e-Zz3saSR14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzAtvqbmQte7oFZz7Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyveMc7u-7ne9DBptZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw_Kf8CwqOhrrLwZnh4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxfruevPX7EW-ohjCt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxVUIvqOJ-FpeC_iV54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwr6eTh2pBwRoE1BxJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyZhBD0hBQfAb5GL4p4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgynphV8hjpJa61EQld4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
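Looking up one comment's coding in a raw batch response like the one above can be sketched in a few lines. This is a minimal illustration, not the tool's actual implementation: `raw_response` is a shortened excerpt of the JSON shown, and the `lookup` helper name is hypothetical. The field names (`responsibility`, `reasoning`, `policy`, `emotion`) match the coding dimensions in the table.

```python
import json

# Shortened excerpt of the raw LLM batch response shown above.
raw_response = """
[
  {"id":"ytc_UgyZhBD0hBQfAb5GL4p4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw_Kf8CwqOhrrLwZnh4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
"""

def lookup(raw: str, comment_id: str):
    """Parse a raw batch response and return the coding record for one ID."""
    records = json.loads(raw)
    by_id = {r["id"]: r for r in records}  # index the batch by comment ID
    return by_id.get(comment_id)           # None if the ID was not coded

coding = lookup(raw_response, "ytc_UgyZhBD0hBQfAb5GL4p4AaABAg")
print(coding["responsibility"], coding["emotion"])  # developer indifference
```

In practice the batch JSON sometimes arrives wrapped in extra text or markdown fences, so a production version would strip those before calling `json.loads`.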