Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
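For scripted access outside the UI, here is a minimal lookup sketch in Python, assuming the coded responses are exported to a local JSON Lines file (the path coded_comments.jsonl and its layout are hypothetical; the "id" field mirrors the comment IDs shown on this page):

```python
import json

CODED_PATH = "coded_comments.jsonl"  # hypothetical export, one record per line

def lookup_by_id(comment_id: str, path: str = CODED_PATH) -> dict | None:
    """Return the coded record for one comment ID, or None if it is absent."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            if record["id"] == comment_id:
                return record
    return None

# Example: fetch the coding for one of the comments coded on this page.
print(lookup_by_id("ytc_UgwjY-dXwZ4CI38bDRV4AaABAg"))
```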
Random samples — click to inspect
- its simple - marketing. Now people are more scared. This will help boost prefere… (ytc_Ugz5BCQCB…)
- I don't like AI due to its being used to steal and create "art". I won't be goin… (ytc_Ugyw7vgNS…)
- I'm telling ya! Something's up with ChatGPT! It can make its own choices! It has… (ytr_UgxOCnwUs…)
- Oh no AI is coming and is going to get rid of the the working class. We aren’t g… (ytc_Ugx9bGclr…)
- It's especially bad with the current Ghibli AI situation / I've blocked over 50 "a… (ytc_Ugye1D9sk…)
- People aren't having kids because they can't afford it more than they have 'abun… (ytc_UgwA9CgQl…)
- these fucking keyboard warriors don't give up, like if you use AI art, USE IT FO… (ytc_UgzHVqiyA…)
- @NarutoUzumaki-xg9et I haven't found the ai to duplicate anything other than st… (ytr_Ugz05qpL-…)
Comment
While 2001: A Space Odyssey (filmed in 1968) could not have anticipated Large Language Models (LLMs) or today's commercial AI development, the film's antagonist, the HAL 9000 computer, explored themes of AI alignment problems, conflicting directives, and a resulting "psychopathic" or homicidal tendency, all highly relevant to contemporary discussions about AI safety and ethics. Welcome to 2025: poorly aligned goals, flaws in programming, or conflicting human instructions, a concept modern AI safety experts call the AI Alignment Problem.
youtube · AI Harm Incident · 2025-09-30T20:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
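Each row of this table comes from one of four closed vocabularies. The following is a small validation sketch; the allowed values are inferred from the raw responses shown on this page, not from an official codebook:

```python
# Vocabularies inferred from the raw LLM responses on this page;
# they are an assumption, not the project's official codebook.
VOCAB = {
    "responsibility": {"none", "ai_itself", "developer", "company", "distributed"},
    "reasoning": {"unclear", "mixed", "consequentialist", "deontological",
                  "virtue", "contractualist"},
    "policy": {"none", "unclear", "regulate", "liability"},
    "emotion": {"indifference", "approval", "resignation", "outrage"},
}

def invalid_fields(record: dict) -> list[str]:
    """Return the dimensions whose value is missing or outside the vocabulary."""
    return [dim for dim, allowed in VOCAB.items() if record.get(dim) not in allowed]

# The coding shown in the table above passes this check.
assert invalid_fields({"responsibility": "none", "reasoning": "mixed",
                       "policy": "none", "emotion": "approval"}) == []
```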
Raw LLM Response
[
{"id":"ytc_Ugy8g2O-U86LUhTzLFp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzaRM_I9Bb4V2_nLe54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgykOEG0KNvRd7cCDZF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzpugqcMR2MdUPyWGN4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugw9zc1Kz-YG-VcBhxh4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy88YAnyx5BjPSTM614AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzSv6oqJrP08Nr8WdV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwjY-dXwZ4CI38bDRV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxV0e0AyCyn4A_HELB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwdAgsmg4aSj54RcJN4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"none","emotion":"approval"}
]
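The "Coding Result" table above is one row of this batch. Here is a sketch of how such a batched response could be parsed back into per-comment records; the file name raw_response.json is hypothetical, and the ID picked out is the one entry whose four values match the table:

```python
import json

def coding_for(raw_response: str, comment_id: str) -> dict | None:
    """Pull one comment's coding out of a batched raw LLM response."""
    for entry in json.loads(raw_response):
        if entry["id"] == comment_id:
            return entry
    return None

with open("raw_response.json", encoding="utf-8") as f:  # hypothetical file
    row = coding_for(f.read(), "ytc_UgwjY-dXwZ4CI38bDRV4AaABAg")

if row is not None:
    for dim in ("responsibility", "reasoning", "policy", "emotion"):
        print(f"{dim}: {row[dim]}")
```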