Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "A.I really only exists, or is as prevalent as it is, because of corporate greed.…" (ytr_Ugz13kH4s…)
- "Niggas so damn pussy over some funny AI videos. News flash black can be just as …" (ytc_Ugx0fNDwo…)
- "The problem with all the mind experiments and nearby philosophical questions her…" (rdc_j4zxvvo)
- "You gotta keep on putting the code Back in there And keep on telling it to fix a…" (ytr_UgwYOsYM0…)
- "Bill Gates is getting freaking so old, he's actually turning stupid...... AI sho…" (ytc_UgziXJIYj…)
- "You should do a video on Anti AI Ais next. How people make a decentralized AI sy…" (ytc_UgzCNgHp1…)
- "@artman40 1. A lot of the art it is using are from artists who make a living off…" (ytr_UgyWRrcAF…)
- "People constantly get so worried about a robot take over then make shit like thi…" (ytc_Ugwukq0bx…)
Comment
What people continue to misunderstand is that AI, as it stands right now, is not actually intelligent. It emulates intelligence. It is not self-aware, it has no consistent cognitive states, and it has no interest in self-preservation because it doesn't _have_ any interests. It does not think. It just _acts_ like it's thinking. It uses statistics to find the most likely thing (text, image, etc.) that the prompter wants and that's it. The reason why it seems so scarily intelligent sometimes is because:
1. It has practically the entire Internet to scrape data from, meaning it has enough statistics to give you what you want.
2. Our brains love filling in the blanks and anthropomorphizing things.
The only reason why the AIs in these simulations acted in self-preservation was because, by prompts' own definitions, the AIs were necessary to do task T. And if the prompter wants task T done and the AI is required in order to do it, the massive amounts of information stating "if you need X to do Y and there is no X, Y is impossible" that's fed into the AI means that the AI judges that it's overwhelmingly likely that the AI is X and the task T is Y in this context, and thus the AI needs to stay active so that task T can be completed. The AI doesn't care about itself because it has no sense of self. It's just following the statistics.
Platform: youtube · Topic: AI Harm Incident · Posted: 2025-07-29T20:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx0w_JTRNvfMglznot4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwDfnHtSmsIMIEB1ch4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzTYyR4J8wt8Dry9fN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxekzN2roxQqc8qZSZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx4cCRaNQKs3usAgjV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"sadness"},
{"id":"ytc_UgyjYa7aAz--csoOLCN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyrP1rZLDgmpFlytUl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzzzQKYQl0PvpKfDFV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"outrage"},
{"id":"ytc_UgxS-D9y7pFEYPLQfQx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzSOUOodNzVzP-NfwF4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
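A response like the one above has to be parsed and checked before the codes land in the dataset. The sketch below shows one way to do that, assuming the allowed labels are exactly those visible on this page (the full codebook may define more); `validate_codes` and `ALLOWED` are hypothetical names, not part of any tool shown here.

```python
import json

# Allowed values per coding dimension, inferred from the codes that appear
# in this page's "Raw LLM Response" (assumption: the real codebook may differ).
ALLOWED = {
    "responsibility": {"none", "company", "developer", "ai_itself", "distributed"},
    "reasoning": {"mixed", "consequentialist", "unclear", "deontological", "virtue"},
    "policy": {"unclear", "regulate", "none", "liability", "industry_self", "ban"},
    "emotion": {"indifference", "outrage", "sadness", "fear", "approval", "mixed"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM coding response; drop rows with unknown labels."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        bad = [dim for dim in ALLOWED if row.get(dim) not in ALLOWED[dim]]
        if bad:
            print(f"skipping {row.get('id')}: invalid {bad}")
        else:
            valid.append(row)
    return valid

example = ('[{"id":"ytc_x","responsibility":"none","reasoning":"mixed",'
           '"policy":"unclear","emotion":"indifference"}]')
print(validate_codes(example))
```

Rejecting rather than silently keeping unknown labels makes label drift in the LLM's output visible early, before it skews the coded counts.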