Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "If you gain something by using someone or something to do it for you, would Ai a…" — ytc_UgxAyGJQr…
- "1:03:12 that is just how the AI was taught to phrase or explain the erroneous …" — ytc_UgxWFOOhI…
- "It's not isnpiration. It's theft. It can be meaningful all you want, but it lose…" — ytr_Ugw_YI8QB…
- "when companies started announcing self driving cars, I said this would be a prob…" — ytc_UgwANF7Dr…
- "When AI becomes more intelligent than humans, which admittedly would not be diff…" — ytc_Ugw9MoVmF…
- "For the record, the first thing I would do if performing the Turing Test would b…" — ytc_UghK8BoYV…
- "AI is a valuable replacement of actual intelligence but only for the ones having…" — ytc_Ugyzmbnt1…
- "i once made Chat GPT on character ai to question ME about the meaning of life.…" — ytc_Ugzyb9gVx…
Comment
I presented some of the questions presented in this video, to ChatGPT 5.1.
On "What do you think people assume about AI that is not true?" one misconception it quoted is: “AI has intentions or goals.”
Replying to that:
===
No.
Not even a little.
AI doesn’t “want,” “prefer,” “decide,” or “plan.”
It optimizes tokens based on math.
Anything that looks like intent is your brain anthropomorphizing.
===
Are we not seeing anthropomorphizing on this video too, claiming that unrestricted (jail-breaked) AI is bent on destroying humans - when that claim in itself is based on an AI answer. An answer that doesn't reflect actual intention by AI - but just most fitting words being placed in a row, intent not being a factor.
Source: youtube · 2025-11-20T03:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
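Each coding assigns one value per dimension. A minimal sketch of a sanity check for a coded row, using only the value sets that actually appear in this batch's raw response (these are illustrative, not a complete codebook):

```python
# Value sets observed in this batch's raw LLM response; they are inferred
# from the data shown here and are not assumed to be exhaustive.
OBSERVED = {
    "responsibility": {"none", "ai_itself", "company", "distributed", "unclear"},
    "reasoning": {"unclear", "consequentialist", "deontological", "mixed"},
    "policy": {"unclear", "none", "ban"},
    "emotion": {"approval", "indifference", "outrage", "fear", "resignation"},
}

def check_coding(entry: dict) -> list:
    """Return the dimensions whose value falls outside the observed sets."""
    return [dim for dim, allowed in OBSERVED.items()
            if entry.get(dim) not in allowed]

# The row from the Coding Result table above (ID elided as in the source).
row = {"id": "ytc_…", "responsibility": "ai_itself",
       "reasoning": "consequentialist", "policy": "unclear",
       "emotion": "indifference"}
print(check_coding(row))  # prints []
```

A row that is missing a dimension, or uses an unseen label, would show up in the returned list.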
Raw LLM Response
[
{"id":"ytc_Ugxkeb7gi8WdlXWyVsd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxfyMxbZzJUmciDxd94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzU3gAwJ2wFCh1KYlR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugy0GqSe7xQf90QMJbJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx2WBfbF4CJcgokY3h4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyQnbv4a1toYAfaTUF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugw3JM1ihXrxkCRYwQ14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxeXNExXlseL9mEXsZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgycekM5jf2GqhiNu9x4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwQTJUc84MyCy0xTCl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"}
]
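The raw response is a JSON array of per-comment codings, so looking up the exact model output for a coded comment reduces to indexing that array by `id`. A minimal sketch, using two entries copied from the response above:

```python
import json

# Two entries taken verbatim from the raw LLM response shown above.
raw = '''[
 {"id":"ytc_UgxfyMxbZzJUmciDxd94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgzU3gAwJ2wFCh1KYlR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"}
]'''

# Index codings by comment ID so any coded comment can be looked up directly.
codings = {entry["id"]: entry for entry in json.loads(raw)}

coding = codings["ytc_UgxfyMxbZzJUmciDxd94AaABAg"]
print(coding["responsibility"], coding["emotion"])  # prints: ai_itself indifference
```

The second entry here matches the Coding Result table above: the table is just this JSON object rendered dimension by dimension.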