Raw LLM Responses
Inspect the exact model output for any coded comment: look it up by comment ID, or pick one of the random samples below.
- `ytc_UghvHsdNv…`: "This is not a good ai. An ai must coded empty then we should teach to ai how to …"
- `ytc_UgygnhFMf…`: "I understand concerns about AI development, but think that it is inevitable. Aut…"
- `ytc_Ugw_iTZFC…`: "Gonna be honest this could've been more in depth. AI isn't even close to procedu…"
- `ytc_UgwX3jnBG…`: "Please deactivate the automatic synch. The German dub is an insult to my langua…"
- `ytc_UgzMq5FDd…`: "Imagine going down in history as the first person to get gunned down by AI…"
- `rdc_jegf7vt`: "Great comment. A lot of people in this thread seem oblivious to the massive adva…"
- `ytc_UgyDyaGLO…`: "But if you use any other ai tools then all of your content will get detected lik…"
- `ytr_UgwQ5qZ1Q…`: "Funny thing is, this comes literally as the Gemini AI is blatantly racist agains…"
Comment
The first flaw in the series of logic is when Alex asked "do you *think* that if somebody says something they know to be false (...)"
ChatGPT isn't capable of "thought", any more than the predictive text on your old Nokia is capable of thought
Secondly, "did you just a moment ago say something you *knew* not to be true..."
It isn't capable of "knowing" anything at all in the traditional sense. Does my CD copy of Coldplay's "Parachutes" actually "know" what it contains? Of course not, because knowing something requires both cognition and memory recall, and ChatGPT has data storage recall but again no cognition.
Platform: youtube
Video: AI Moral Status
Timestamp: 2025-03-12T17:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
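Each coded dimension takes a value from a small closed set. A minimal validation sketch, using only the category values that appear in the responses shown on this page (the full codebook may define additional values):

```python
# Category values observed in the coded output on this page;
# the actual codebook may include more (an assumption, not the full spec).
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "company", "user", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"indifference", "outrage", "approval", "resignation", "fear"},
}

def validate(record: dict) -> list:
    """Return (dimension, bad_value) pairs for out-of-codebook values."""
    return [
        (dim, record.get(dim))
        for dim, allowed in ALLOWED.items()
        if record.get(dim) not in allowed
    ]

# The coding shown in the table above passes cleanly.
coding = {"responsibility": "ai_itself", "reasoning": "unclear",
          "policy": "unclear", "emotion": "indifference"}
print(validate(coding))  # [] -> valid
```

A check like this catches the occasional response where the model invents a label outside the codebook before the coding is stored.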
Raw LLM Response
[{"id":"ytc_UgxZ_ueLYOUaaSLnLDd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_UgyFYW1sLUtdLvjCOjl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_Ugydd-iw7tXAP-kdt8F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},{"id":"ytc_Ugx4B0MW9ZHbf7cWLll4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_UgxRktD0CueUhw3WhMB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},{"id":"ytc_UgwQOcMKeWV0bIu36Q14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},{"id":"ytc_Ugws1ehdaLN1lhygl1R4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},{"id":"ytc_UgzcNmjutdXkOOo8cm94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},{"id":"ytc_Ugy4YNwId_GlRbYKPV54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_UgyZ8X13BHsy0bhoy8Z4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}]