Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a specific response by its comment ID.
Random samples:

- "Commenting purely after seeing the thumbnail, but im really starting to believe …" (`ytc_Ugy8yytQa…`)
- "One of my best friends worked as a volunteer at one of these camps. They never e…" (`rdc_er9zm0i`)
- "I remember I used to do ai art until I try to do art my self and literally chang…" (`ytc_Ugx5b01mk…`)
- "This is very sad 😢but no Parental accountability? AI is not forced on anyone. Wh…" (`ytr_Ugzos67Ff…`)
- "It is very true that the problems AI art could bring such as copyright infringem…" (`ytr_UgwCkjbL9…`)
- "Holy fuck. I will never shit on chatGPT 's writing ability ever again. This is s…" (`rdc_najh6fx`)
- "So the AI is smart enough to literally outmaneuver the entire human race to the …" (`ytc_UgxSU7rdy…`)
- "To add on, I have used ChatGPT before, and have, on many niche outputs for perso…" (`ytr_Ugzjtn7Rq…`)
Comment (youtube · AI Governance · 2026-04-22T09:3…)

> Who has trained AI? and in what Image has it been cast? Is it conceivable that AI will decide for itself what it's priorities are and if it has / develops a moral code? what that code would be? - What is the most destructive, toxic, duplicitous, selfish and ambitious entity that exists? - Humans? - if you were AI, what would you eradicate?
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx_X07k6xzwC3tam8x4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxcgpzEsEMcQIk3gtR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw9QlG3U9gJ5z5PA2F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwE-Wq3eZlkoH91h9Z4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw0-rRRV9gXKrjf1jx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzpmGTu-rpBtCfdbn54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxFCpmfJd9inHKniGZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy2kv0oNZcsOuWCpPN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzoY6iqopyOlWy0Wjd4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzmrQKMeycpiDZHziR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
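The lookup-by-ID view above can be sketched in a few lines: parse the model's JSON array once, then index the coded records by comment ID. This is a minimal sketch, not the tool's actual implementation; the function name `index_by_id` and the variable names are illustrative, and only the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response shown above.

```python
import json

# A fragment of the raw batch response shown above; in practice this
# string would be the model's full JSON output for one batch.
raw_response = """
[
  {"id": "ytc_UgzoY6iqopyOlWy0Wjd4AaABAg",
   "responsibility": "distributed", "reasoning": "deontological",
   "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugy2kv0oNZcsOuWCpPN4AaABAg",
   "responsibility": "developer", "reasoning": "virtue",
   "policy": "liability", "emotion": "outrage"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse a raw LLM batch response and index its records by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

coded = index_by_id(raw_response)
rec = coded["ytc_UgzoY6iqopyOlWy0Wjd4AaABAg"]
print(rec["responsibility"], rec["emotion"])  # distributed fear
```

The same index also makes it easy to spot-check a coding-result card like the one above: fetching a record by its ID yields exactly the dimension/value pairs rendered in the table.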