Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- "Content writing has a long way to go. Nothing sounds more fake than AI generated…" (`ytr_UgwVAYwyE…`)
- "For that one guy that the ai said was 99.9% more likely to be involved in a shoo…" (`ytc_Ugyao6H1H…`)
- "@lolcat69would the olympics be as interesting if it were all scripted or ai gene…" (`ytr_UgyM6fl4_…`)
- "Too bad the CEO of openAI is a fucking scumbag. He made a deal with the pentagon…" (`ytc_Ugyna_AbD…`)
- "The alternative is a life worse than death. No one likes the modern world, obser…" (`ytr_UgzE0c8ak…`)
- "NO, NO , NO... What people call “AGI” right now is mostly marketing. LLMs and “a…" (`ytc_UgyAHji4y…`)
- ""AI art is bad because it uses other people's work as a basis" mfs when every ar…" (`ytc_UgzpK7J--…`)
- "How to know they are robot or people / Step 1:TELL YOUR PARENTS YOU WANNA GO FIND …" (`ytc_UgzNUuG50…`)
Comment
For the longest time, I have believed in two ideas:
The first is that AI will never become omnipotent or godlike. Because it is not truly alive, and therefore has no concept of what it really means to be a sentient being; it has no soul.
And the second is that I also believe AI will never ultimately take 100% complete control of humanity. Due to the reasoning that “I created you, I can destroy you.”
youtube · AI Responsibility · 2025-08-10T20:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgxkelPtOCFuII-wmTh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz3KvYLsvdSfANH2kl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwVmuChrgBFsKYvpKR4AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwUIxi8-g6q-ta4sYZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyXgOn9X7jETy7rTXR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgziFSzKg8okvLk_oP14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwBZG3YLA7IRzRLgKl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyVlp5Nq5oOajiZMyh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"liability","emotion":"approval"},
{"id":"ytc_UgwIZpsB8Nk1m9vJlgB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgySfs2vp2VvBmYtRCt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"}
]
```
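A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below assumes a codebook limited to the dimension values actually visible on this page (hypothetical — the real codebook may allow more values), and keeps only records whose every dimension is a known code:

```python
import json

# Allowed values per coding dimension, inferred from the records shown
# above (hypothetical codebook -- the real one may include more values).
CODEBOOK = {
    "responsibility": {"ai_itself", "user", "developer", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"liability", "none", "unclear"},
    "emotion": {"fear", "mixed", "approval", "outrage"},
}

def validate(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # A record is kept only if it has a string id and every
        # dimension holds a value from the codebook.
        ok = isinstance(rec.get("id"), str) and all(
            rec.get(dim) in allowed for dim, allowed in CODEBOOK.items()
        )
        if ok:
            valid.append(rec)
    return valid
```

Running `validate` over the raw response also makes "look up by comment ID" trivial: index the surviving records by their `id` field in a dict.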