Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytc_UgwBUhU0J…`: prompter is such a better term than ai "artist" it also sounds like a slur so bo…
- `ytc_UgwZ_KTxW…`: yeah turnitin’s stepping up, but GPTHuman AI still gets through makes the text s…
- `ytc_UgwsxDqWH…`: I think this is a fake video. Seems like AI to me. And it is not his verified ac…
- `ytr_UgzIA-pem…`: @crytlmeenyeah... I saw that. Ugh. And they have been pushing this narrative th…
- `ytc_UgxaP3i0Y…`: When will AI 'prove' that almost all of humanity must cease to exist to 'save mo…
- `ytr_Ugz2c4iQ3…`: AI could do all those things, it’s just a matter of can it develop emotional int…
- `ytr_UgzcT4Lm2…`: @ThePickelSurprise that doesn't answer the Pandora's box problem. Which is tha…
- `ytc_UgxH8THsm…`: This video seems mainly to be a platform for AI promoters to push their new mess…
Comment

> I think the scariest part is that people are using it for therapy period. AI does NOT know what is healthy human communication. Nor does it know what healthy coping mechanism are. Or how dysfunctional families work. Or how to help people with mental disorders. Or how to help a suicidal person. Or a drug addict. You just can't rely on an AI (an LLM, really) to help people grow into emotionally intelligent adults rather than just reinforcing their anxieties. Like my brother has anger issues. I don't doubt that an AI would just go "yeah your family does suck and deserves you yelling at them, good point." Or an incel that's stewing in self-hatred might have their misogyny reinforced by an AI and even feel encouraged to commit violence against women. There's no regulation. There's a reason you need a license to practice therapy.

youtube | AI Moral Status | 2025-10-12T21:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugxqkw1DGqmpo6NLA6x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxtPNmLcojuGxqhI7d4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgypE8CvuWUU1wwCmih4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyxKwlGvzVeq9Vl4_14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugye7IPOTNrfZQqNqQV4AaABAg","responsibility":"company","reasoning":"unclear","policy":"liability","emotion":"approval"},
  {"id":"ytc_Ugzvq30kpK-f7UnMSop4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugyfd5mK7EttiQXeotx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_Ugz5r7a0gXOGksN_dOR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwYX059dyZNaYQ3Cq54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx1llcSXbz9T6ubBeN4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"mixed"}
]
```
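A raw response like the one above is a JSON array of per-comment records, one per coded comment, keyed by `id`. A minimal sketch of how such a response could be parsed and validated follows; the `CODEBOOK` vocabularies are inferred from the values visible in this batch, not taken from the pipeline's actual codebook, and `parse_llm_response` is a hypothetical helper name.

```python
import json

# Allowed values per coding dimension, inferred from the records shown
# above -- an assumption, not the pipeline's authoritative codebook.
CODEBOOK = {
    "responsibility": {"ai_itself", "company", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference", "mixed"},
}

def parse_llm_response(raw: str) -> dict[str, dict]:
    """Parse a raw LLM response into a lookup table keyed by comment ID.

    Records with a missing ID or an out-of-vocabulary value in any
    dimension are dropped rather than silently kept.
    """
    coded = {}
    for record in json.loads(raw):
        cid = record.get("id", "")
        valid = all(record.get(dim) in allowed for dim, allowed in CODEBOOK.items())
        if cid and valid:
            coded[cid] = {dim: record[dim] for dim in CODEBOOK}
    return coded
```

With the batch indexed this way, the "look up by comment ID" view reduces to a dictionary access, e.g. `coded["ytc_UgyxKwlGvzVeq9Vl4_14AaABAg"]["policy"]`.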