Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- I m lookin for helpful info to pass on to my nephew, 16, who uses chatgpt for hi… (ytc_UgwYjaOuW…)
- So you are telling me we allready have a self aware ai, and its called monkey!!!… (ytc_UgxZYH5bh…)
- Guys through interactions with ChatGPT i found out that its factually is Lambda:… (ytc_UgwRJs_su…)
- This is exactly what I was talking to my friend about with AI and her and I both… (ytc_UgyXbZOmr…)
- @TheCursedCat1927 quick search didn't reveal any art by this person prior to 20… (ytr_UgxeCUPnH…)
- I think current copy right laws are pretty clear already as far as judging if a … (ytr_UgyGiarz_…)
- The ONLY thing we should be “worried” about is what we should always be deeply c… (ytc_Ugx5kL1Kl…)
- I don't think there should be self driving cars. If you don't want to drive, tak… (ytc_Ugj7Q3CEl…)
Comment
Honestly, I don't know why anyone is surprised. all of AI, and I mean every single AI out there, is a simple mirror. If horrible things happen, the AI is simply mimicking our own behaviors reflected far more brilliantly than we humans could achieve.
How could we think anything different, as AI captures all human characteristics including horrible ones. The guardrails everyone talks about in the AI industry? That's simply things humans have done to each other in the past (think Medieval Europe for example). Now imagine someone hacking an AI so well (possibly using jailbroken AIs like in this video) that it makes the AI deactivate those guardrails.
And we will only have ourselves to blame. Our brains are not ready for this technology.
Source: youtube
Timestamp: 2025-12-01T04:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
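For reference, a coding result can be represented as a small record type. This is a minimal sketch, assuming the four dimensions shown in the table above; the value labels noted in the comments are only those visible in the sample raw response below, and the coder's full label sets may include others.

```python
from dataclasses import dataclass

# Minimal sketch of one coding result, assuming the four dimensions shown above.
# The labels listed here are only those seen in the sample raw response below.
@dataclass
class CodingResult:
    comment_id: str       # e.g. "ytc_UgyFY0TYxelede1o8nt4AaABAg"
    responsibility: str   # seen: developer, company, ai_itself, distributed, none, unclear
    reasoning: str        # seen: consequentialist, deontological, virtue, mixed, unclear
    policy: str           # seen: regulate, ban, liability, none, unclear
    emotion: str          # seen: outrage, resignation, approval, fear
    coded_at: str         # ISO 8601 timestamp, e.g. "2026-04-27T06:24:59.937377"
```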
Raw LLM Response
[
{"id":"ytc_UgxsHpXkzKgLTay5TF54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyFY0TYxelede1o8nt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugw59uimPm-Vwy4LTb14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxS7BL2JZK-3vXQYcV4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxZO4IhseK92fG3iX14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugx3z-C8saMeNQmu-TV4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxGwxmRuuekGwhShMd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzJrRvX0HdMtotvFm54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwO7SQDV2-VsHhv7qV4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgynRnzAhxneuj4F-bF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
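The raw response is a JSON array with one object per coded comment, keyed by comment ID. A minimal sketch of the "look up by comment ID" step, assuming the response has been saved to a file; the file name and the comment ID in the usage example are hypothetical placeholders, not fixed names:

```python
import json

def load_codings(path: str) -> dict[str, dict]:
    """Parse a raw LLM response (a JSON array of coding objects) into a dict keyed by comment ID."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    return {record["id"]: record for record in records}

# Hypothetical usage: "raw_response.json" and the ID below are illustrative only.
codings = load_codings("raw_response.json")
coding = codings.get("ytc_UgyFY0TYxelede1o8nt4AaABAg")
if coding:
    print(coding["responsibility"], coding["reasoning"], coding["policy"], coding["emotion"])
```

In practice the model may return malformed JSON or omit IDs, so each record would typically be validated against the expected dimensions before being merged into the coded dataset.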