Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a coding by comment ID, or inspect one of the random samples below.
- `ytc_UgyvmjkHP…`: "people: NIGHTSHADE YOUR PHOTOS TOO, GLAZE YOUR PHOTOS. Instagram, Reddit, Facebo…"
- `ytc_Ugz27O5Bc…`: "A reflection at 3:20 or thereabouts. I think this is to be expected. Humans se…"
- `ytr_UgwyFOd_7…`: "@jeffcrume Yeah but it's not always accurate since AI models aren't foul proof.…"
- `ytc_UgxN5kABG…`: "And holy shit AI powered helpdesk bots are so goddamn ass. When I seek help, I'v…"
- `ytc_UgxVFiZvz…`: "For anyone that wants to understand why UBI seems logical: Universal Basic Inco…"
- `ytc_Ugz4msJnE…`: "I was super bored one time and went down the AI rabbit hole. It quickly got dar…"
- `ytr_Ugxbai5cb…`: "It's important to approach the study of humans with empathy and respect. Underst…"
- `ytc_Ugy7VqG1h…`: "Bro is just a failed artist using ai to compensate for his lack of artistic skil…"
Comment
If you believe this, I've got swampland for you to buy. Look folks, let's be clear, AI will ONLY do what it was "programmed" to do. That means there is ZERO chance it can "overtake" us or "hurt" us UNLESS someone is providing not only that programming but rules of engagement. AI doesn't "respond" like humans, it has protocols to which it adheres. So if there is any chance for some bad AI Actors than it was programmed to do just that and likely was given the command to begin.
Did you understand that? It was GIVEN THE COMMAND TO COMMENCE that bad activity.
So stop with the predictive programming crap. Know when AI does something, that is the time for you to blame your government and act accordingly in response.
| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Governance |
| Posted | 2025-07-15T17:0… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwB76rPksp_uOBybfx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzxOaORPAr_VTsoss14AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyiqP_RpXu5hV4n3XB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzKzIa7jSAKG6Cnq8R4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxMEY9iX-t3xUEagfB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgzbBH0eRWhhkbQScSJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
  {"id":"ytc_Ugw5a62sLYR5Y24LW5V4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugx0HMpxU1deKcSeG3d4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugxp6VDXFm2o3OcX4vF4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxJSe0g8ngq7u7LbK94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
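The raw response is a JSON array in which each record carries a comment `id` plus the four coded dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a response could be parsed and indexed by comment ID, so that a coding can be looked up the way the page does above. The helper name `index_codings` and the one-record sample array are illustrative, not part of the tool:

```python
import json

# Illustrative subset of a raw LLM response: one coding record.
raw = """[
  {"id": "ytc_UgzbBH0eRWhhkbQScSJ4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability",
   "emotion": "indifference"}
]"""

# Every record must carry the id and all four coded dimensions.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw_response: str) -> dict:
    """Parse a raw LLM response and return a dict keyed by comment ID,
    rejecting records that are missing any coded dimension."""
    by_id = {}
    for rec in json.loads(raw_response):
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id', '?')} missing: {missing}")
        by_id[rec["id"]] = rec
    return by_id

codings = index_codings(raw)
print(codings["ytc_UgzbBH0eRWhhkbQScSJ4AaABAg"]["policy"])  # liability
```

A lookup on the indexed dict then reproduces the "Coding Result" table for any comment ID; malformed records (a model omitting a dimension) fail loudly at parse time rather than surfacing as blank table cells.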