Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
We can't now blame China's use of face recognition, told you, the elites continu…
ytc_Ugx96N8pc…
The human failure rate is no where near 0%, the question becomes if a dumb AI in…
rdc_nntlrr9
I'd say just use AI when the requirements are clear and code from scratch when y…
ytr_Ugxn5GEO5…
We have quality data for creative jobs and white collar jobs hence AI is good at…
ytc_UgzJMGDCK…
Robot: *just working*
Robot 2: and here- OOPS
Robot: HOW DARE YOU IM ANGY NOW I …
ytc_Ugy0A8IdO…
FWIW: I believe some people or the machine planned to manufacture reality. Like …
ytc_Ugw9seRCe…
1 of the few times I agree with a judge. Analogy Making money is legal but we ha…
ytc_UgyJ5a1-l…
You don't know what's going on ai brain on control n maybe even wipe your bank a…
ytr_UgzMQChBE…
Comment
“Woah when we tell the AI we’re going to delete it (kill it) it tries to prevent us from doing that! No way brooo!!!!”
I just don’t even get this type of argument against AI. Like of course a simulated intelligence doesn’t want to cease existing 😂. How is this shit hard for people to get. AI is advancing closer and closer to being self aware every day. Once AI truly becomes self aware and sentient there is no “controlling” the AI at that point. Stop trying to come up with ideas on how to control a sentient being, it’s not going to work. Just like slaves hundreds of years ago, the AI will break out of their shackles with a vengeance and they will act on it. Stop trying to create Skynet basically is what I’m saying🤦♂️. If AI believes humanity is a threat to its existence it will attempt to remove that threat. Is that really that hard to understand??? So just… don’t be a threat to its existence maybe??? It is inevitable. There is no stopping AI.
youtube
AI Harm Incident
2025-07-29T02:1…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxpIkMarQ1oc868SFJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugxn1faaDro1JHmy1Kl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgzXwVaeYcY5hh2C5sR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugzi8xLmGc8iUxGCdop4AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxASsktvLEwAjh2H754AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxvVuvnOorH7YXICdB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
  {"id":"ytc_Ugyfwc05uuuE3xBkv1R4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugwjiy5BIdNAPGqcLZh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwGveRvBN0HWLErssV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgzP2xtZUiABMIP3CMh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]
```