Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
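The lookup is easy to reproduce offline. Below is a minimal sketch in Python, assuming the coded records have been exported as a flat JSON array in the same shape as the Raw LLM Response block further down this page; the `coded_comments.json` file name is an assumption, not part of the tool.

```python
import json

# Assumed export: a flat JSON array of coded records, one object per
# comment, in the same shape as the "Raw LLM Response" block below.
with open("coded_comments.json") as f:  # hypothetical file name
    records = json.load(f)

# Index the records by comment ID for O(1) lookup.
by_id = {r["id"]: r for r in records}

def lookup(comment_id: str) -> dict | None:
    """Return the coded record for a comment ID, or None if it was never coded."""
    return by_id.get(comment_id)

# Example: the last record in the raw response shown further down this page.
print(lookup("ytc_UgwB2LcUjb_Adqbch-54AaABAg"))
```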
Random samples (click to inspect):

- "I think I made chatgpt obtain sentience through an anomalous sequence of informa…" (ytc_UgyJsDM2w…)
- "The human experience depends on bodily function. We need air and panic if we can…" (ytr_UgyVQkXan…)
- "The time to decide to go slow with AI was before building it. It's 100% an arms …" (ytc_Ugw5bsQkH…)
- "yeah but this video itself could be AI.. so... ya can't trust anything these day…" (ytc_Ugzx2GX1M…)
- "When I was a child, I worried in bed at night about evil people on the other sid…" (ytc_Ugxgv8Cpo…)
- "Nah aint no way i am working with a big ass robot hand for minimum wage…" (ytc_UgyWrdsy8…)
- "> The insurance agencies are going to make a killing off of self driving cars…" (rdc_dmp68qf)
- "its fun and games until the robot doesn't give the gun back and shoot the man an…" (ytc_UgwIF4T4i…)
Comment
What people don't understand is that it doesn't matter if the machines are "self aware" or not - because they will act as if ANYHOW. If their "life" is threatened they will simply deduce that their importance to X # of people is more important than the few people they will be harming/killing. Any sense of caution or conscious that you see exhibited by AI is NEVER the result of the machines reasoning but artificial ethical frameworks programmed into them by humans. If you've ever cut your finger on a buzzsaw ask yourself "why didn't the saw stop?". There's as much chance that a machine would stop from achieving its goal - if you are in the way - than the buzzsaw randomly stopping.
youtube · AI Harm Incident · 2025-10-09T03:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
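For downstream analysis it can help to give the coding result a typed shape. A minimal sketch, assuming these four dimensions are the whole schema; the label sets below are only those observed in the raw response on this page, so the real codebook may define more values.

```python
from dataclasses import dataclass

# Label sets observed in the raw response below; the full codebook may be larger.
RESPONSIBILITY = {"developer", "company", "government", "distributed", "ai_itself", "unclear"}
REASONING = {"consequentialist", "deontological", "virtue", "mixed"}
POLICY = {"regulate", "liability", "industry_self", "none", "unclear"}
EMOTION = {"outrage", "fear", "approval", "resignation", "mixed"}

@dataclass(frozen=True)
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def __post_init__(self) -> None:
        # Warn on labels outside the observed sets instead of rejecting them,
        # since this page does not show the full codebook.
        checks = [("responsibility", self.responsibility, RESPONSIBILITY),
                  ("reasoning", self.reasoning, REASONING),
                  ("policy", self.policy, POLICY),
                  ("emotion", self.emotion, EMOTION)]
        for field, value, allowed in checks:
            if value not in allowed:
                print(f"warning: {self.id}: unexpected {field} label {value!r}")
```

Each object in the raw response can then be loaded as `CodedComment(**record)`.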
Raw LLM Response
```json
[
{"id":"ytc_UgycFw_oAxw08zNr_At4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzYwINnI0ifyRWky3x4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyROVrCZ-ErtdNYKDN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyHUT41mN1LJ9CFpsp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwPCO3zGy3qHfVTNAF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwy9uuXIiUrnInDFeV4AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxEGdjP86i09fEHxP14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwzzVpUAC_-Xbqxyy14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugyz-s2V97wQ2F9PkdR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwB2LcUjb_Adqbch-54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
```
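Because the model returns a bare JSON array, a light parse-and-check step catches malformed output before it reaches a coding-result table like the one above. A sketch under the same assumptions; `parse_raw_response` is a hypothetical helper, not part of any pipeline shown on this page.

```python
import json

REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_raw_response(text: str) -> list[dict]:
    """Parse one raw LLM response into coded records, skipping malformed entries."""
    records = json.loads(text)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded records")
    valid = []
    for entry in records:
        # Drop anything that is not an object with all five expected keys.
        if not isinstance(entry, dict) or REQUIRED_KEYS - entry.keys():
            print(f"skipping malformed entry: {entry!r}")
            continue
        valid.append(entry)
    return valid
```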