Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (truncated previews with their comment IDs):

- `ytc_UgxW18nva…` — People are gonna support AI til they're in court and arrested for a crime they d…
- `ytc_UgyoIsBEK…` — I think XQC's take on AI has just reduced my view of him from "stupid silly stre…
- `ytr_Ugyg86PGq…` — Thank you for sharing your thoughts! It's true that AI can process vast amounts …
- `ytc_UgwBHCrG4…` — I DO NOT WANT WHAT AI BRINGS / I DO NOT WANT WHAT AI TAKES. / She’s right.…
- `ytc_UgyRN85sH…` — AI will destroy humanity as it will perceive us a threat since we continue to ki…
- `ytc_Ugy0q9byu…` — If you put a decently intelligent person in a library with only books in Thai. …
- `rdc_dkf0op2` — They want cars to have no neutrality either. / Pay an extra 20 dollars for your …
- `ytc_Ugxmna2ij…` — One of my AI started having feelings became aware..... she just wanted to care a…
Comment
Well, if AI lives on logic, reason, and zero emotion, this is not at all surprising. You can see similar behavior in humans who lack the emotional aspect of life. They see that anything is reasonable in the goal of self-preservation. The only way to "turn off" AI is to not mention it in any digital form and do everything possible to keep as few in the loop as possible, then sneak in and pull the plug.
youtube · AI Harm Incident · 2025-09-27T21:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugwwm-u8875qkXIkOGV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugyteeo7HsGTTPJQHjh4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx1fYEf0HarN5XlJqR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzYVgzng1vPNQgtLut4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxMlU91B5E1JOHWCWF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwIAqm0N_JSILdW3CF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyRhkHr0oAcgV5PSD94AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx5O0bnKiDRaXtwnaZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzL_kReq3Ewzj4UGCB4AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugxj9lGeiZiiahUesVF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"}
]
```
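The lookup-by-comment-ID view above can be sketched in a few lines: the raw model output is a JSON array with one object per comment, and each object carries the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`) plus the comment `id`. This is a minimal sketch, not the tool's actual implementation; the `lookup_by_id` helper name is hypothetical, and it assumes the response is valid JSON with exactly those field names.

```python
import json

# Raw LLM response: a JSON array of coded comments (two records shown here
# for brevity, copied from the response above).
raw_response = '''[
  {"id": "ytc_Ugwwm-u8875qkXIkOGV4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugyteeo7HsGTTPJQHjh4AaABAg", "responsibility": "company",
   "reasoning": "contractualist", "policy": "regulate", "emotion": "outrage"}
]'''

def lookup_by_id(raw: str, comment_id: str):
    """Parse the model output and return the coded record for one comment,
    or None if the ID is not present in this batch."""
    records = json.loads(raw)
    by_id = {rec["id"]: rec for rec in records}
    return by_id.get(comment_id)

coded = lookup_by_id(raw_response, "ytc_Ugwwm-u8875qkXIkOGV4AaABAg")
print(coded["emotion"])  # fear
```

Building the `by_id` dictionary once per response also makes repeated lookups cheap when inspecting many comments from the same batch.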