Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
Step 1: Replace humans with ai
Step 2: Make fun of humans with ai
Step 3: No hum…
ytc_UgzfqYHhG…
@hunterkauffman9400 If they are ethically trained np. But remember, open source =…
ytr_Ugwryu_mE…
Are there really deep learning models that implement a person's name as a factor…
ytc_Ugx17723E…
i thinki AI would be very ok if it would point out the artists it would be mostl…
ytc_UgykvW2o3…
this video didn't soothe my fear of AI one bit, just made it worse.. thanks!…
ytc_UgzFi56sf…
This is so fricking cool. I always had a dream to work at an animation studio, b…
ytc_UgwyJ2LZM…
I believe Ai will become as essential and integrated into our lives as electrici…
ytc_UgyyGTKg5…
I love how the guest on your show are always anti-Trump. They are not anti-Chine…
ytc_UgzSMuCuI…
Comment
The drive towards AGI is the most horrific phenomenon of contemporary technological change inasmuch as it promises what cultural historian William Irwin Thompson called enantiodromia—that event in which the press to do good leads to the production of its opposite, evil. Current AI companies are drunk with the madness of their calculative reasoning going amok without human control. Before they realize what they are doing it will be too late. Beware Goya’s warning: the dream of reason produces monsters.
| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Governance |
| Posted | 2026-03-27T12:3… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxUZt2fZ9Rd1FIe_nV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgybUwWRHNDwv_lIouB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyceVaFeecBUcubkJp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyiPIrKl0NwoLjPBTB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxCwSALBOK-O7k3Gwp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzboDfMadk67yFexrt4AaABAg","responsibility":"government","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgzNemDzL_6wT-wt6fp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwAVlDLoONpTo8arY54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxugldY5MFypj2pIuB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwKR6WeK1JguFdibxt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
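A raw response like the one above has to be parsed and sanity-checked before its records can populate the per-comment coding tables. Below is a minimal sketch of that step, assuming the label sets seen in this response are the allowed values for each dimension (the real codebook may define more); `validate_codings` and `ALLOWED` are hypothetical names, not part of any shown pipeline.

```python
import json

# Hypothetical allowed labels per dimension, inferred only from the
# values that appear in the sample response above.
ALLOWED = {
    "responsibility": {"none", "company", "government", "developer", "ai_itself"},
    "reasoning": {"mixed", "deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"none", "regulate", "ban", "liability"},
    "emotion": {"approval", "outrage", "resignation", "fear", "mixed", "indifference"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject any record with an unknown label."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim} value {rec.get(dim)!r}")
    return records

# One record from the sample response, used as a smoke test.
raw = ('[{"id":"ytc_UgybUwWRHNDwv_lIouB4AaABAg","responsibility":"company",'
       '"reasoning":"deontological","policy":"regulate","emotion":"outrage"}]')
codings = validate_codings(raw)
print(codings[0]["policy"])  # regulate
```

Failing loudly on an out-of-vocabulary label is deliberate: it surfaces model drift or prompt regressions at ingestion time instead of letting malformed codes reach the dashboard.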