Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment by its ID, or pick one of the random samples below.
Random samples — click to inspect

- "You guys are insane, you think programming and other such work is somehow easy b…" (ytc_Ugyruepog…)
- "Any application of Ai is not of God. The fact that some pastors and preachers ev…" (ytc_UgyoXkOR4…)
- "I think this downplays the need to protect against ai double agents i.e. stuxnet…" (ytc_Ugy2ZdaY8…)
- "Trick question!! They are all ai!! Because ai is me and i chose ai to overtake y…" (ytc_Ugz9Agkdv…)
- "Training ai to use pattern recognition and it became antisemitic, hmmmm what doe…" (ytc_Ugz-K8lNl…)
- "_Its called FSD Full Self Driving_ This video is exclusively about Autopilot, no…" (ytr_UgxURpG2B…)
- "Of course the founder of an AI company is the one telling you that Ai isn't at f…" (ytc_Ugx1LXNRn…)
- "rubbish to fool dumb people..... AI is just a huge database.... it does not th…" (ytc_Ugzirxw9a…)
Comment
I'm ready for UBI LARP world. Everybody just does a random job whenever they feel like it for fun. Wanna be a cop for no reason? LARP Cop with AI robot-cops assisting you and keep you alive. LARP waiter. Wanna try it for 15 minutes and get bored and clock out? Not interested in job? LARP as a knight in the medieval era. People will just get to have fun and life will become a video game.
I do think we need a Mental Health Renaissance in order for everything to work. That's more important in my opinion than building the tech for AI safety. People need to be well. We have to fix the intentions of average people so less evil-doing feels tempting to do. Maybe sounds like a more impossible task to most people, than making AI itself totally safe.
youtube
AI Governance
2025-09-05T03:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugw0vpkybImPkj7x0-V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgwcSaB4I_322CJkCtd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugyo9lSTXO5cga4BHEd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugxb3mO5U8fOTjtn9ih4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgwdDaBmmbca-K64_ip4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyNGcWUla4cLlla1zF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzjRUQBgfSRlJmYl0t4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgwY-tdlArAEjtRx2nN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgySiM3pWatNX74UeVh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgydC2Qtw0SthRfaO7p4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
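The raw response above is a JSON array with one object per comment, each carrying the four coding dimensions shown in the result table. A minimal parsing-and-validation sketch (the allowed value sets below are inferred from the responses visible on this page, not from a published schema, and the comment ID in the usage example is hypothetical):

```python
import json

# Allowed values per dimension, inferred from the output shown on this page;
# the real coding scheme may contain values not observed here.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "government", "developer", "company", "distributed"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"none", "unclear", "regulate", "liability", "ban"},
    "emotion": {"indifference", "fear", "resignation", "mixed", "outrage", "approval"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose values are all recognized."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in values for dim, values in ALLOWED.items())
    ]

# Hypothetical single-row response for illustration.
raw = '[{"id":"ytc_example","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}]'
print(parse_codes(raw))  # the one row, since every dimension value is recognized
```

Rows with out-of-vocabulary values are dropped rather than repaired, which keeps the downstream table ("Coding Result") free of values the scheme does not define.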