Raw LLM Responses
Inspect the exact model output for any coded comment. You can look up a response directly by comment ID, or inspect one of the random samples below.
- "AI is not evil, nor is it a bad thing / Like guns, it is those who use them selfi…" (`ytc_UgwK_4NdU…`)
- "what if we all made a petition and sued together all art AI companies and inform…" (`ytc_Ugx1YsbMJ…`)
- "I don't care how an artist makes their art, as long as they make it themselves. …" (`ytc_Ugz61Ck7v…`)
- "AI doesn't pay taxes, these data centers don't pay any taxes, the billionaires g…" (`ytc_UgzGM34w9…`)
- "I think Alex Domash is going to be in for a rude awakening. He's looking at the…" (`ytc_Ugxiu8v-9…`)
- "Ideally, the whole system would change. Every car would be automated and each lo…" (`ytr_UgiFjGotq…`)
- "I blame people not being specific, just saying "AI" which by its nature encompas…" (`ytr_UgwPKRpGR…`)
- "First, STOP saying "AI". / This technology is NOT "AI". / This technology is m…" (`ytc_UgxNX6m8X…`)
Comment (youtube · AI Governance · 2026-04-23T06:4… · ♥ 1)

> The call center people will rise to the top of the queue- sounds good. They'll get the calls when AI gives up. From my experience with vibe coding (using AI to interactively write code), AI doesn't give up. It just keeps coming up with new misinformation and regurgitating the old. Somebody (at the current state of the art) would have to be listening and intervene. AI has no way to know when it has failed.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_Ugzv_sUeIkIpB6R3Wid4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxWwQtWApFtH99yUVl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxiokLHVRIQkGY4EhF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy9ij4biNZBHlbZpMR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwEKci8xKKmTyClAIt4AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz3UIrDAGck8aWNV3p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzXQASHevScr9RXtZR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxQc0f_fVvNnPleqfd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxX3OtmoYg2y3jDy8R4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzjv8DJzLmOLKNEMyJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"outrage"}]
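Each raw response is a JSON array of code objects, one per comment, keyed by `id` and carrying the same four dimensions shown in the coding-result table. A minimal sketch of the "look up by comment ID" step, assuming only that structure (the helper name `index_by_comment_id` is hypothetical, and the two sample rows are copied from the response above):

```python
import json

# Two rows copied from the raw LLM response above; a real response
# is a longer JSON array in exactly this shape.
raw_response = """[
  {"id": "ytc_Ugzv_sUeIkIpB6R3Wid4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxWwQtWApFtH99yUVl4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]"""

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw batch response and index the coded dimensions by comment ID."""
    rows = json.loads(raw)
    return {row["id"]: {k: v for k, v in row.items() if k != "id"} for row in rows}

lookup = index_by_comment_id(raw_response)
print(lookup["ytc_UgxWwQtWApFtH99yUVl4AaABAg"])
# {'responsibility': 'ai_itself', 'reasoning': 'consequentialist',
#  'policy': 'none', 'emotion': 'indifference'}
```

Indexing once up front makes repeated ID lookups O(1), which matters when cross-checking many coded comments against their raw model output.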