Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):

- "I love AI. It's the best for the future. I hope AI will one day rule the world.…" (`ytc_UgxsadoKo…`)
- "Kind of hilarious that the people making the loudest stink about this generate t…" (`ytc_UgzImPRNJ…`)
- "Agree. A single (not so intelligent) con man can manipulate millions into believ…" (`ytr_Ugx_It3w7…`)
- "Sorry to drop a Marxism into this conversation but somehow no one mentions the e…" (`ytc_UgynQOhkw…`)
- "I fail to see how learning to code is going to help if AI will be coding so much…" (`ytr_UgzKPFXgg…`)
- "No, we're not screwed. As much as AI can help you code, it still can make mistak…" (`ytc_UgwqrYD0x…`)
- "If AI is as scary as Musk made it out to be, all the 80's insane crazies mostly …" (`ytc_UgxHz2WIJ…`)
- "You just know that at some point in time an Officer is going to run across a 'So…" (`ytc_Ugwc146bO…`)
Comment

> If you believe this shit, you're a moron. I've used all the "ai" it can't write code any better than any software engineer, 2ndly even developers using it find it's more of a faster "google" nothing more, it's an upgrade to using stack overflow that's it. Cybersecurity risks will only increase. Humans are creative, AI is NOT creative. You can write any sort of code or design and someone else cracks it to steal or break in. It's great at pattern matching that's it. Migrate to cybersecurity. Computer science majors coming out of college right now however are WEAKER than they were 10-20 years ago, they can't even answer basic fucking questions anymore.

youtube · AI Governance · 2025-07-04T21:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzZyhQCtm6Q7Bc9Xe14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwRqrvEiQPKTmDTx8V4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy8L20q-h7GcaWoI8d4AaABAg","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwtmS3DOL7QkDnVnEN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwz75l8CXCrIlmMxpx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyHwBSWnXrn8lwhsQd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyMqT7mfY_ELH1UL614AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwjuZZFAndwlzGNjXJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzY2g9u5Wvb80isEbR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx7x7XiJnCdne6IVs54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
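The raw response is a JSON array of per-comment coding records, each carrying the five dimensions shown in the table above (`id`, `responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of the "look up by comment ID" step, assuming the model output is parsed directly with Python's `json` module (variable and function names here are illustrative, and the sample is truncated to two records):

```python
import json

# Raw model output as returned by the coding call (shortened to two records).
raw_response = """
[
  {"id": "ytc_UgwjuZZFAndwlzGNjXJ4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzZyhQCtm6Q7Bc9Xe14AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse the model's JSON array and key each coding record by comment ID."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

codings = index_by_id(raw_response)

# Look up the coding for one comment by its ID.
coding = codings["ytc_UgwjuZZFAndwlzGNjXJ4AaABAg"]
print(coding["responsibility"], coding["policy"])  # developer liability
```

In practice the model's output may include extra text around the JSON array or be malformed, so a production version would wrap `json.loads` in error handling rather than assume a clean parse.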