Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- it's more depressing that people consume this knowing it's ai geerated rather th… (ytc_UgwvBUJGC…)
- AI will start a civil war. It will kill our jobs, kill our production, our purp… (ytc_Ugx0HfxFx…)
- I wasn't born with a natural talent for art but I'm now at least half decent at … (ytc_UgzkvZGOe…)
- It's the "G" in GPT that induces the hallucination effect, as the transformer is… (ytc_UgyT596gJ…)
- As an AI programmer who will make the future of chips programmable, I believe we… (ytc_UgxVERRJc…)
- Between this video and the one that prompted the responses, you make fantastic p… (ytc_UgwOjGb4P…)
- People think AI-driven tools will replace project managers, but project manageme… (ytc_UgzqKxrWw…)
- Considering that US soldiers have no problem to murder helpless people, when ord… (rdc_nt94840)
Comment
> Oops… recent data from industries that have adopted AI…. It hasn’t increased productivity because it cannot be trusted to be 100% accurate. And, oops you have to have a person verify it all cause you don’t know where the 10% of hallucinations will be located. LLM which is the only AI working model has a baked in bias for fluency over accuracy. Think about that…it will lie if the lie is more esthetically pleasing. You do not want that doing your filing! Or interpreting your latest medical procedure. LLM is being quietly removed from most applications because it doesn’t have the reliability required for most jobs. But..creating pictures? Great. Writing decent prose? Great. Handling facts…not so much

youtube · AI Governance · 2025-10-03T22:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugy0-y-hREOS9YQLiaN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw8HLJqLCWLI9STEZN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzGLAuHy-JBVmEECc94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxxVI0NtqaA59MikKZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugy3JSFzKB9oMvK4ePd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugz2ZCaKC9Ma8rOmrVt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxUX0YfgniL1Pvz17N4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgykTdQkwlw7IEFKbC14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzlTm6ntMyvqiHBg554AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxW8S473hmWW_IBL_B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
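The raw response is a JSON array of one coding record per comment. A minimal sketch of how such a response could be parsed and validated before the codes are stored (the allowed values below are inferred from the visible samples and are an assumption; the actual codebook may include values not shown here):

```python
import json

# Allowed codes per dimension, inferred from the sample output above.
# The full codebook may contain additional values; this set is an assumption.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "resignation", "unclear"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and reject any out-of-codebook value."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim} value {value!r}")
    return records

# Hypothetical one-record response, in the same shape as the output above.
raw = ('[{"id":"ytc_x","responsibility":"company","reasoning":"deontological",'
       '"policy":"regulate","emotion":"outrage"}]')
print(len(parse_coding_response(raw)))  # 1
```

Validating against a fixed code set catches the common failure mode where the model invents a label outside the codebook; such records can then be re-queued rather than silently stored.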