Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "The problem is that, while I can't compete on both quality and quantity with Ai,…" (ytc_UgxTaVCNU…)
- "This is crazy! Soon human’s will be considered obsolete by AI. We’re just creati…" (ytc_UgzTuc8uY…)
- "AI bubble will burst first. With open source models no one will pay for the ones…" (ytc_UgzAVc9yK…)
- "Atleast for doctors, AI will only be a side kick to help doctor to work better. …" (ytc_UgzFOuHG6…)
- "Call me what you want to I frankly don't give 99 fucks or red balloons, but I re…" (ytc_Ugxx6IJBX…)
- "I've seen hospitals encounter cyberattacks, which cause life-supporting treatmen…" (ytc_Ugw2h8Spn…)
- "AI is biased because a lot of people are. Not everyone, as we can see from this …" (ytc_UgykND--a…)
- "Or I guess we would say it's pretty neat if there wasn't the second problem of a…" (ytr_UgyPMb0Ok…)
Comment
The request of CEOs and other leaders of large corporations to consider maintaining the same staff while incorporating AI that is as productive if not more than the staff maintained is just delusional. IT WILL NOT HAPPEN. The bottom line is always more important until it’s not. Prayer is the only answer I have for this mess.
youtube · AI Governance · 2026-03-25T10:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id": "ytc_UgwIqNU_TMR537ePFTZ4AaABAg", "responsibility": "society", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwvSMYiYT_IuovoE314AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwY-jQW29BYypAf75F4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "industry_self", "emotion": "resignation"},
  {"id": "ytc_Ugz5TFxae2j2uFRv29R4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgynE9iH1O3nO18qyCt4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy09ItQRfK9BBvo-8p4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugxt_5jb-i6PwtrWlz14AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyX_x6pmFaQoqqHYGt4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzK0YstJ0vgqcNzZEN4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyOoLGrCvVUx8LfmrJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]
```
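A raw response like the one above can be parsed and sanity-checked before the codes are written back to the database. The sketch below is a minimal example, assuming the controlled vocabularies inferred from the values visible in this sample (the real code book may allow additional values), and that comment IDs begin with `ytc_` or `ytr_` as seen here.

```python
import json

# Allowed values per coding dimension, inferred from this page's sample
# output (assumption -- the actual code book may define more values).
VOCAB = {
    "responsibility": {"company", "developer", "society", "distributed", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "approval", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Comment IDs in this dataset start with ytc_ (comments) or ytr_ (replies).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            continue
        # Every dimension must carry a value from its controlled vocabulary.
        if all(rec.get(dim) in allowed for dim, allowed in VOCAB.items()):
            valid.append(rec)
    return valid

raw = '[{"id":"ytc_example","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}]'
print(len(validate_batch(raw)))  # → 1
```

Rejecting rather than repairing out-of-vocabulary codes keeps the downstream counts trustworthy; flagged records can be re-queued for another model pass or manual coding.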