Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "The amount of people that actually believe that these LLMs are at the point of b…" (rdc_mxfrkq3)
- "Yes and no - the explanation you give for how LLMs work is misleading and inaccu…" (ytc_UgyIfOskj…)
- "Just to clarify this - no decent software engineer thinks this. Because any soft…" (ytc_Ugx7aaZCF…)
- "Unless AI becomes so dirt cheap to produce and to exploit (which it will be) tha…" (ytr_Ugw2T6bQM…)
- "My take, an Ai "Artist" isn't an artist, they are a summarised scene writer that…" (ytc_Ugy_fM8FV…)
- "Honestly, the worst thing about this whole argument is AI "artists" playing vict…" (ytc_UgwX_eZsi…)
- "i like those kind of roast of AI , not the uusal borring ''Ai bad'' talk…" (ytc_UgxrSf9jy…)
- "It so they don't have to take responsibility. People can't sue AI. AI can't be h…" (ytc_UgzLqQRYQ…)
Comment

> You are tremendously overestimating capabilities of current AI, especially its reliability. Its widely known fact, AI is not capable of any fact checking, does not complete jobs till the end because with current architecture of LLMs it is simply not possible to reliably guarantee anything. AI at this point does not replace even junior software engineer. But still it is an usefull tool to enhance certain workflows.

youtube · AI Governance · 2025-09-03T03:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
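The four coded dimensions take values from a closed codebook. A minimal validation sketch, assuming the codebook contains exactly the values observed in this session's output (the real codebook may define more):

```python
# Allowed values per dimension, as observed in this session's raw
# responses; this set is an assumption, not the full codebook.
CODEBOOK = {
    "responsibility": {"developer", "user", "government", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "industry_self", "unclear"},
    "emotion": {"outrage", "fear", "approval", "resignation",
                "indifference", "mixed", "unclear"},
}

def validate(record: dict) -> list[str]:
    """Return the dimensions whose value falls outside the codebook."""
    return [dim for dim, allowed in CODEBOOK.items()
            if record.get(dim) not in allowed]

# The record shown in the table above passes cleanly.
rec = {"responsibility": "developer", "reasoning": "consequentialist",
       "policy": "industry_self", "emotion": "indifference"}
print(validate(rec))  # → []
```

A hallucinated or off-codebook label from the model shows up as a non-empty list, which makes bad records easy to flag before they enter the dataset.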
Raw LLM Response
```json
[
{"id":"ytc_Ugw_6vorjHdciMvuOo94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugyoag5S0730trMSBtt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxX5PHtA-RjjQuz1VV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxVQwE1AlbKoXgCQPp4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwX8HpldYAUyBheF2x4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwL0iro5SIrrDtYdep4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"ytc_UgzG0wRV5aHwd6QL4hV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwZ2Y0dFRixIv_1z1J4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_Ugyi3q9ocNY_xJj95Oh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgydoyW9cc4xzUFJfxN4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
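The raw response is a JSON array with one record per comment, so looking up the coding for a given comment ID is a parse plus a dictionary index. A minimal sketch (variable names are illustrative; the record shape matches the array above):

```python
import json

# Raw model output: a JSON array of coded records, one per comment.
raw = """[
  {"id": "ytc_UgwZ2Y0dFRixIv_1z1J4AaABAg",
   "responsibility": "developer",
   "reasoning": "consequentialist",
   "policy": "industry_self",
   "emotion": "indifference"}
]"""

records = json.loads(raw)

# Index the records by comment ID for constant-time lookup.
by_id = {r["id"]: r for r in records}

rec = by_id["ytc_UgwZ2Y0dFRixIv_1z1J4AaABAg"]
print(rec["policy"])  # → industry_self
```

Since `json.loads` raises `json.JSONDecodeError` on malformed output, wrapping the parse in a try/except is a cheap guard against a model response that is not valid JSON.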