Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "If a level 2 system is not used properly, it is the driver at fault. Of course a…" (ytr_Ugx_tLvU6…)
- "I was H G Wells, Jules Verne, Star Trek, Clark. We have AI a helpful tool , to m…" (ytc_UgzkhcAlH…)
- "He keeps referencing the AI in an outdated model of people working in exchange f…" (ytc_UgyP441GI…)
- "Like Ai being a tool for art Is fine but when stealing others people work claimi…" (ytc_UgwST6-pK…)
- "Not to mention AI recommended this video on the main page of YouTube today. Plus…" (ytc_UgyvUUBrw…)
- "People saying computers are "inspired" are anthropomorphizing computers. They ar…" (ytc_UgwoU2M9w…)
- "I consider its a tool :b i use ai art as a way to visulise a prompt and then I t…" (ytc_UgxChkBas…)
- "I beleive just coding was never a safe skill, it was always how the systems work…" (ytc_Ugy1A47G2…)
Comment
I was skeptical, and followed those who were skeptical of yudkowsky going back a few years, but I think I’m kinda over it. I think the problems he’s raising the alarm about are entirely plausible, especially as we try to go beyond LLMs to building agents that execute code and have privileged access to systems (which feels frankly insane to me).
youtube
AI Governance
2025-10-27T21:2…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwShpY7vnGJ6FN3abF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwqdDQQ_vI7ZNBzjMJ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwUY_lRVS5ZZAkYLON4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz3VBI68jSEH5KgFiV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxsFdElBL8I682Mas14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy0vow4XnM68m6Nhf14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwQ6h1o4TcPYW_iicB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugwn9FK3peHHQyYzLr94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx2rKiKJp9axraLbdZ4AaABAg","responsibility":"user","reasoning":"contractualist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz7gI_yy04N4gtao614AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
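The raw response above is a JSON array of per-comment codes, one object per comment ID, with the same dimensions shown in the coding-result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch could be parsed and indexed by comment ID, assuming every record must carry all four dimensions (the helper name `index_by_id` and the validation logic are illustrative, not part of the tool):

```python
import json

# Example batch in the shape shown above (two records excerpted verbatim).
raw = """
[
 {"id":"ytc_Ugy0vow4XnM68m6Nhf14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgwQ6h1o4TcPYW_iicB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
"""

# Field names taken from the coding-result table; completeness check is an assumption.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_by_id(raw_json: str) -> dict:
    """Parse one batch response and index its records by comment ID,
    raising if any record is missing a coding dimension."""
    records = json.loads(raw_json)
    out = {}
    for rec in records:
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} missing fields: {missing}")
        out[rec["id"]] = rec
    return out

coded = index_by_id(raw)
print(coded["ytc_Ugy0vow4XnM68m6Nhf14AaABAg"]["policy"])  # prints: regulate
```

With the records indexed this way, a lookup by comment ID (as in the "Look up by comment ID" feature above) is a single dictionary access.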