Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "My father had an intermittent ischemia of his hepatic portal vein. This would te…" (ytc_Ugyij5Cw0…)
- "We build this racist world ai doesn't have its own consciousness to judge the ra…" (ytc_Ugzz2ofSK…)
- "The monster in Ex Machina are not the AI. It's the men. Watch it again through t…" (ytc_UgxY2JByr…)
- "Interesting take that reflects the main lesson learned in WWII—unchecked aggress…" (rdc_mctmtbb)
- "Automatically, ai should be a free service completely available to everyone, and…" (ytc_UgwR3PoVs…)
- "I got some bad news for you. This is using generative AI to redraw the image.…" (ytr_UgxJfgubX…)
- "Thank you for going into the environmental impact as well! I feel like a lot of …" (ytc_UgyATrWJn…)
- "Everybody suddenly gets scifi brain whenever LLMs that are being called AI comes…" (ytc_UgxvBMXFx…)
Comment
The next big breakthrough in AI won't come from AI but from the tools we invent to help us manage it, including governance tools.
Why should a single organisation or institution govern how AI is going to be distributed or consumed by other organisations or institutions?
Part of the point of having standards is that they are a multi-stakeholder response to a common problem. Having one stakeholder determine what that problem is going to be for all other stakeholders obviously doesn't scale.
The next big breakthrough has to come from the disruption of our current political systems, so that stakeholders in AI, from suppliers through to consumers, can determine what the policies or rules are going to be on a case-by-case basis.
It won't come from a single organisation such as central or local government creating constraints on institutions or organisations before they have even had the chance to define what the problem is.
Source: youtube · 2025-04-24T19:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgylLAQYPGExeBVEObV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwXa0f8hVACxq_DPi54AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx_OGsc_OXgsU_VcrZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx9KtkwgLQTisGDdBx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx7dfTBPbGCAvB5Imd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugxukg3rTFn2jL2prRR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgyOEXCc60mpTmpwHfh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgznK_phH1g46YI4uNV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzPW9-ufMvHQPJwgdx4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgzzpyYpfO5fBQVIoSh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}
]
```
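Responses like the one above are only useful if every record conforms to the codebook before it is stored. Below is a minimal validation sketch in Python; the set of allowed values per dimension is inferred from the codes visible in this dump (the real codebook may define more categories), and the `validate_response` helper name is our own, not part of any pipeline shown here.

```python
import json

# Allowed values per coding dimension — assumed from the codes seen in
# this dump; the actual codebook may be larger.
SCHEMA = {
    "responsibility": {"government", "company", "developer",
                       "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none"},
    "emotion": {"fear", "outrage", "indifference", "approval",
                "mixed", "resignation"},
}

# Comment-ID prefixes observed above (YouTube comment/reply, Reddit comment).
ID_PREFIXES = ("ytc_", "ytr_", "rdc_")


def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and check each record against SCHEMA.

    Raises ValueError on the first malformed record, so a bad coding
    is caught before it reaches storage.
    """
    records = json.loads(raw)
    for rec in records:
        if not str(rec.get("id", "")).startswith(ID_PREFIXES):
            raise ValueError(f"unexpected comment id: {rec.get('id')!r}")
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec['id']}: bad {dim} value {rec.get(dim)!r}")
    return records
```

Failing fast here also surfaces the common failure mode where the model invents an off-codebook label (e.g. an emotion of "anger") instead of picking from the allowed set.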