Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "If Robots demand rights, we should probably give it to them. Because at that poi…" (ytc_Ughby7Ihz…)
- "Since AI is built on our messy, imperfect world, it’s bound to blurt out a few n…" (ytr_Ugxx9qpGu…)
- "My old team manager recently posted a video on his Facebook of a random ass guy …" (ytc_UgwQx4n4w…)
- "I don’t think we will ever be able to make a truly sentient AI. The best we can…" (ytc_Ugwc9eFzi…)
- "hai / i just wanted to express how much important how your channel has been for me…" (ytc_UgyRUvDxJ…)
- "this scenario is happening in the hi-tech industry, but it will take much more t…" (ytc_Ugzt8D5YQ…)
- "Sigh... taking a risk because there are a lot and I mean a lot of hostile people…" (ytc_UgxBqh02K…)
- "What If Sam, Elon and the other big players are just shape shifters? When Cerber…" (ytc_UgwITF4vE…)
Comment
When we approach the subject in terms of postmodern philosophy, of course, it seems possible to encounter a suspicious situation. For example, it can be argued whether a scientific achievement belongs to human or artificial intelligence. This is the work of philosophers. What is democratic and beneficial is to extend access to artificial intelligence to all people. Supervision and control will emerge as a social consequence. A lot can be said about this; but I am sure they will not be harmful. They will be very useful.
youtube · AI Governance · 2023-04-18T03:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

```json
[
{"id":"ytc_UgzkwNGcmwP9qpL-j0x4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgypOPMIJYiESP553E54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwVOMYTX8-sXSDWwo94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxRFqh0i3r2Q_k9HTl4AaABAg","responsibility":"government","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzgTOuyIjpT-MFfUCN4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyuLdqlCxaIWif-ngV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwpIkUZAX-gyxIgC2l4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyr6akwCh-QxA86pLR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxWLoH9EhF9aWEXcQJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwchyYLGGh104PMPb54AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"mixed"}
]
```
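A raw response like the one above can be checked before it is stored. The following is a minimal sketch of such a validator; the sets of allowed category values are inferred only from the samples shown on this page and may be incomplete relative to the full codebook, and the function name `validate_codings` is illustrative, not part of any real pipeline.

```python
import json

# Allowed values inferred from the sample responses above
# (assumption: the real codebook may define more categories).
ALLOWED = {
    "responsibility": {"government", "company", "developer", "distributed", "ai_itself"},
    "reasoning": {"consequentialist", "contractualist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"approval", "fear", "outrage", "resignation", "indifference", "mixed"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed coding rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Comment IDs in the samples start with "ytc_" or "ytr_".
        if not isinstance(row, dict) or not str(row.get("id", "")).startswith(("ytc_", "ytr_")):
            continue
        # Every dimension must carry a value from its allowed set.
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

sample = '[{"id":"ytc_x","responsibility":"government","reasoning":"unclear","policy":"none","emotion":"fear"}]'
print(len(validate_codings(sample)))  # 1
```

Rows that fail validation are dropped rather than repaired here; a production pipeline would more likely flag them for re-coding.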