Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Can be a good idea, but I have my doubts because: I really wonder if they are aware that the reason no one stops this race currently - besides them - is that, who stops, is falling behind. And that there is a chance that this is a "winner takes it all" scenario when it comes to... well, humanity as a whole. Basically Internet all over again, with no big AI giants in the EU, and EU citizens will have to really on all the stuff they use for the US - again at the grace of US Tech - again. Besides, the EU doing a lot in providing the foundational research. But at least we can call first on regulating everything again, even the cases we do not know yet. The little difference this time is that there is, at least the possibility, that this could be the end game. And if this however big or slim chance materialize (AGI / ASI this decade, or if we even already have it behind closed doors), you really have to wonder if in the face of almost "god-like" power (scalable super-intelligence & speed), EU regulations, or what they are supposed to regulate play any role, if the EU has no access to it. We will see... But great that we had strict regulations in the meantime :-D But like always, Administrators are more concerns what lies on their desk now, instead of in 5 years. We will see. The only thing I am sure about is, that I have a hard time to see how they are helping the AI Industry and the emergence of a competitive AI startup landscape. PS: The GDPR is not a positive example how to implement something, and how it turned out in certain aspects. In my perception, it was a total mess, and caused much damage - but created new jobs for people who help to deal with it at least 😀
youtube AI Governance 2024-03-13T21:3… ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_Ugxe-T1x93dEgLtQGM14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgwvBGMgyfXFJ8IeFLR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_Ugyxaf8cfMuXix4AQ7R4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgykkB4EoqVpU53t75p4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgxCq_-JHSAP0QMYl1l4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgwMXusU03x8er8xb9h4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
 {"id":"ytc_Ugzsd71KHt8GllaRCzV4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"},
 {"id":"ytc_Ugx4ebphzAT_5AUB-ax4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgzA-wFMaDgHD-XIFzF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgwagqtOQE9jPDS0vJp4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"approval"}]
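A minimal sketch of how a batch response like the one above might be mapped back to individual comments. The dimension names are taken from the raw JSON; the function name, the comment ids in the usage example, and the fallback-to-"unclear" behaviour (one plausible explanation for why a comment whose id is absent from the model output would be coded "unclear" on all four dimensions) are assumptions, not the app's actual implementation.

```python
import json

# The four coding dimensions seen in the raw response above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def parse_coding(raw: str, comment_id: str) -> dict:
    """Return the coding for one comment id from a batch LLM response.

    Hypothetical sketch: ids missing from the response, or missing keys,
    fall back to "unclear".
    """
    records = {item["id"]: item for item in json.loads(raw)}
    item = records.get(comment_id, {})
    return {dim: item.get(dim, "unclear") for dim in DIMENSIONS}


# Usage with a shortened, made-up response and ids:
raw = ('[{"id":"ytc_example","responsibility":"government",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
print(parse_coding(raw, "ytc_example"))   # coded on all four dimensions
print(parse_coding(raw, "ytc_missing"))   # every dimension falls back to "unclear"
```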