Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Thank you for this thoughtful comment. I see what you mean: meritocratic systems (and by extension algorithmic/AI systems) tend to reward whatever already fits their internal logic — „efficiency“, optimisation, measurable output — while slowly eroding everything that doesn’t. That incestuous reinforcement creates the illusion of progress while actually narrowing human possibility. Terry Gilliam’s Brazil is a brilliant reference — the ultimate bureaucratic dystopia where the system becomes its own purpose. This is exactly why I keep coming back to individuation (Jung) and self-overcoming (Nietzsche). True innovation and human depth can only emerge when we step outside the dominant framework — through shadow work, embodied experience, and honest confrontation with the unconscious. Otherwise we just get new grapes from the same vine, as you put it. The German Idealism reference is interesting — I’d love to hear more about how you see the connection to Abrahamic roots and cultural amalgamation. That tension between inherited structures and authentic becoming feels very relevant to protecting human essence today.
youtube AI Moral Status 2026-04-24T02:1…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        contractualist
Policy           regulate
Emotion          approval
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytr_UgzsLGHHJqclobJuLJN4AaABAg.A06VkzRFr6aA07WaP-o1ZQ", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytr_UgzDpmYgfY7B7h62F9J4AaABAg.A05y9rSpUk_A08DiCfm_JT", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgyzlyJXAQdnmdOyUvx4AaABAg.A05PHUcdi6hA06SXvuwy3r", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytr_UgxXaovEcxpkSDsIB9R4AaABAg.A05AF1lK-ZBA05EVzn1imz", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytr_UgxXaovEcxpkSDsIB9R4AaABAg.A05AF1lK-ZBA05f0ngeF9h", "responsibility": "developer", "reasoning": "deontological", "policy": "industry_self", "emotion": "outrage"},
  {"id": "ytr_UgwQYasc5NkhgUVPazx4AaABAg.A053wAYrKKyA0GSnAiPIWo", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytr_Ugzc7ZLyEkgKExkO_454AaABAg.A052UpH6EWPA05epbydKhp", "responsibility": "developer", "reasoning": "deontological", "policy": "industry_self", "emotion": "outrage"},
  {"id": "ytr_UgwQ5qZ1Qasj0egSySl4AaABAg.A050bK4PJLiA07WhyUDUAj", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytr_Ugy06e3jyFs8rOvEh4t4AaABAg.A0t2KA5J5Q5A0tHgVtGtbE", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytr_UgxSRKLd8hcSdMivJXh4AaABAg.AVxDmz6WVXaAVyL1SBk8to", "responsibility": "distributed", "reasoning": "contractualist", "policy": "regulate", "emotion": "approval"}
]
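A minimal sketch of how a coded record can be recovered from a raw response like the one above: parse the JSON array, index the records by `id`, and look up the entry for the comment being inspected. The variable names (`raw_response`, `by_id`, `coded`) are illustrative assumptions, not part of the actual pipeline; the record shown is the last entry from the raw response above, which matches the coded dimensions in the table.

```python
import json

# Illustrative excerpt of the raw LLM response above (last record only);
# in the real pipeline the full array would be parsed the same way.
raw_response = '''[
  {"id": "ytr_UgxSRKLd8hcSdMivJXh4AaABAg.AVxDmz6WVXaAVyL1SBk8to",
   "responsibility": "distributed", "reasoning": "contractualist",
   "policy": "regulate", "emotion": "approval"}
]'''

# Parse the array and index records by their comment id.
records = json.loads(raw_response)
by_id = {record["id"]: record for record in records}

# Look up the coding for the comment under inspection (hypothetical id reuse).
coded = by_id["ytr_UgxSRKLd8hcSdMivJXh4AaABAg.AVxDmz6WVXaAVyL1SBk8to"]
print(coded["responsibility"], coded["reasoning"], coded["policy"], coded["emotion"])
```

Indexing by `id` makes the lookup robust to the model returning records in a different order than the comments were submitted.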