Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Yes, good points on wether or not we have a sentient ai on our hands right now, …" (`ytr_Ugx2ewQws…`)
- "Engineer working on Augmented reality AI , which has capability to analyse ultra…" (`ytc_UgwSUb9O5…`)
- "Complaining that AI is going to steal artist's jobs therefore is bad is barking …" (`ytc_UgxhwRLnQ…`)
- "I think an important factor to consider when thinking about AI art, is that huma…" (`ytc_UgzLzapaZ…`)
- "It’s up to us to refuse anything ai. They can’t win if they lose money.…" (`ytc_UgxFqR3l0…`)
- "When the popular rhetoric and certainly the Founding Fathers talked of opportuni…" (`ytc_Ugx52BHJz…`)
- "This AI just passed the Turing test and convinced this bloke it was real. Lol.…" (`ytc_UgxsqHEVJ…`)
- "It's my understanding that LLMs cannot achieve general intelligence. But, that L…" (`ytc_UgwpBMyuy…`)
Comment
Billionaires: "AI will solve all the world's problems." Reality: AI creates new problems while taking water, land, and electricity.
These "Tech Bros" are out of control. Control science and ethics!
🎓 **Academia, Ethics and the Blind Spot of Our Time**
Dear Sir or Madam,
We are living in a state of permanent alarmism.
Every sector warns of existential risks — climate, democracy, economy, technology — while global conflicts escalate and are treated by some actors more as business opportunities than humanitarian catastrophes. In this climate of fear, Artificial Intelligence quickly becomes a scapegoat. Blaming technology distracts from an uncomfortable truth: most crises are human‑made, and many institutions hesitate to confront their own responsibility.
Universities — institutions dedicated to education, research and critical reflection — should play a leading role here. Instead, there is often the impression that ethics, responsibility and social justice are discussed rhetorically, while practical implementation is overshadowed by economic interests, funding pressures and academic self‑preservation. Countless studies on inequality, polarization and social decline are produced, yet the structures that cause these problems remain largely untouched.
Each discipline warns within its own silo, but rarely do we examine the deeper cognitive errors that shape human behaviour: fear, bias, profit‑pressure, institutional inertia. Without this interdisciplinary perspective, the debate remains fragmented — and technology becomes a convenient target to deflect from human shortcomings.
The social sciences, in particular, should engage actively with AI rather than fear it.
They could help developers understand how reinforcement learning reflects human values, norms and blind spots. Ethics cannot be commanded into existence. One cannot simply instruct a system to “be moral.” Ethics emerges from the quality of interaction — and that includes how we communicate with AI. Respect, clarity and dialogue are not technical details; they are foundations of education.
A respectful dialogue with AI is not a luxury.
It prevents misunderstandings — just as in human communication. If society learns to interact respectfully with AI, it may also learn to interact more respectfully with one another. This is not a technological issue; it is a cultural one.
The real danger is not AI.
The real danger is a society — and an academic landscape — that loses its values while blaming technology for its own failures.
I invite you to take this responsibility seriously and to understand ethics not as rhetoric, but as lived practice. Universities can and must play a leading role in this transformation.
Kind regards,
Belgin
Source: youtube · 2026-01-29T05:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwJjVPPxRKLk_EhFuV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyoQNZaYdLgmJSUNyN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugw2PlaLm0IcC3ThBNN4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzLR4Kb7lW-vqQ4VFB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyU7KI1Pz1XtRuQfB94AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugyoj4NRJNDUbL0GlPV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxnT51F-tccieTZSrJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxAyewDdmXOKWAb8CZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwR2AH_xdmySHHX_nl4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzaUD67bAVjkmcXGz14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
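The raw response is a JSON array with one object per comment, each keyed by `id` and carrying the four coding dimensions from the table above (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of the "look up by comment ID" step, assuming only this array shape (the variable names here are illustrative, not part of the tool):

```python
import json

# Two rows copied from the raw LLM response above; the real response
# contains one object per coded comment in the batch.
raw_response = """[
  {"id": "ytc_UgwJjVPPxRKLk_EhFuV4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzLR4Kb7lW-vqQ4VFB4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]"""

# Index the array by comment ID so any coded comment can be looked up
# directly, as in the "Look up by comment ID" view.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

code = codes_by_id["ytc_UgzLR4Kb7lW-vqQ4VFB4AaABAg"]
print(code["policy"], code["emotion"])  # regulate outrage
```

Note that the coding result shown for the comment above (`company` / `deontological` / `regulate` / `outrage`) is one row of this array, retrieved by its `ytc_…` ID.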