Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Human or robot? By the things I’m gonna say below 👇🏻 Blood human has it Meat h…" (ytc_UgxF3uStR…)
- "AI rom google, Amazon, Microsoft, and Meta… all subsidized by the government and…" (ytc_Ugyt31XR7…)
- "IT WILL NEVER WORK. TOO MANY VARIABLES IN DRIVING. IT WILL NEVER WORK. DON'T …" (ytc_UgzQHLlGz…)
- "As a artist I love your art and hate AI it is taking over artistic ❤…" (ytc_UgzPpCkfM…)
- "I'd argue that more and more people these days are unconscious, unintelligent, a…" (ytc_UgwO8vT2O…)
- "But does the ai know that it doesn’t know and can’t answer, but still does becau…" (ytc_Ugxu_UZ_5…)
- "17:59 china has not integrated a.i n robotics, they use propanda to pretend they…" (ytc_Ugy6IpYAb…)
- "The answer to this question is that the AI's will eventually end up aligning the…" (ytr_UgxjUkKXr…)
Comment
Complete hogwash!
The classification of legal AI as high-risk is a blatant overreach, clearly designed to protect the interests of legal professionals rather than addressing genuine risks. AI in law offers immense potential—analyzing case law, reducing bias, and improving access to justice—all of which could make the system fairer and more efficient. Instead of fostering this progress, the EU AI Act places unnecessary hurdles that stifle innovation in the one field where AI’s impartiality could deliver the greatest societal benefits. Are we really to believe that automating case analysis poses more risk than AI in healthcare or finance, where human lives and livelihoods are directly at stake? This reeks of professional preservation disguised as ethical concern, and it’s time we question whether this classification is about protecting citizens or protecting lawyers.
youtube · AI Governance · 2024-11-24T21:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxBaHk_zS6K6gEX1it4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwNbaPbt0BOymcT8zl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy8CxU6cl0eLjq31iF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzSNPS6ioVd0S3o2o94AaABAg","responsibility":"government","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzPhWlmKbwkuapO5Vx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzuhxTp72yOF_GVPjF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxBNuD8zHs53FzA3UJ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxWWG54GJmWymlTysF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugzijq8vSkVr7MC4X1x4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugw-WqjhVx_esUGdRSR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
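The raw response above is a JSON array with one object per coded comment, each carrying the four coding dimensions shown in the table. A minimal sketch of how such a batch response could be parsed and indexed for lookup by comment ID is below; the `ALLOWED` value sets are inferred from the sample output only (the full codebook may define additional values), and the function name `index_response` is illustrative, not part of the tool.

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# The actual codebook may permit more values than appear here.
ALLOWED = {
    "responsibility": {"government", "company", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"outrage", "fear", "approval", "mixed"},
}

def index_response(raw: str) -> dict:
    """Parse a raw batch response and index codings by comment ID,
    rejecting any row whose value falls outside the allowed sets."""
    coded = {}
    for row in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: unexpected {dim}={row.get(dim)!r}")
        coded[row["id"]] = row
    return coded
```

With the response indexed this way, looking up a single coded comment is a dictionary access on its full ID, e.g. `index_response(raw)["ytc_UgxBaHk_zS6K6gEX1it4AaABAg"]`.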