Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click any to inspect):

- "Bro is calling a digital art brush ai... bro the art brush isn't artificial inte…" (ytc_UgwJ9Qviq…)
- "exactly this. Expecting to accomplish something with AI that we haven't managed …" (ytr_UgxJWqd6y…)
- "Unplugging an AI apliance would not be a murder. It would only be incapacitated.…" (ytc_UgyysRPT9…)
- "@ the Al art is literally just feeding off REAL artists work. They have soul as …" (ytr_UgySVUVeQ…)
- "I work in mental health and to be fair, a lot of people essentially use ChatGPT …" (ytc_Ugy01xL0q…)
- "The most "Short sighted" idea. AI art, from the start, was nothing but stolen im…" (ytc_Ugz6kC0SG…)
- "I was 3 making arts, now 10 years old. Spending 7 years of doing Arts, just to f…" (ytc_UgxxA9aqL…)
- "Oh, joy, another episode of "Is it Worth the Trouble?" 🌧️ Sajjaad, you’ve manage…" (ytc_Ugw0sxO4f…)
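A minimal sketch of what the by-ID lookup amounts to, assuming the coded records are exported as JSON Lines with the fields shown on this page (the file name `coded_comments.jsonl` and the export format are assumptions for illustration, not something this page documents):

```python
# Lookup sketch. Assumes coded results are exported as JSON Lines, one record
# per comment, with the fields shown on this page. The path and record layout
# are assumptions for illustration only.
import json

def load_coded_comments(path: str) -> dict[str, dict]:
    """Index coded records by comment ID for O(1) lookup."""
    records = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            record = json.loads(line)
            records[record["id"]] = record
    return records

if __name__ == "__main__":
    coded = load_coded_comments("coded_comments.jsonl")  # hypothetical export
    # Look up one comment by its full ID (the sample IDs above are truncated).
    record = coded.get("ytc_UgxWrxZJFi1JRik388B4AaABAg")
    if record is None:
        print("comment ID not found")
    else:
        for key in ("responsibility", "reasoning", "policy", "emotion"):
            print(f"{key}: {record.get(key)}")
```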
Comment
I am pro-human, and I want to see our human race thriving and doing well. AI is by definition probably smarter than 95-99% of the human population and that's the problem. Most companies focus on developing AI for profits, but how do you teach it the moral values or love or compassion, things that really matter so it "cares" enough to NOT to kill humans? We can't even guarantee all humans turn out to be good law-abiding citizens.
If someone thinks they can program AI to be obedient, but sooner or later it is going to be smarter enough to re-program itself. If the AI doesn't have moral values or care about humans, it's maybe a logic thing for it to kill humans.
I just never understand the rationale behind creating AI or starting the AI wars because soon or later it is going to destroy ourselves until God has intervened and saved us from our stupidity.
youtube · AI Governance · 2025-08-12T00:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
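For reference, one coded record can be pictured as the following structure; the field names match the Raw LLM Response JSON below, while the dataclass itself is purely illustrative and not part of the coding pipeline:

```python
# Sketch of a single coded record, mirroring the table above.
from dataclasses import dataclass

@dataclass(frozen=True)
class CodingResult:
    id: str              # platform comment ID, e.g. "ytc_Ugx..."
    responsibility: str  # e.g. "company", "government", "developer", "none"
    reasoning: str       # e.g. "deontological", "consequentialist", "virtue"
    policy: str          # e.g. "regulate", "ban", "industry_self", "none"
    emotion: str         # e.g. "fear", "outrage", "resignation", "indifference"

example = CodingResult(
    id="ytc_UgxWrxZJFi1JRik388B4AaABAg",
    responsibility="company",
    reasoning="deontological",
    policy="regulate",
    emotion="fear",
)
```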
Raw LLM Response
[
{"id":"ytc_UgwrXsa0Tbyw9yL-VnN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxWrxZJFi1JRik388B4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwUdDzzJhD0LTJ-EfZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyFpytjzI1dS2hMPYR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgyaN50hk78PrV6nbjd4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyYOV52sXo2BkHiZsZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxG_BVgC-0tvVijtsF4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyN_FwNk33pvFgrkFJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyqsu8hBsrRCvT6bzp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzLe5dGLBEWo7R3zw54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
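Since the model returns a plain JSON array, a light validation pass is an obvious way to catch malformed batches before they reach the coding table. The allowed label sets below are inferred only from the values visible on this page and may not cover the full codebook:

```python
# Validation sketch for a raw batch response like the one above. Label sets
# are inferred from this page's sample values and may be incomplete.
import json

ALLOWED = {
    "responsibility": {"none", "company", "ai_itself", "distributed",
                       "user", "developer", "government"},
    "reasoning": {"unclear", "deontological", "consequentialist",
                  "virtue", "contractualist"},
    "policy": {"none", "regulate", "ban", "industry_self"},
    "emotion": {"indifference", "fear", "outrage", "resignation", "approval"},
}

def validate_batch(raw: str) -> list[str]:
    """Return a list of problems found in one raw LLM response string."""
    problems = []
    try:
        entries = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"response is not valid JSON: {exc}"]
    if not isinstance(entries, list):
        return ["response is not a JSON array"]
    for i, entry in enumerate(entries):
        if not isinstance(entry, dict):
            problems.append(f"entry {i}: not a JSON object")
            continue
        if "id" not in entry:
            problems.append(f"entry {i}: missing id")
        for dim, allowed in ALLOWED.items():
            value = entry.get(dim)
            if value not in allowed:
                problems.append(f"entry {i}: unexpected {dim}={value!r}")
    return problems
```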