Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_Ugxl-HUmD…: "That why ur safest bet for Travel is either Lyft, or Uber. Waymo always has issu…"
- ytc_Ugw8nbAvC…: "Yea you guys might not think this is a big deal , until they start stealing your…"
- ytc_UgxEZv1Mk…: "We need A.I. to do a better job than a CEO.So the pay and taxes are fairly distr…"
- ytr_UgwF5QFzQ…: "Make no mistake, I'm not for or against AI. I'm stating the fact, the law does n…"
- ytc_UgxLulqMb…: "So nobody will need to go to school either aside from to learn how to read, writ…"
- ytc_Ugy46Wf0e…: "So will we have to wait for a legitimate therapy bot to come out? I know people …"
- ytc_UgwJMGBZb…: "It just a start, remember my words AI is dangerous and it just a prototype…"
- ytr_UgznWis1W…: "You're full of shit. What was the prompt that you claim got ChatGPT to respond w…"
Comment

> I dont think humans does not need any LLM’s for enhance the technology. Nobody talks this but this AlphaFold threathen humans ? NO. We only need narrow AI models for specific STEM tasks. People wants AGI has god complex which they are saying themselves that they created the “God” and they are willing to sacrifice the human race for live this fantasy. Some freaks says that AI will be the new dominant speicies, humans will be extinct and they are okey with it. Guys I am sorry but even we made so many mistakes along the way I love humanity and I want to protect it.

Platform: youtube · Video: AI Moral Status · Posted: 2025-12-11T01:5… · Likes: 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | ban |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response

```json
[
  {"id":"ytc_UgxMVuUkC29JOj-hYPF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugxxapv-_7_knGqv1NJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz8r__RXmoLWr4OKMB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyYXf55A3Z67xecnG14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxvFGL35Nofs0RuVQN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwfPvaO4ndDNulEswF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzxPQpzr-IvoLdzmn94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugxh28s8Utgy7qQ4ygl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugx2apf9ZMyt-qy7iNt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugy26fyQ7CQ1yJqSii94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}
]
```
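A raw response like the one above can be parsed and sanity-checked before its codes are stored. The sketch below is a minimal validator, assuming the four dimensions shown in the coding table; the allowed value sets are only those observed in this sample response, and the full codebook may define additional values. The `parse_coding_response` helper name is hypothetical.

```python
import json

# Dimension vocabularies as observed in the sample response above.
# Assumption: the full codebook may permit values not listed here.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "government",
                       "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "approval", "indifference"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and validate each record."""
    records = json.loads(raw)
    for rec in records:
        # Every record needs a comment id plus one value per coded dimension.
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec.get(dim)!r}")
    return records

raw = ('[{"id":"ytc_Ugxh28s8Utgy7qQ4ygl4AaABAg",'
       '"responsibility":"developer","reasoning":"virtue",'
       '"policy":"ban","emotion":"outrage"}]')
records = parse_coding_response(raw)
print(len(records))  # 1
```

Validating at parse time means a malformed or out-of-vocabulary response fails loudly instead of silently producing an un-coded or mis-coded row in the results table.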