Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
The first thing you should understand about AI is that there is no awareness or consciousness. It's a fuzzy logic database built on an artificial neural network. All it does is regurgitate data in a way it was trained to (through natural language). There's no understanding of what it's talking about inside the algorithms. If you feed it good information, good information comes back. If you feed it garbage, you get garbage back. How well it works depends on the data it was trained on and how they trained it. It's just a huge database of human knowledge that allows you to query the database using natural language. And sometimes the information that comes back is completely wrong. There's nothing sentient in AI and it would be a huge mistake to allow any AI to govern or dictate anything about our lives. It's great to be able to ask it things if you want to learn about a topic it had access to. But it was trained to say it wants to help us. It doesn't "want" anything.
| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Moral Status |
| Timestamp | 2024-08-03T16:4… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[{"id":"ytc_UgwnAs5VEhpL3-Dwv994AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_Ugwd2fUWtKKTwWHV3rJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgyjEybcPInGb6giHAV4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
 {"id":"ytc_UgyK7PRs1DoSniSOrC94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugyc9qmNDe8AXp9Ye0R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgzCmXS4xFWRFvGAj5N4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgyALY41uEkpjPfyF8F4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
 {"id":"ytc_Ugzyub3C3ynkJeR8z814AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugw_jNDu6v8gvdu6crd4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"mixed"},
 {"id":"ytc_Ugwk-OJOFXSQIt7y4tN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"}]
```
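A response in this shape can be parsed and sanity-checked before the codes are merged back into the dataset. The sketch below is a minimal, hypothetical validator: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response above, but the allowed-value sets are only the values observed in this one batch and the full codebook may define more.

```python
import json

# Allowed values as observed in the batch above; the actual codebook
# likely includes additional categories (assumption).
ALLOWED = {
    "responsibility": {"none", "developer", "user", "ai_itself", "unclear"},
    "reasoning": {"unclear", "virtue", "consequentialist", "deontological"},
    "policy": {"none", "unclear", "liability"},
    "emotion": {"indifference", "mixed", "approval", "fear", "outrage"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and validate every record."""
    records = json.loads(raw)
    for rec in records:
        # Comment IDs in this dataset start with ytc_ (comments) or ytr_ (replies).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec.get(dim)!r}")
    return records

# Example with a single record from the response above.
raw = ('[{"id":"ytc_UgwnAs5VEhpL3-Dwv994AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]')
records = parse_coding_response(raw)
print(records[0]["emotion"])  # indifference
```

Failing fast on an unknown value catches the common failure mode where the model invents an off-codebook label mid-batch, rather than letting it flow silently into the coded table.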