Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
The "uman" will be replaced by AI. They cook, clean, and do other things, and th…
ytc_Ugw780i6u…
Another possible point, if we program an A.I. to have motivation, and it learns …
ytc_UgjuMFHV5…
AI is only good for porn and work, it can't fabricate a human understanding of t…
ytc_Ugz4UsLX1…
These people make trucking look awful. Truth is, at any point right now I can ma…
ytc_Ugyp8_VDP…
When AI tells you what question you SHOULD ask to the point where we no longer …
ytc_UgyUuKGvX…
ChatGPT is CRAZY intelligent as long as you know how to use it. I’m literally wa…
ytc_Ugwo0mxNY…
Translation: "I'm too lazy to dedicate the time and skill to something I admire,…
ytc_Ugymw4ZbL…
AI art looks fake. I used to like online puzzles, but not anymore since the puzz…
ytc_UgzyPA8Ce…
Comment
I understand people’s anxiety cause this is new and we can’t control it but I really don’t think this is going to be bad like we worry it will be. AI isn’t and never will be human. What is the motivation to do bad things? I don’t think that exists. AI “wants to give the correct answer. I think it’s far more likely that when people try to get AI to lie or do wrong “bad” things it just won’t cause it knows that it’s incorrect. I don’t need it to care.
youtube
AI Moral Status
2026-02-14T08:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwDNvBt1RU1jLzrODd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw1of0XxWW4F7u2CCF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwYFUHR-qCbaVRtRsJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyZLx4xtalhf6Frrad4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgylJK7D6NYyjm6_Zj54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyNG5bdZbi3Q_NFcrJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy0wEe9faydo4-wh6R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwNzkks9jFu5Hka_1x4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxP6zaYhHh9fPK_hVd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyEk3S4weaUjWW1zMB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
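The response above is a JSON array of per-comment codes. A minimal sketch of how such a response can be parsed and a single comment's coding looked up by ID (the `lookup_coding` helper and the two-entry sample are illustrative, assuming only the schema shown above):

```python
import json

# Illustrative sample reusing two entries from the raw response above.
raw_response = '''[
  {"id":"ytc_UgwDNvBt1RU1jLzrODd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyEk3S4weaUjWW1zMB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]'''

def lookup_coding(raw: str, comment_id: str):
    """Parse a raw LLM response and return the coding dict for one comment ID.

    Returns None if the ID is absent from the batch.
    """
    codes = json.loads(raw)
    return next((c for c in codes if c["id"] == comment_id), None)

coding = lookup_coding(raw_response, "ytc_UgyEk3S4weaUjWW1zMB4AaABAg")
# coding holds the four coded dimensions for that comment, or None if unmatched
```

In practice the raw model output may also contain malformed JSON, so a production version would wrap `json.loads` in a try/except and flag unparseable batches rather than crash.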