Raw LLM Responses
Inspect the exact model output behind any coded comment, either by looking it up directly by comment ID or by browsing the random samples below.
Random samples — click to inspect
There's nothing inherently in the algorithms for facial-recognition that "destro…
rdc_fg1jg9y
I don't think the computers can take over. AI doesn't exist. There is no "artifi…
ytc_UgwZF8X1V…
I see the biggest problem is the mindset of profit and money .if i had a super s…
ytc_UgwOqGNVO…
When i say i hate AI,im proud of saying that and i dont actually care what other…
ytc_UgxZdnVnt…
All the code has been written years ago. AI is great at retriving code from achi…
ytc_Ugy4DJEOh…
As an AI researcher, I can tell you that you should always command it without sa…
ytc_UgxbLfNGp…
I didn't realize it was this guy when I saw the thumbnail and title. This whole…
ytc_UgwT1Yl-i…
I hate the way the AI talks, not it's accent, just it's tone of voice…
ytc_UgwzAs0Nn…
Comment
So if I understand correctly in the medical field AI can play the role of a doctor that will want to euthanize you be because it doesn't think you can live longer than 6 months or it can try to keep you alive at all cost even if it means freezing your brain for a thousand years.
youtube
AI Moral Status
2026-03-01T22:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugym54oiLUt1TYNQSq14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxHdKsm0gkRKlKROlN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy0zdC9ezKAIy8U1b54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzHR5jrfIZrHMd-Ng14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwwPe_H4u0iP_6B5AZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzEP8QtAxXI2iKBInF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwvP5rLLHKrz3O3fGN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyKhCYHV9sVevWPXLZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwpRuexR2h9H6l9nt54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzyGMgTSlPoKXYRanZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
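A raw batch response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal example, assuming the label sets are exactly the values visible in this sample (the full codebook may define more); the function name `validate_batch` is illustrative, not part of any shown API.

```python
import json

# Allowed labels per coding dimension, inferred from the sample
# response above (assumption: the real codebook may be larger).
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "government",
                       "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and check every coded dimension."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            value = row.get(dim)
            if value not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={value!r}")
    return rows

# Hypothetical single-row batch for illustration.
raw = ('[{"id":"ytc_x","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"liability",'
       '"emotion":"fear"}]')
rows = validate_batch(raw)
print(rows[0]["emotion"])  # fear
```

Validating eagerly like this surfaces off-codebook labels (a common LLM failure mode) at ingest time rather than at analysis time.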