Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytc_UgxOkZvcA…`: "Surely the gorilla problem is irrelevant because it’s more about what we will ev…"
- `ytc_Ugw-y2qnN…`: "That's all AI hype. But, it is still important to talk about this low possibilit…"
- `ytc_UgxncPAub…`: "Yes, It is incredibly different. When data is shoved into these training algor…"
- `ytc_UgwjoOZfZ…`: "Every micromovement and presentation is programmed. AI will only \"take over the …\""
- `ytc_Ugysxipsw…`: "In 5 years we will laugh at these ridiculous predictions. AI is already overprom…"
- `ytc_UgzU89E8t…`: "I work at a shoe manufacturing company on Purchasing Department. When the month …"
- `rdc_j6fjssp`: "In the USA, under the Fair Standards Act, [is straight up **illegal**](https://w…"
- `ytc_Ugyxl1MR9…`: "AI steals content as much as an artist steals from every content they have ever …"
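The lookup-by-comment-ID step can be sketched by indexing a parsed batch of codings by their `id` field. This is a minimal illustration: the field names follow the raw LLM response shown further down, but the two sample rows here are invented, not real data from this batch.

```python
import json

# Hypothetical coded rows in the same shape as the raw LLM response
# (field names taken from it; IDs and values here are illustrative only).
raw = '''[
  {"id": "ytc_aaa", "responsibility": "company", "emotion": "outrage"},
  {"id": "ytc_bbb", "responsibility": "ai_itself", "emotion": "fear"}
]'''

# Build an index keyed by comment ID for O(1) lookup.
by_id = {row["id"]: row for row in json.loads(raw)}

print(by_id["ytc_bbb"]["emotion"])  # fear
```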
Comment
I think humans in general have a much more restricted imagination than they like to believe. People describe the AI becoming a psychopath. It is an alien mind. If we met a sapient alien species, we would likely have more in common with them from the perspective of similar neural structures, and thought processes, than an AI. They already don’t understand how AI are making decisions/their thought processes. I’ve read people suggest that we should program them with the writings of the most moral people who have existed. That’s no way to guarantee alignment with our priorities. In fact, if an AI becomes super intelligent, it will probably be impossible to maintain alignment.
The Terminator war trope is, I think, a way for us to imagine resisting an AI overthrow within the bounds of restricted imagination. To a super-intelligent AI, we would be as threatening to it, as a person who could only complete one thought per hour would be to you or I. It would likely eliminate us with technology that would be like magic to us. We would have as much chance, if we realized what was happening, to defeat a super-intelligent AI, as an ant hill in the backyard would have against us if we decided to eliminate it. We need to put the brakes on.
Platform: youtube · Video: AI Moral Status · Posted: 2025-12-13T16:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | virtue |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgyaKl8IZO5D7w3Rkk14AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzF8qUgdttemRw4Z7x4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugy2ie-upxxtvBilFHR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwGDTMkJK5_ZCeDfOx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxiLfSl74aDyNpP-Ol4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgzqV6T3IpnFeda6mEh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw1Fv7PllyCUNbDlbh4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz2dkk_YHseodvIA654AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxFZtc2qXOq8F2Ep8l4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyKS4U7Wzj8WzRH1dt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"}
]
```
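Since the raw LLM response is free-form JSON, each batch is worth validating before the codes are stored. The sketch below checks every row against the four codebook dimensions from the table above. Note the allowed-value sets are an assumption inferred from the values visible in this batch, not a documented schema.

```python
import json

# Codebook dimensions with allowed values. ASSUMPTION: these sets are
# inferred from the sample batch above, not from a published codebook.
CODEBOOK = {
    "responsibility": {"ai_itself", "company", "developer", "user", "unclear"},
    "reasoning": {"virtue", "deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "resignation", "mixed", "unclear"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose values are on-codebook."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if "id" not in row:
            continue  # a coding without a comment ID cannot be stored
        if all(row.get(dim) in allowed for dim, allowed in CODEBOOK.items()):
            valid.append(row)
    return valid

# Two illustrative rows: the second uses "robot", which is off-codebook.
raw = ('[{"id":"ytc_x","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"fear"},'
       '{"id":"ytc_y","responsibility":"robot","reasoning":"virtue","policy":"none","emotion":"fear"}]')

print(len(validate_codings(raw)))  # 1 (the off-codebook row is dropped)
```

Dropping invalid rows rather than repairing them keeps the pipeline simple; rejected IDs could instead be queued for re-coding.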