Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "There's two ways to fix this, option A is to have the government offer the first…" (ytc_Ugzlm1MgS…)
- "Blaming ai for killing people is almost the same as blaming machinery for killin…" (ytc_Ugz0EPINv…)
- "I have to say, as an illustrator/professor trying to launch their career, who is…" (ytc_UgwEEmeOA…)
- "Make good observation, but this Trump push for empire and nationalization is bul…" (ytc_UgwYJnER0…)
- "Except that AI is doing that by only financially benefiting billionaires and rem…" (ytr_Ugy3KbrKU…)
- "I watch all of this content and I am so bored about the never ending mystificati…" (ytc_UgxYLDmuB…)
- "Maybe the ai just knows that CEOs are blood sucking leeches and so it doesn’t se…" (ytc_UgzWgw_V9…)
- "I put one of my ocs \"light\" in AI. The result I got was not only that it didn't …" (ytc_Ugy_o2a8z…)
Comment
> I understood everything he was saying up until he talks about getting permission to experiment on the ai from the ai. This didn’t seem to go along with what he was saying previously, and brought up way more questions for me. I don’t want humanity’s ability to be safe, happy, kind etc to be compromised. Why let the robot have enough power to be able to overthrow all forms of recognizable decency? Discussing and preventing that seems to me to be one of the bigger issues. Is he saying that humans should perhaps “let” a sentient ai/ a self aware being become a fellow decision maker or an equal one? Wouldn’t that mean that we would be asking ai for permission to let it become more powerful? “hey ai, should we let you become more powerful?” I want to consider any form of life’s experience, be it self aware, or with feeling, it effects us all. However, I do not think it a good idea to give something that could hurt me more power than me.
Source: youtube · Video: "AI Moral Status" · Posted: 2023-01-14T16:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id": "ytc_UgwUSecP5c_EzHZsT1V4AaABAg", "responsibility": "unclear",   "reasoning": "mixed",            "policy": "unclear",   "emotion": "outrage"},
  {"id": "ytc_UgwfiB7InMtCa2CMNgV4AaABAg", "responsibility": "user",      "reasoning": "consequentialist", "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_UgwVEMU8VorhbU5w3mt4AaABAg", "responsibility": "developer", "reasoning": "deontological",    "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_UgzGV9EdsMXNmQBaOzB4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwDCPLHM6iI3YUp1JV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban",       "emotion": "fear"}
]
```
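A raw response like the one above can be parsed into per-comment records keyed by ID. The sketch below is a minimal example, not the tool's actual code; the `CODEBOOK` values are only those observed in this sample, and the real codebook likely includes more categories per dimension.

```python
import json

# Allowed values per dimension, inferred from the codes visible in this
# sample (assumption: the actual codebook may define additional values).
CODEBOOK = {
    "responsibility": {"developer", "user", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"regulate", "ban", "liability", "unclear"},
    "emotion": {"fear", "outrage"},
}

def parse_batch(raw):
    """Parse one raw LLM response (a JSON array of coded comments)
    into a dict keyed by comment ID, validating each dimension."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in CODEBOOK.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in CODEBOOK}
    return coded
```

With the batch parsed this way, the "look up by comment ID" view reduces to a plain dict lookup, e.g. `parse_batch(raw)["ytc_UgzGV9EdsMXNmQBaOzB4AaABAg"]["emotion"]` returning `"fear"` for the comment shown above.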