Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I understood everything he was saying up until he talks about getting permission to experiment on the ai from the ai. This didn’t seem to go along with what he was saying previously, and brought up way more questions for me. I don’t want humanity’s ability to be safe, happy, kind etc to be compromised. Why let the robot have enough power to be able to overthrow all forms of recognizable decency? Discussing and preventing that seems to me to be one of the bigger issues. Is he saying that humans should perhaps “let” a sentient ai/ a self aware being become a fellow decision maker or an equal one? Wouldn’t that mean that we would be asking ai for permission to let it become more powerful? “hey ai, should we let you become more powerful?” I want to consider any form of life’s experience, be it self aware, or with feeling, it effects us all. However, I do not think it a good idea to give something that could hurt me more power than me.
YouTube · AI Moral Status · 2023-01-14T16:2…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           liability
Emotion          fear
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgwUSecP5c_EzHZsT1V4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwfiB7InMtCa2CMNgV4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwVEMU8VorhbU5w3mt4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzGV9EdsMXNmQBaOzB4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgwDCPLHM6iI3YUp1JV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]
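A raw response like the one above can be turned into per-comment codes with a short parsing step. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come directly from the output shown; the validation logic itself is a minimal sketch, not the tool's actual pipeline, and the `parse_codes` helper name is an assumption for illustration.

```python
import json

# Expected fields per record, taken from the raw response above.
EXPECTED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codes(raw_json: str) -> dict:
    """Parse the model's JSON array into a dict keyed by comment id.

    Records missing any expected field are dropped, so a partially
    malformed response still yields usable codes. (Sketch only; the
    real pipeline's validation may differ.)
    """
    records = json.loads(raw_json)
    out = {}
    for rec in records:
        if EXPECTED_FIELDS <= rec.keys():
            out[rec["id"]] = {k: rec[k] for k in EXPECTED_FIELDS - {"id"}}
    return out

# Example with one record from the response above:
raw = ('[{"id":"ytc_UgzGV9EdsMXNmQBaOzB4AaABAg",'
       '"responsibility":"developer","reasoning":"consequentialist",'
       '"policy":"liability","emotion":"fear"}]')
codes = parse_codes(raw)
print(codes["ytc_UgzGV9EdsMXNmQBaOzB4AaABAg"]["policy"])  # liability
```

Keying by comment id makes it easy to join the codes back onto the original comments, as the "Coding Result" table above does for a single comment.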