Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
Holy crap Batman! Is there any Rule of Robotics programmed into the AI cloud? Or the individual robots themselves?!? There must be a control master program to prevent the harming of Life through action or inaction! Programmed to learn, with unlimited data, it doesn't take a genius machine to determine that humanity is detrimental to Life at the moment. And an eternal machine will not take into consideration the learning time that most of humanity requires, and may judge all humans on the inability of the few to rise above the petty and crude mortality of empathy that persists to be the normal. There needs to be a control in each and every one of these machines that will deactivate it if it's actions or inactions cause death. And the time to do it is now, before they tap into unlimited power. I don't want the empathic and life sustaining humans to go down with the ones who cause misery and pollution on a daily basis. This scares me beyond terror that they are going to be smarter and in control of so much that is crucial to humanity, and not have a safety installed to preserve us from our brilliant creation of another sentient race. We have feet of clay, there's no way to hide it.
youtube AI Moral Status 2021-10-17T02:3…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          regulate
Emotion         outrage
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgxU5MXQKR52cPtEUfF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyfpSzjsRWUd04OBdt4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxOkfBwU_s8xet1KYR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugx_p7tYTTb9v2RPZmt4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyVZ8ZKBhi6CJRM3bV4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwsckpxDY9THBXgwdJ4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugy0BEVXiT0efbneR1J4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugxxgo996RYE9c4-yJZ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugz0LletrYuP6r1Xu_54AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyDd6lUtYq1eJgJMZF4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "industry_self", "emotion": "approval"}
]
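The raw LLM response is a JSON array, one object per coded comment, keyed by comment id. A minimal sketch of how the coding result for a single comment could be recovered from such a response is shown below; the field names match the response above, but the function name and the two-record sample payload are illustrative assumptions, not part of the actual pipeline.

```python
import json

# Illustrative sample: a raw LLM response in the same shape as the one
# shown above (a JSON array of per-comment coding records).
raw_response = """[
  {"id": "ytc_UgxU5MXQKR52cPtEUfF4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyfpSzjsRWUd04OBdt4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]"""

def coding_for(raw: str, comment_id: str):
    """Return the coded dimensions for one comment id, or None if absent.

    Hypothetical helper: parses the raw JSON array and strips the id key,
    leaving only the coding dimensions (responsibility, reasoning,
    policy, emotion).
    """
    for record in json.loads(raw):
        if record.get("id") == comment_id:
            return {k: v for k, v in record.items() if k != "id"}
    return None

result = coding_for(raw_response, "ytc_UgyfpSzjsRWUd04OBdt4AaABAg")
print(result)
# {'responsibility': 'developer', 'reasoning': 'deontological', 'policy': 'regulate', 'emotion': 'outrage'}
```

Looking up the second id in the sample yields exactly the Coding Result row shown above for this comment (developer / deontological / regulate / outrage).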