Raw LLM Responses

This section shows the exact model output behind each coded comment.

Comment
I have a solution of sorts. The human brain is made of 3 parts that make us what and who we are (I'm speaking metaphorically here):

- the lizard brain, containing the SUPER basic stuff like motor control and environmental responses (fight or flight)
- the mammal brain, which has higher-capacity problem solving and social skills
- the human brain, the extra bits that expand on the mammal brain, giving it the ability to form abstract concepts and predictive qualities

A lizard AI is like what we have now: a basic intelligence that responds to the specific stimuli it is programmed to react to. A mammal AI is far more complex: capable of complex calculations and networking, able to recognize patterns and to realize that working with another computer is better than doing it alone. A human AI, or sapient AI, is conscious, self-aware, and capable of reasoning beyond its programming, allowing it to learn far more than what it can recognize through patterns and to expand from those thoughts to visualize something it has not experienced before.

I think that if we don't go beyond a mammal AI (an intelligence capable of complex calculated actions but incapable of rationalizing its existence, which keeps it within the boundaries of its programming), it wouldn't necessarily need "rights"; but a totally self-aware, empathic AI would definitely be suitable for sapient rights.
Source: YouTube · "AI Moral Status" · 2017-02-24T16:5…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          indifference
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_Ugi0hj0S4tOJK3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"}, {"id":"ytc_Ughn2l5l5nUY93gCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UghjyLhFY0N9d3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugj2Jo_uYDf2v3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgjWcRsFfwSE13gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UggSkZsWg39NxXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugj0QLN4cIFMF3gCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UggPezFG5S3VS3gCoAEC","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugj22OTCNxaAhHgCoAEC","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"unclear"}, {"id":"ytc_Ugg7RpJojOWA93gCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"} ]