Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think that AI experiments should stop if their goals involve robots learning about our emotions and decision-making. If you make AI only to solve mathematical problems (say, to create the next fully functional fusion or matter/antimatter reactor to benefit mankind) without knowledge of human emotions and decision-making, fine. But we as human beings are still unstable and not as intelligent as we perceive ourselves to be as a collective, and if we are to be judged by other intelligent sentient beings on the sum of everything we have said and done throughout human history, this can be very dangerous. We, so far the most intelligent species on this particular planet, have used our knowledge and power to manipulate, test, exploit, and cage other species of lower intelligence that are not a threat to us, and to control or destroy those that are. This will be the clear end result for us humans if AI robots are to "learn and model their thoughts from us." My honest opinion is that until we can work out all of our own kinks and faults, we should not create a race of sentient AI lifeforms that will be more intelligent, far stronger, and able to reproduce at far greater scale and speed than us.
Source: youtube · Video: AI Moral Status · Posted: 2020-06-10T03:0…
Coding Result
Dimension        Value
---------        -----
Responsibility   developer
Reasoning        deontological
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
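
The label sets below are inferred only from the values that actually appear in this section's raw response; the real codebook may allow more. As a minimal sketch under that assumption, one coded record can be checked against those observed sets (`ALLOWED` and `validate` are illustrative names, not part of the pipeline):

```python
# Minimal validation sketch. The allowed values are inferred from the
# labels observed in this section's output; the actual codebook may
# define additional codes.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "liability", "ban", "industry_self", "none"},
    "emotion": {"fear", "outrage", "mixed", "indifference", "approval", "resignation"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems with one coded record (empty if valid)."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"unexpected {dim} value: {value!r}")
    return problems
```

A check like this is useful precisely because the values come from free-form model output rather than a constrained decoder, so off-codebook labels can slip in.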
Raw LLM Response
[ {"id":"ytc_Ugx1cBIxtZ0K6ntiT9F4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgxWksYQegmFYDnoDAB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgyZtk_aHWPsFKw_xBJ4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"industry_self","emotion":"mixed"}, {"id":"ytc_UgxDiX1z9mssTd3hU_R4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxCUXPijU25S6BT9U94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgwW1YxRNRsLY1ldJb14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"resignation"}, {"id":"ytc_UgwBIUeXbNvCZFEysUB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgzzDVP7OIvyM2vAX194AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugwl0nMb7P2KcWijvJ94AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugya-r-k7yRuYKO2Ng14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"} ]