Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A good example is the game, Detroit: Become Human. when AI become self conscious to the point of being almost human like they will need some form of rights, maybe different rights, but some none the less. but that is assuming they become almost human. the problem with AI having feelings we couldn't program them with true feelings, why? because feelings are tied up in our consciousness, which some would debate it is a soul, and other simply neural chemical responses. whatever the case may be, they only "feelings" would be preprogrammed responses. take siri for example, if I "insult" siri she will say "ouch" or "that wasn't nice" because she has feelings? no, if I say the words "hey siri, you suck!" she will search her database not too differently than this, inquiry/%you_suck%/cmd_line.624//run (yes ik that's not programming language). and when we do get AI that is self conscious I believe it will be purpose built tech, and not your refrigerator. but as the old adage goes, "We'll cross that bridge when we get there."
Source: YouTube · Video: AI Moral Status · Posted 2017-02-24T16:1… · 3 likes
Coding Result
Dimension        Value
Responsibility   none
Reasoning        deontological
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_UggJIup0iIlZVXgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugiqorz5t1QhRHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"unclear"}, {"id":"ytc_UghZ5Le5QNo9W3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugj2YPylz7gmH3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgiIQ5CNwZV0VXgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"ytc_UggW5A_hvTuZv3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugj0GWYELnqn_HgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"unclear"}, {"id":"ytc_Ugi37YvVMkNA3ngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgjFDOQXOgm_-HgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgjVqIuTCm8kfngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"} ]