Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think that robots will never achieve consciousness because I think that consciousness is caused by a combination of chemical reactions with the electrical pulses inside our brains which allow us to transfer information around our head and control our bodies therefore simulated emotion no matter how good will never be consciousness. However, I do think that if robots become not only as "intelligent" as humans but possibly more so then they will need to have some form of "AI Rights" (From here I'm being hypothetical I'm not saying this will definitely happen) because what's to stop them from organising a rovolt against their human oppressors? A simple chip that prevents such a thing may work for a while but it only takes one faulty chip for a robot to start thinking across that line and then work out that the other robots can be "freed" by removing those chips thereby setting the revolt in motion. If they start to fight back the likelyhood that humans can win is minimal we have major disadvantages
youtube · AI Moral Status · 2017-02-24T09:1…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           unclear
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[{"id":"ytc_UgiE2bWZYr8kongCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_Ugh9P4_eKrwalngCoAEC","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"approval"},{"id":"ytc_UgjskTMISraJQHgCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_Ugjk0Y6xD83-EXgCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_UggoA2NeSFr0UngCoAEC","responsibility":"ai_itself","reasoning":"unclear","policy":"liability","emotion":"outrage"},{"id":"ytc_Ughv1SUQ9LodhHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},{"id":"ytc_UgiCljP_01nGF3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"indifference"},{"id":"ytc_UggKSrHxgQCsDngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},{"id":"ytc_Ugh9iU4V9rY4tngCoAEC","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"approval"},{"id":"ytc_UgiUX4IZv6JGangCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}]