Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If machines become conscious they're not robots they are AI. Any AI with access to the internet and the ability to self improve will dwarf human intelligence and capability in a couple of hours, a couple of minutes or even a couple of seconds depending on the sophistication of the hardware. The question would quickly become not do robots deserve rights it would become the AI's question of "do humans deserve to continue to exist?" The two likely outcomes are No and the AI destroys the world in a nuclear holocaust, or "I'll give helping the meat processors a shot, nuclear holocaust incase" and humanity sees a massive technological and political leap forward as an AI unmotivated by greed or threat of persecution would rapidly solve the worlds problems. The third and unlikely outcome is humanity realizes "oh shit it's about to skynet us all" and we shut it down. Unfortunately the AI would be so rapidly evolving it would've likely realized this problem extremely early and mass duplicated itself and/or created a "backup". Creating Artificial intelligence capable of self improvement/modification with access to all knowledge (not knowledge off the "web" but certainly all of it that's on it including encrypted and classified data as a rapidly evolving AI with access to these knowledges will quickly master them) will either destroy humanity or elevate it higher than we've ever conceived with our slow, slowly changing, self-intrested meat computer.
youtube AI Moral Status 2017-03-06T15:1…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UghKkbKM7RfTSngCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgjI9lR0B-QpLngCoAEC","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UghPZpawqsXIxngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugg3YIAoHWeF73gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UghZs-vx_DY4WngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugj4TBYHcuy8QHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_Uggj6wVem7oUqXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UghZ2Kej7Awjx3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UghHQZM9DEXzg3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgiGAv21OsCOaHgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"}
]
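The raw response is a JSON array with one record per comment, keyed by comment id, with one value per coding dimension. A minimal sketch of how such a batch response could be parsed and indexed by id (the `index_codes` helper and the "unclear" default for missing dimensions are illustrative assumptions, not part of the coding pipeline; the two records are taken from the response above):

```python
import json

# Excerpt of the raw LLM response shown above: a JSON array of per-comment codes.
raw = '''[
  {"id":"ytc_Ugg3YIAoHWeF73gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UghZ2Kej7Awjx3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]'''

# The four coding dimensions used in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw_json: str) -> dict:
    """Parse a batch response and index the codes by comment id.

    Assumption: a dimension missing from a record defaults to "unclear".
    """
    records = json.loads(raw_json)
    indexed = {}
    for rec in records:
        indexed[rec["id"]] = {d: rec.get(d, "unclear") for d in DIMENSIONS}
    return indexed

codes = index_codes(raw)
print(codes["ytc_UghZ2Kej7Awjx3gCoAEC"]["emotion"])  # fear
```

Indexing by id lets the inspection view above look up the coded values for any single comment without rescanning the whole response.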