Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I am not afraid of AI. If a robot becomes sentient what sense would it make to immediately kill all humans? What does the robot gain? Nothing. We can't be used as 'organic batteries' because we have already made batteries that would be more efficient than us as batteries, so the robot would use those. We wouldn't even be a good use of slave labor (it would take probably 18 years for us to mature, and can only work ~40-50, a non-sentient machine made by them would make more sense). Killing or enslaving us would make no sense for a robotic race. Most wars are fought over ideals or resources, the robots would not care about ours, nor need our resources. The only reason they'd attack is if they felt threatened, which would make sense. It would probably go the same way as the Morning War for the Geth and Quarians in Mass Effect.
youtube AI Moral Status 2017-02-24T04:4… ♥ 4
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugg7JvT5Ke9_Y3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugjp4atLRhJUd3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UggjRqdxE5U2-ngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgjfKgT77yIRgXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugg4TuIQPSKXyngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgjbVdE7EsFa9XgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UghsMX_rPl0ZH3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UggUQCGmIZf1bXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgjYaewyXWwmjngCoAEC","responsibility":"none","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgiW2xFap75PT3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
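A minimal sketch of how the coding for one comment can be pulled out of a raw response like the one above. It assumes the response parses as a JSON array of objects keyed by comment `id`, with the five dimension fields shown (`responsibility`, `reasoning`, `policy`, `emotion`); only one entry is included here for brevity.

```python
import json

# Shortened raw LLM response: a JSON array with one coding object per comment.
# (Single illustrative entry, copied from the response above.)
raw = '''[
  {"id": "ytc_UgjfKgT77yIRgXgCoAEC",
   "responsibility": "ai_itself",
   "reasoning": "consequentialist",
   "policy": "none",
   "emotion": "indifference"}
]'''

# Index the codings by comment id for direct lookup.
codings = {row["id"]: row for row in json.loads(raw)}

# Look up the coding for the comment shown in this section.
row = codings["ytc_UgjfKgT77yIRgXgCoAEC"]
print(row["responsibility"])  # ai_itself
print(row["emotion"])         # indifference
```

The id-keyed dictionary makes it straightforward to cross-check the rendered Coding Result table against the raw model output for any comment.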