Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
on giving robots negative "stimulus" to coerce them to work for economic profit: wouldn't it be much more efficient, and thusly more cost effective, to make a robot that just does work, rather than one that must be convinced to work? besides, you said the only real reason we'd end up with AIs that feel unpleasant emotions is if we make an AI that was capable of making AIs more complex than itself, so why would we use this "inhumane" AI for our labor operations anyway?
youtube AI Moral Status 2017-02-24T03:2…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          liability
Emotion         indifference
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UghyXzu2XC_913gCoAEC", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgivGeenbgAVsHgCoAEC", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugj_4LAWchwUNHgCoAEC", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgjOZFi2KQgtF3gCoAEC", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UggnIwBEucuEIngCoAEC", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Uggf7zVJ7GJbHHgCoAEC", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgiC4plFAWxImHgCoAEC", "responsibility": "none", "reasoning": "contractualist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UggzbpDGUt7ibHgCoAEC", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UggFuDC5x01ktHgCoAEC", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_UgjFdWWtlSXv_XgCoAEC", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "resignation"}
]
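A raw response like the one above can be parsed into per-comment codes and sanity-checked before use. The sketch below is a minimal, hypothetical parser (`parse_raw_response` is not part of any pipeline shown here): the allowed category sets are inferred only from the values visible in this response, so the real codebook may define more.

```python
import json

# Allowed values per coding dimension, inferred from the response shown
# above; the actual codebook may include additional categories.
ALLOWED = {
    "responsibility": {"none", "developer", "distributed"},
    "reasoning": {"consequentialist", "deontological", "contractualist",
                  "virtue", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"indifference", "fear", "approval", "mixed", "resignation"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}, dropping any
    record whose values fall outside the allowed categories."""
    coded = {}
    for record in json.loads(raw):
        codes = {dim: record.get(dim) for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[record["id"]] = codes
    return coded
```

For example, `parse_raw_response(raw)["ytc_UggFuDC5x01ktHgCoAEC"]` would return the codes shown in the coding-result table above, and any record with an out-of-vocabulary value is silently skipped rather than written through to the results.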