Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
So this relates closely to my own sci-fi novel, Synthesis. I've always thought that the whole robot apocalypse scenario was a little unimaginative, so what if AI starts to behave like an actual race of people? Now, why would AI do that? Well, it wouldn't if the reason we make intelligent technology is to make human lives easier. If it's just a matter of means to ends, of making tools, then the more intelligent that technology becomes, the more tools turn into slaves. Slavery has a history of culminating in violence. But if you're creating AI for its own sake, i.e., not to serve any purpose towards human beings but just to see if you can replicate human or humanlike intelligence, then it becomes a very interesting undertaking. This is the only way we should approach AI: we either make it for its own sake, or we don't make it at all. If humans create intelligent machines strictly as means to our ends, then we'll end up in a situation where we've delegated all responsibility to our technology and we'll be left only with the illusion of power.
Source: YouTube, 2015-07-30T11:2…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       mixed
Policy          unclear
Emotion         mixed

Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgjSWtWNngVsjHgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ughf1hqoTutyqngCoAEC","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_Ugi5WLDTl3NlX3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UggopqM2M_sbrHgCoAEC","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ughy4iVWinVhmXgCoAEC","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgiveUrRxNI0_ngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UghFF0zjhR0XSngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UghurI4Ad49yDHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgjXsosqvOJLJngCoAEC","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugi4o4GuPLIlcHgCoAEC","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"} ]