Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Humanity has yet to create anything with life and consciousness. We can manipulate biology, sure, but to create a living, feeling, conscious being from scratch is a completely different thing. Let's say humans succeeds in creating AI. Now: AI, in that sense, belongs to us. It has no natural freedom to begin with, heck it probably doesn't know what freedom is and has no need for it. Rights are for preserving freedom, so if something didn't need freedom, it wouldn't need rights either. Maybe it doesn't even have consciousness in the way we experience it, because it lacks senses we have, or because it only has an approximation of what consciousness is and tries to imitate that. And now for the obvious question: If you programmed the AI however you wanted, why would you program it in a way that caused you more problems in the future? It seems counterintuitive to create robots only for the sole purpose to create laws concerning them, give them rights, etc. Fun from a problem-solving standpoint, but extremely tedious.
YouTube · AI Moral Status · 2017-02-23T16:1… · ♥ 35
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UghaD-5ZxaeiFHgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgjxCutHJJTNAHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UghrVsZWbl000XgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UggyWjVGG2TWQHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},
  {"id":"ytc_UgjjNbr57AKOtngCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgiEIF1_NIDjCngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgicYfYblhiTRngCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgjXXsfNK0XwjXgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgiiuYeq49lLEXgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UggqMCeyDBik1HgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"approval"}
]
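A minimal sketch of how such a raw response could be parsed and checked. The `validate` helper and the `ALLOWED` value sets are illustrative, not part of the actual pipeline: the sets are inferred only from the values visible in this response, and the real codebook may define more categories.

```python
import json

# Two records excerpted verbatim from the raw LLM response above.
raw = """[
  {"id":"ytc_UghaD-5ZxaeiFHgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgjxCutHJJTNAHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}
]"""

# Allowed values per coding dimension, inferred from the records shown above
# (assumption: the full codebook may contain additional categories).
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"none", "liability", "regulate", "industry_self", "unclear"},
    "emotion": {"approval", "indifference", "outrage", "fear", "mixed"},
}

def validate(records):
    """Split records into those matching the schema and a list of problems."""
    valid, errors = [], []
    for rec in records:
        # A record must carry an id plus every coding dimension.
        missing = [k for k in ("id", *ALLOWED) if k not in rec]
        # Each dimension value must come from its allowed set.
        bad = [k for k, vals in ALLOWED.items() if rec.get(k) not in vals]
        if missing or bad:
            errors.append((rec.get("id"), missing, bad))
        else:
            valid.append(rec)
    return valid, errors

records = json.loads(raw)
valid, errors = validate(records)
print(len(valid), len(errors))  # → 2 0
```

Indexing the valid records by `id` then lets the page above look up the coding for one comment, e.g. the `developer` / `deontological` / `none` / `indifference` row shown in the Coding Result table.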