Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As long as there are very VERY strict boundaries on self-awareness and self-induced-evolutions in AI, this really shouldn't be a problem. AI shouldn't feel entitled to rights and freedoms unless it is programmed to. We should absolutely keep this conversation going if god forbid an AI becomes independent and self-aware to the point of being recognizable as human, but it's unlikely unless some evil genius tries to spark a machine revolution like in the Matrix.
YouTube · AI Moral Status · 2017-02-23T18:3…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           regulate
Emotion          fear

Coded at: 2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgjfVoL_clccOHgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgiPI-YOPMt3eXgCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UghKTXEJdE2k03gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgjzKBW0d4zvsngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UggzWaALjepZ8HgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugi1-8Q9o8b7SHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UghZpuKPn1eld3gCoAEC","responsibility":"none","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgizDdmtVR9s7HgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugh9tM2DGn-Y5XgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Uggry-BHMQAuF3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
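The per-comment Coding Result shown above is one element of the batched JSON array the model returns, matched by comment id. A minimal sketch of that lookup (assuming the response is valid JSON and ids are unique; the two entries below are abbreviated from the response above):

```python
import json

# Batched model output: one coding object per comment id
# (abbreviated to two entries from the raw response above).
raw_response = '''
[
 {"id": "ytc_Ugh9tM2DGn-Y5XgCoAEC", "responsibility": "developer",
  "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
 {"id": "ytc_UgjfVoL_clccOHgCoAEC", "responsibility": "none",
  "reasoning": "deontological", "policy": "none", "emotion": "approval"}
]
'''

def codes_for(comment_id: str, response_text: str) -> dict:
    """Return the coding dict for one comment id; raises KeyError if absent."""
    records = {row["id"]: row for row in json.loads(response_text)}
    return records[comment_id]

# The comment displayed above resolves to its row in the batch:
print(codes_for("ytc_Ugh9tM2DGn-Y5XgCoAEC", raw_response))
```

Indexing by id rather than list position guards against the model reordering entries in its response.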