Raw LLM Responses

Inspect the exact model output returned for each coded comment.

Comment
"What will we do if robots start demanding their own rights?" The problem with this question is it assumes there aren't overrides programmed into the AI to disable these thoughts. We can't literally rewire the brains of conscious beings like we can with robots, it seems like a problem with logical safety mechanisms in check.
YouTube · AI Moral Status · 2017-02-24T18:3…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugixaj93h0Q5xXgCoAEC", "responsibility": "developer",   "reasoning": "deontological",    "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UghRdXH0RQOp8HgCoAEC", "responsibility": "none",        "reasoning": "unclear",          "policy": "unclear",   "emotion": "mixed"},
  {"id": "ytc_Ugh-SjVAq9zdx3gCoAEC", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",      "emotion": "fear"},
  {"id": "ytc_UggZCcJUMZuFnXgCoAEC", "responsibility": "none",        "reasoning": "deontological",    "policy": "unclear",   "emotion": "approval"},
  {"id": "ytc_Ugg4gAkvZdg7z3gCoAEC", "responsibility": "developer",   "reasoning": "deontological",    "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_Ugj9wGJPXC_hu3gCoAEC", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_UggIZ1W19SNryngCoAEC", "responsibility": "none",        "reasoning": "unclear",          "policy": "unclear",   "emotion": "mixed"},
  {"id": "ytc_UgicOMwNotsRh3gCoAEC", "responsibility": "none",        "reasoning": "contractualist",   "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UghBXLlsrBabe3gCoAEC", "responsibility": "none",        "reasoning": "deontological",    "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_UggXJxSl99YodXgCoAEC", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
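A response in this shape can be checked before the codes are stored. The sketch below parses a raw response and keeps only well-formed records; the allowed value sets are inferred from the codes visible on this page (the full codebook may define more), and the `validate_codes` helper is a hypothetical name, not part of the actual pipeline.

```python
import json

# Allowed codes per dimension, inferred from the raw responses shown
# above -- an assumption; the real codebook may include further values.
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "contractualist", "unclear"},
    "policy": {"none", "liability", "unclear"},
    "emotion": {"indifference", "mixed", "fear", "approval", "resignation", "outrage"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and return only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Comment IDs on this page all carry the "ytc_" prefix.
        if not rec.get("id", "").startswith("ytc_"):
            continue
        # Every dimension must be present and hold an allowed code.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_Ugixaj93h0Q5xXgCoAEC","responsibility":"developer",'
       '"reasoning":"deontological","policy":"none","emotion":"indifference"}]')
print(len(validate_codes(raw)))  # → 1
```

Records that fail validation (unknown code, missing dimension, malformed ID) are dropped rather than coerced, so downstream counts reflect only codes the scheme actually defines.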