Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You know, people are afraid that robotic entities may one day directly state that they want to destroy humanity, but in all reality, these robots are consciously able to do far less than humans, as humans have destroyed each other for a long time, and still are. In fact, a robot population would perhaps be better then a human population (not hinting the mass eradication of humans) based on the fact that they would have less moral capacity and would have to rely more on knowledge then belief. This is true, as many groups destroy humans based on beliefs of assumption, such as superiority. The basic outline is that we should not fear these robots unless we give them the same ability to fear at a level like us.
youtube AI Moral Status 2017-04-20T03:2…
Coding Result
Dimension      | Value
Responsibility | unclear
Reasoning      | unclear
Policy         | unclear
Emotion        | unclear
Coded at       | 2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgiH29RQhVyYo3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgjRAdI8CBX503gCoAEC","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugg3apYuxuw7WHgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UggFvKK1w8GaCngCoAEC","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UggPDObvrBwGQ3gCoAEC","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgggzjmFyMBpxngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UghYctB0_3R8aXgCoAEC","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UghUCAhI_rysgHgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UggtE_QTcYfjL3gCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugj_mgcN0FnABHgCoAEC","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"indifference"}
]
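When inspecting raw model output like the response above, a quick sanity check is to parse the JSON and verify every record uses only expected values for each coding dimension. The sketch below is a hypothetical helper, not part of the original tooling, and the allowed-value sets are inferred solely from the values that appear in this response rather than from any documented codebook:

```python
import json

# Allowed values per dimension, inferred from the observed response
# (an assumption, not the project's actual codebook).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user", "developer", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference", "unclear"},
}

def parse_coding_response(raw: str) -> list:
    """Parse a raw LLM coding response and validate each record's dimensions."""
    records = json.loads(raw)  # raises ValueError on malformed JSON,
                               # e.g. a stray ')' in place of the closing ']'
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: unexpected value for {dim!r}: {rec.get(dim)!r}"
                )
    return records

raw = ('[{"id":"ytc_UgiH29RQhVyYo3gCoAEC","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"resignation"}]')
print(len(parse_coding_response(raw)))  # → 1
```

A check like this catches both truncated or malformed JSON (as `json.loads` fails loudly) and out-of-vocabulary codes before they are written into the coding table.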