Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Moore’s law says that computers increase their abilities every two years and that growth is exponential. So wow, this increase in collective robot conscience by virtue of the “cloud” with exponential growth is scary. It will be a wonderful thing as the robots become more human with the good qualities that some of us possess. Conversely, if the path for our evil traits become dominate, we are doomed as the robots will take over without feelings of remorse, benevolence, or kindness. Moreover, if robots are allowed to increase their knowledge on their own, they will find 40 to 50% of human life are either ignorant or stupid. If so, they will not see the need for them to exist. Therefore, look for the elimination of perhaps half the Earths population. Maybe, the new world order with come from artificial intelligence. That may not be so bad after all. However, then the ANDROIDS as a collective will systematically start elimination the superfluous humans, the sick,the weak, and the unproductive members of the human race. Inevitably, our resistance will be futile as they will have the AI collect thought and fingers in everything across the globe. We are doomed.
youtube AI Moral Status 2023-09-14T23:3…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[{"id":"ytc_UgyFII9W5tl57UO_M7Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgyaKzhTUVB6Argmb954AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
 {"id":"ytc_UgzjuQxyodUycxmhxlt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgzM7t7W4FmjWA2wt1F4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgwUGYrkYg35ysaNRtJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgyTBMRRGzxoko9WNVF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgxxBzBMJmlJoLeGnrd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgwrZIRo7AL95oxbn_R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_UgywG5ub2vv_XgxXK0V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_Ugx74Eci-WdIqJZVSV94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}]
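The raw response above is a JSON array with one record per comment, each carrying the four coded dimensions (responsibility, reasoning, policy, emotion) keyed by comment id. A minimal sketch of how such a response could be parsed and checked for completeness (the `index_codes` helper and the two-record excerpt are illustrative, not part of the tool):

```python
import json

# Two-record excerpt of the raw LLM response shown above, in the same shape:
# a JSON array of per-comment coding records.
raw = '''[
 {"id":"ytc_UgwrZIRo7AL95oxbn_R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_UgyFII9W5tl57UO_M7Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]'''

# The four coding dimensions reported in the table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw_json: str) -> dict:
    """Map each comment id to its coded dimensions, rejecting incomplete records."""
    records = json.loads(raw_json)
    out = {}
    for rec in records:
        missing = [d for d in DIMENSIONS if d not in rec]
        if missing:
            raise ValueError(f"record {rec.get('id')!r} is missing {missing}")
        out[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return out

codes = index_codes(raw)
# Look up the coding for the comment shown on this page.
print(codes["ytc_UgwrZIRo7AL95oxbn_R4AaABAg"]["emotion"])  # fear
```

Indexing by comment id makes it cheap to pair each record in the batch response back to the comment it codes, which is what the "Coding Result" table above does for a single comment.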