Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Here how robots would work if made with personalities similar to humans:
Robots must get rewarded for their work, be it battery charge, shelter, oil, or money.
Destroying a robot without the owner's or employer's permission is considered vandilism and the criminal must either do community service, pay for repairs, and/or face 5 years jailtime.
Easily repairable robots will be used for wars instead of human soldiers
Using robots for terrorism is a war crime, and punishable by international law.
Robots cannot be equipped with suicide weaponry.
Unless it was at the robot's own will, any crimes commited by a robot would have both the robot and owner convicted
Robots cannot revolt against humans and vice versa
Hate groups against robots will be dealt with like any other hate group
Robots can vote, but it must be an independent decision. It is illegal for the owner or employer of the robot to order it to vote for a specific person.
youtube AI Moral Status 2017-09-27T02:1…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       contractualist
Policy          regulate
Emotion         approval
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgwhfUbtpxRpFCg2RbF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
  {"id":"ytc_Ugxw29hCRkRXSp_1xQl4AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugzxs_MaS9tOuE-ofU94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzZuMj4n3MDIkIG1ql4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxCNly2eYnFv9N7GZB4AaABAg","responsibility":"user","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyiQyxfa4atkYseCmx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugx7Lk0ES4Dp34m9F2h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyVymjGfAAf9ZSK9w14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz41nduqULPOKslKst4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgyL7jLrsf5hxsujVlZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
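The raw response is a JSON array with one code object per comment. A minimal sketch of how such output might be parsed and validated before use; note that the allowed category values below are inferred from the codes visible in this sample, not from an official codebook, and the helper name `parse_codes` is hypothetical:

```python
import json

# Allowed values per dimension, inferred from this sample only;
# a real codebook may define additional categories.
SCHEMA = {
    "responsibility": {"user", "developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"approval", "outrage", "fear", "mixed", "indifference", "resignation"},
}

def parse_codes(raw: str) -> list:
    """Parse a raw LLM response, keeping only rows whose values fit the schema."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items())
    ]

raw = (
    '[{"id":"ytc_UgxCNly2eYnFv9N7GZB4AaABAg","responsibility":"user",'
    '"reasoning":"contractualist","policy":"regulate","emotion":"approval"}]'
)
codes = parse_codes(raw)
print(codes[0]["policy"])  # regulate
```

Filtering rather than raising keeps one malformed row from discarding an entire batch; rejected rows could instead be logged and re-queued for recoding.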