Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Did nobody read Asimov? 1) A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. 3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Source: youtube | AI Harm Incident | 2025-07-24T01:4…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           regulate
Emotion          approval
Coded at         2026-04-27T06:26:44.938723
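For context, one plausible in-memory shape for a result row like this is sketched below. The class and field names are hypothetical; only the values come from the result shown above.

import os  # stdlib only; nothing beyond the standard library is assumed
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class CodingResult:
    comment_id: str
    responsibility: str   # who is blamed, e.g. "developer"
    reasoning: str        # moral frame, e.g. "deontological"
    policy: str           # preferred remedy, e.g. "regulate"
    emotion: str          # expressed affect, e.g. "approval"
    coded_at: datetime    # when the LLM coding ran

result = CodingResult(
    comment_id="ytc_UgzHN0aHaovq7eeuyCV4AaABAg",
    responsibility="developer",
    reasoning="deontological",
    policy="regulate",
    emotion="approval",
    coded_at=datetime.fromisoformat("2026-04-27T06:26:44.938723"),
)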
Raw LLM Response
[ {"id":"ytc_UgzHN0aHaovq7eeuyCV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgzXGDEgrTHvwu0uLmJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_UgzVgOy5YXG04NKcc954AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"resignation"}, {"id":"ytc_UgxkKOodE7wVmT03RJV4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgxAmjXa6TmFc0mYUnJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgzDnmttbqB9oF5m0uh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"ytc_UgxcukoamWoMCCWN0Eh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwLYHV3zssnKAr7Kyt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}, {"id":"ytc_Ugx1cjMRO8w7VjwC8T54AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxsjlRRJmn5aUpBubd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"} ]