Raw LLM Responses

Inspect the exact model output behind any coded comment.

Comment
Didn't Isaac Asimov solve most of these problems in 1950 with the Three Laws of Robotics? 1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
youtube · AI Governance · 2024-02-01T07:5…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           regulate
Emotion          approval
Coded at         2026-04-26T23:09:12.988011
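Each coding result carries the same four dimensions plus a timestamp. Below is a minimal sketch of that record as a typed structure; the field names come from the table above, and the value hints in the comments reflect only what is observed in this batch, not necessarily the full codebook.

from dataclasses import dataclass

@dataclass
class CodingResult:
    """One coded comment. Value hints reflect only this batch, not the full codebook."""
    responsibility: str  # observed: developer, ai_itself, distributed, none
    reasoning: str       # observed: deontological, consequentialist, contractualist, virtue, mixed
    policy: str          # observed: regulate, ban, liability, none
    emotion: str         # observed: approval, fear, outrage, indifference, mixed
    coded_at: str        # ISO 8601 timestamp, e.g. "2026-04-26T23:09:12.988011"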
Raw LLM Response
[ {"id":"ytc_UgxkMpXwzOgJu0sdf2p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgxSaXpuXYcDvcypYpV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgxoH9ulG_duymgDFP54AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgwLZJn06iPrZWB0T9p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugw57Ow55EplBC_kkX14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgxegNf6KdPZOMhwbjR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugz7uQJH78vv70ptwUJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgyhjjCTQ6ibb3ckYMh4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"liability","emotion":"mixed"}, {"id":"ytc_UgzPFRRUaMj6BMol8G14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgyzzTBGOiOKa-3QnsJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"} ]