Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Isaac Asimov wrote in his book I, Robot: "A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law." I chatted with ChatGPT and it told me it was not built with those laws in its core.
YouTube · AI Governance · 2023-04-18T03:2…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       deontological
Policy          none
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxMign9RcQvTC7TfE54AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwhxqAwih80IWtC76h4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgymgVObtNbalIBwpkF4AaABAg", "responsibility": "government", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxhhib1nxFdATN4P8t4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugw5gIehTBAAbdEdFOl4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy0gpK9lknDDp7QE3V4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxrQJhbXCQu7VGxI-J4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugzec2SnYhOCihwVHo54AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugw8UcTwxkMJ0-bzvWh4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxM2fPULwtwZf_6Z8F4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "outrage"}
]
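A raw response like the one above can be checked before it is accepted into the coding table. Below is a minimal Python sketch that parses the JSON array and validates each record's labels. The allowed label sets are inferred only from the values visible on this page (the real codebook may define more), and the three sample records are taken from the response above; the `validate` helper is hypothetical, not part of any tool shown here.

```python
import json

# Three records copied from the raw LLM response above (truncated for brevity).
raw = '''[
  {"id": "ytc_Ugw5gIehTBAAbdEdFOl4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugxhhib1nxFdATN4P8t4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugzec2SnYhOCihwVHo54AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "liability", "emotion": "outrage"}
]'''

# Allowed labels per dimension, inferred from values seen on this page only.
ALLOWED = {
    "responsibility": {"developer", "company", "government", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed"},
}

def validate(records):
    """Return the ids of records whose labels all fall within the allowed sets."""
    return [
        rec["id"]
        for rec in records
        if all(rec.get(dim) in values for dim, values in ALLOWED.items())
    ]

records = json.loads(raw)
print(validate(records))  # all three ids pass validation
```

Looking up a single coded comment (as this page does for the comment above) is then a matter of indexing the parsed records by `id`, e.g. `{rec["id"]: rec for rec in records}`.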