Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
How do you regulate an AI if in it's forward thinking concluds that humans need to be eradicated. How do you get AI to follow the rules?
Source: YouTube · AI Governance · 2023-04-18T14:3…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugw7UHgWBr872LE7PYF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugy5cZpxWzQKZPo17cl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxtSGacH95xCZhGlzh4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyNjmwGYQnTu6jg86p4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgyL2-ibBeu8QQrFVpN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyhMYEDPdQv7gqg7ux4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugyt3tQnHkN_-V4q1CJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwBQxlMHcpxY3KMOs54AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzCUtTA61F6Fujw05R4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwvKbr7Q0rvY73z_8Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
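The raw response is a JSON array of per-comment codings keyed by comment ID. A minimal sketch of how such a batch response might be parsed and validated against the label vocabulary visible in the examples above (the helper name and the vocabulary sets are assumptions for illustration, not part of the tool):

```python
import json

# Allowed values per coding dimension (assumed from the labels seen above).
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "government", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference", "mixed", "unclear"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM batch-coding response into {comment_id: codes},
    dropping any record whose value falls outside the label vocabulary."""
    result = {}
    for record in json.loads(raw):
        codes = {dim: record.get(dim) for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            result[record["id"]] = codes
    return result

# Usage: look up one comment's coding from a (shortened) raw response.
raw = ('[{"id":"ytc_UgwvKbr7Q0rvY73z_8Z4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"liability","emotion":"fear"}]')
codes = parse_coding_response(raw)["ytc_UgwvKbr7Q0rvY73z_8Z4AaABAg"]
```

Filtering out-of-vocabulary values is a design choice: a model occasionally invents labels, and dropping those records keeps downstream tallies clean at the cost of losing a few codings.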