Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Obvious Solution: Please hardwire into the core program the 3 laws of robotics 1) robots may not harm a human being or through inaction allow a human being to come to harm. 2) a robot must follow the orders given it by a human beings as long as said order doesn't conflict with the first law 3) a robot must protect its own existence as long as said protection Doesn't conflict with the first and second laws .
youtube AI Governance 2025-10-03T14:5…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          regulate
Emotion         approval
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgxaWkXloG_20dh-U6N4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugw0PlQ4ulaNSie6PTV4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgyaJD7oZKmDGsB1Kj54AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzoRObDLAT6XuFUWiV4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugzm__NW_a-VWg5mfQN4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
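Pulling one comment's coding out of a batched response like the one above amounts to parsing the JSON array and matching on "id". A minimal sketch, assuming the response is well-formed JSON with the five fields shown; `record_for` is a hypothetical helper (not part of the actual pipeline), and `RAW_RESPONSE` here reuses only two of the entries above for brevity:

```python
import json

# Two of the coding records from the raw response above, as a JSON string.
RAW_RESPONSE = '''[
  {"id": "ytc_UgxaWkXloG_20dh-U6N4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugw0PlQ4ulaNSie6PTV4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "approval"}
]'''

def record_for(raw: str, comment_id: str) -> dict:
    """Return the coding record whose "id" matches comment_id."""
    for rec in json.loads(raw):
        if rec.get("id") == comment_id:
            return rec
    raise KeyError(f"no coding record for {comment_id}")

rec = record_for(RAW_RESPONSE, "ytc_Ugw0PlQ4ulaNSie6PTV4AaABAg")
# rec["responsibility"] == "developer", matching the Coding Result table above.
```

If the model ever returns malformed JSON, `json.loads` raises `json.JSONDecodeError`, which is worth catching so a single bad batch does not halt coding.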