Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I have two major concerns. First, is related to the movie The Terminator. We must not allow AI to control our military, to choose our targets or to launch nuclear missiles. Second is that, AI currently has no morality. It will murder a human being (or any other living creature) and not bat an eye. AI has no tears to shed. Therefore, we need something like Isaac Asimov's 3 Laws of Robotics, and it needs to be hard-wired into every single AI and they are:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
youtube · AI Jobs · 2026-03-24T18:5…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          regulate
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugz8LvPqZgWNMVsUE1R4AaABAg", "responsibility": "developer",   "reasoning": "deontological",    "policy": "regulate",  "emotion": "fear"},
  {"id": "ytc_Ugz9J4E4DPwym1U7vB94AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgzaQENGYvba-Svl4Ap4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_UgytCtUGea5JW-tfs754AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_Ugz4FrIPQ9JLMvsoLYB4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",      "emotion": "approval"},
  {"id": "ytc_UgxwbabK3OsC7-_xyot4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_UgxvMzh5GGnRmF5HN514AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_Ugzodc73cYqyeUxNk0V4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwDQA7_zwCuTl0s5XJ4AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwNU0FeXqqYftFZ5FJ4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "none",      "emotion": "resignation"}
]
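The raw response above is a JSON array with one coded record per comment, keyed by comment id across four dimensions plus the id. A minimal Python sketch of how such a batch might be parsed and sanity-checked; note that the allowed category values below are inferred from this one sample batch, not taken from the tool's actual codebook, so treat them as an assumption:

```python
import json
from collections import Counter

# Allowed values per dimension, inferred from this sample batch only
# (assumption: the real codebook may define additional categories).
ALLOWED = {
    "responsibility": {"developer", "company", "distributed", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"fear", "outrage", "approval", "mixed", "indifference", "resignation"},
}

# Two records copied verbatim from the raw response above.
raw = '''[
  {"id":"ytc_Ugz8LvPqZgWNMVsUE1R4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgytCtUGea5JW-tfs754AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]'''

def validate(records):
    """Split records into (valid, errors) by checking every coded dimension."""
    valid, errors = [], []
    for rec in records:
        bad = [dim for dim in ALLOWED if rec.get(dim) not in ALLOWED[dim]]
        if bad:
            errors.append((rec.get("id"), bad))  # id plus its offending dimensions
        else:
            valid.append(rec)
    return valid, errors

records = json.loads(raw)
valid, errors = validate(records)
print(len(valid), len(errors))                    # 2 0
print(Counter(r["emotion"] for r in valid))       # Counter({'fear': 1, 'outrage': 1})
```

Validating against a fixed vocabulary catches the most common LLM coding failure: the model emitting a value outside the codebook (e.g. "anger" instead of "outrage"), which would otherwise silently skew downstream tallies.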