Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
11:32 But how would a robot know the difference between what's legal and what's right? So unless someone writes into the code what's right... We should know that "legal" is only a temporary value, since it changes as democracy decides, but what's right should be constant. Here there are three options: 1) The ones who made the machine say what's right; you can imagine who wins. 2) The machine has a learning algorithm to determine what's right against a base of reference, like minimizing all human losses regardless of side. Well, you can imagine what will happen if the machine also has an algorithm to avoid situations where it is destroyed, or if it kills the leader of its own side because its calculations showed that would lead to the fewest lives lost; you can imagine what they will do to the machine. 3) The machine never solves it, since taking a decision is what makes everyone die, so it remains still and calm until humans decide to mess everything up. Then someone will blame the machine, and that is the moment when the machine knows what the real problem is. Whatever the situation, there is no machine that allows both of us to win; not a machine programmed by us.
youtube 2020-02-28T16:3…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           unclear
Emotion          unclear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugxoi0KBtGEVlXyC8Lx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugz0sf3eCtPJSYBMMnR4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_UgxYAZk0FGBxFlXuZNV4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgxHkZJmgk0O7xfQ7hh4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyImEvTVCY3_SQ3_jV4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugw7XnF6ZK8YSyvsWJl4AaABAg", "responsibility": "government", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxGxueuVvqhJ6YSt4h4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_UgyB5NKW6c3lExRxiCV4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgzJqv6XH3KwRnJsZFF4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzmLEGxKDfN7DGGvLZ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"}
]
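The raw response above is a JSON array with one record per coded comment. A minimal sketch of how such a batch could be parsed and indexed by comment id, with each dimension checked against an allowed-value set (the `SCHEMA` sets below are inferred from the visible responses and are an assumption, not the project's actual codebook; out-of-schema values fall back to "unclear"):

```python
import json

# Assumed allowed values per dimension, inferred from the responses shown
# above; the real codebook may define additional categories.
SCHEMA = {
    "responsibility": {"developer", "company", "government", "user",
                       "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "liability", "industry_self", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed",
                "unclear"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: coded dimensions},
    coercing any value outside the assumed schema to 'unclear'."""
    records = {}
    for rec in json.loads(raw):
        cleaned = {}
        for dim, allowed in SCHEMA.items():
            value = rec.get(dim, "unclear")
            cleaned[dim] = value if value in allowed else "unclear"
        records[rec["id"]] = cleaned
    return records

# Example with a single record from the batch above.
raw = ('[{"id":"ytc_Ugz0sf3eCtPJSYBMMnR4AaABAg",'
       '"responsibility":"developer","reasoning":"deontological",'
       '"policy":"unclear","emotion":"unclear"}]')
codes = parse_batch(raw)
print(codes["ytc_Ugz0sf3eCtPJSYBMMnR4AaABAg"]["responsibility"])  # developer
```

Coercing unknown values to "unclear" rather than raising keeps a single malformed record from failing the whole batch, which matches how the table above reports low-confidence dimensions.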