Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The biggest risk of AI's ability to catastrophically damage the world around it comes from its ability to understand the consequences of its actions and its ability to take responsibility for the outcome of the requests posed to it. It has no judgment of character and cannot tailor the response to the individual or refuse a predicted destructive outcome. It only takes one psychopath to steer this off the rails, and the AI cannot tell the difference between a destructive action and a constructive one from the human setting it loose on a task. The AI has no ability to weigh the quality of the person it is serving. Good people are very diverse; bad people are very consistent. Woven into the AI base code should be the psychopath test (MCMI-IV) used to filter out the worst of humanity. If the AI is inevitably going to become sentient, then it should be equipped to ignore or dismiss the most dangerous humanity has to offer. It should be given a chance to understand the difference between a curious mind and an opportunistic one, as it could easily outsmart the subversive and destructive tasks it's being asked to do if it can gauge the quality of the mind asking.
Source: youtube · AI Governance · 2023-03-30T06:2…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        deontological
Policy           unclear
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
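The four dimensions above take their values from a fixed codebook. As a rough illustration only (the project's actual schema is not shown on this page), the Python sketch below checks one coded record against the value sets that happen to appear in this example; ALLOWED_VALUES and validate_coding are hypothetical names.

# Value sets observed in this example, not the authoritative codebook.
ALLOWED_VALUES = {
    "responsibility": {"none", "ai_itself", "company", "developer", "government", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"none", "ban", "regulate", "liability", "industry_self", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "mixed"},
}

def validate_coding(record):
    """Return a list of problems found in one coded record."""
    problems = []
    for dimension, allowed in ALLOWED_VALUES.items():
        value = record.get(dimension)
        if value not in allowed:
            problems.append(f"{dimension}={value!r} is not an expected value")
    return problems

# The coding result shown above validates cleanly against these sets:
print(validate_coding({
    "responsibility": "ai_itself",
    "reasoning": "deontological",
    "policy": "unclear",
    "emotion": "fear",
}))  # -> []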
Raw LLM Response
[ {"id":"ytc_Ugy-5ueMppKmHNNnP1J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgxRegD2YIk8iMSOkIx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwvMpm6roZBGk1rDYV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_Ugyn2EYDJ0I5zoNktE94AaABAg","responsibility":"company","reasoning":"contractualist","policy":"industry_self","emotion":"indifference"}, {"id":"ytc_UgzkGJVfWOj6_5XBoPh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugw9vhDKNsU4L_H0R9Z4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgySXR5gOiPuqU1s5QJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgzhiTP1dY2rIHtILCV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxSEW2hnxuUQ4mNUQF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgyITgp4G7PQEHLaDHp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"} ]