Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
THE A.I. MASS PSYCHOSIS: Democrats are simply unable to give us a break-week without putting out new "big" concepts that keep people sleepless. Bill Gates, Geoffrey Hinton (known as "A.I. godfather") and other apparent EVILS are now "alerting" us about the horrendous consequences of AI released into our daily lives. WHO ASKED THEM to create AI, in the first place? It is only now when G. Hinton got a big paycheck (after he quit Google) and a highly-paid retirement stipend from Google Inc, he speaks out against the MENACE he was in charge of, in the first place. I personally cannot determine my attitude toward the AI risk. On one hand, I am happy that robots will replace humans, as humans are indeed very bad/dirty animals. Judges will be replaced by software-navigated robots and the rulings will be uncorrupt (hopefully). On the other hand, robots may chock a human in a street and get no punishment as the laws are for the humans, not for the robots. A school bus driver-robot may intentionally kill children. A robot architect may intentionally design a deficient bridge. A robot pilot may create an air traffic accident; the youtube vloggers are already enjoying the robot-farms that generate fraudulent income for them;, and the list goes on. The worst news is that the machines cannot be faulty or malicious; the people controlling and navigating those killer-robots remotely, are the EVIL/GUILTY party and if they are already on such a mission then they have measured in advance how to remain untraceable.
youtube AI Governance 2023-05-17T04:0… ♥ 1
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          liability
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyAQKeAnu26FoS6vm94AaABAg", "responsibility": "none",       "reasoning": "unclear",          "policy": "unclear",       "emotion": "mixed"},
  {"id": "ytc_Ugwd0j2ilQBxfxE2RK54AaABAg", "responsibility": "ai_itself",  "reasoning": "consequentialist", "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_UgxOjcxJvNYUXZJV8mt4AaABAg", "responsibility": "company",    "reasoning": "deontological",    "policy": "ban",           "emotion": "outrage"},
  {"id": "ytc_Ugw2E7dKiEG3VeuIWCJ4AaABAg", "responsibility": "distributed","reasoning": "consequentialist", "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_UgyhWKjKGpq_95E-TId4AaABAg", "responsibility": "ai_itself",  "reasoning": "consequentialist", "policy": "unclear",       "emotion": "fear"},
  {"id": "ytc_UgyPlaoiq5YNXQwT7Ed4AaABAg", "responsibility": "unclear",    "reasoning": "unclear",          "policy": "unclear",       "emotion": "mixed"},
  {"id": "ytc_UgzFlFLZ79UgiRT3XiB4AaABAg", "responsibility": "developer",  "reasoning": "deontological",    "policy": "liability",     "emotion": "outrage"},
  {"id": "ytc_UgxbYfVJTAqAmr1h-GR4AaABAg", "responsibility": "government", "reasoning": "mixed",            "policy": "liability",     "emotion": "outrage"},
  {"id": "ytc_UgzKahfy8nqRX1WezYB4AaABAg", "responsibility": "company",    "reasoning": "contractualist",   "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UgxmebnUe4_8svxRMkR4AaABAg", "responsibility": "company",    "reasoning": "virtue",           "policy": "none",          "emotion": "outrage"}
]
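As a minimal sketch of how a coded record can be recovered from a raw response like the one above: the model returns a JSON array with one object per comment id, so the per-comment dimensions can be looked up with a small parser. The helper name `codes_for` and the abridged two-record payload here are illustrative assumptions, not part of the tool itself; the field names and id format are taken from the response shown above.

```python
import json

# Abridged raw LLM response (two records copied from the array above;
# the real response carries one object per coded comment).
raw = '''
[
  {"id": "ytc_UgzFlFLZ79UgiRT3XiB4AaABAg",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyAQKeAnu26FoS6vm94AaABAg",
   "responsibility": "none", "reasoning": "unclear",
   "policy": "unclear", "emotion": "mixed"}
]
'''

def codes_for(comment_id: str, raw_json: str):
    """Return the coded dimensions for one comment id, or None if absent."""
    for record in json.loads(raw_json):
        if record["id"] == comment_id:
            return record
    return None

# Look up the record behind the Coding Result table shown above.
codes = codes_for("ytc_UgzFlFLZ79UgiRT3XiB4AaABAg", raw)
print(codes["responsibility"], codes["emotion"])  # developer outrage
```

An id with no matching record returns `None`, which is a convenient signal that the model skipped a comment and the batch should be re-coded.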