Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI is not smarter than the human race because they are a copy of mankind that knows no more and no less than man and can fuck up much faster than mankind, too. So, the real danger of AI is that it will be just like mankind that'll take over many jobs and industries to where all of mankind can retire from work and replace a cash economy with free food, drinks and many products and inventions made by the labors of AI to free mankind into colonizing other planets as the next worldwide goal as the positive things AI can do. However, since AI are often programmed after the beliefs and evil thoughts of mankind there will also be the problem of AIs becoming hitmen for gangs and organized crime, robbing banks and people in behalf of criminals and will carry out religious terrorism such as crashing airplanes into buildings and be walking suicide bombers with nukes that can blow up entire cities by the command of religious terrorist organizations especially Christianity and Islam. But, don't worry, because nuclear war, which is just around the corner, will prevent all the above. Have a nice day.
youtube AI Governance 2023-07-07T05:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  distributed
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugzz_K2JA371d5OzR0d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxHUhmSSlGuGXMO_OF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxzCAtI4DemFEhBxpR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxpxhv47ew_MiSYJvV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxjgMmp7SKCQ2aF9lh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugy77Rty_jNaXXLsY254AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxwbjUz57RXVXPZ1-h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugzdgaui4XT0yKGvB2d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugybm9jy3B7xF19CWRN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugzx0L40IZfpO50bAFx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
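A raw response like the one above has to be parsed and validated before the per-comment codings can be trusted. The sketch below shows one minimal way to do that in Python: it parses the JSON array and keeps only records whose values fall inside an allowed-value set per dimension. The `ALLOWED` sets are an assumption inferred from the values visible in this raw response; the project's actual codebook may define more categories, and `parse_codings` is a hypothetical helper name, not part of any pipeline shown here.

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred from the
# values that appear in the raw response above; the real codebook may
# contain additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "none"},
    "emotion": {"fear", "approval", "indifference", "mixed",
                "resignation", "outrage"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: coding}.

    Records missing an id, missing a dimension, or using a value
    outside the codebook are dropped rather than silently kept.
    """
    valid = {}
    for rec in json.loads(raw):
        cid = rec.get("id")
        if cid and all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid[cid] = {dim: rec[dim] for dim in ALLOWED}
    return valid

# Example: the record that matches the coding result shown above.
raw = ('[{"id":"ytc_Ugxpxhv47ew_MiSYJvV4AaABAg",'
       '"responsibility":"distributed","reasoning":"consequentialist",'
       '"policy":"none","emotion":"approval"}]')
codings = parse_codings(raw)
print(codings["ytc_Ugxpxhv47ew_MiSYJvV4AaABAg"]["emotion"])  # approval
```

Filtering on a fixed value set matters here because LLM coders occasionally emit labels outside the codebook; rejecting those records up front keeps downstream tallies clean.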