Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I have 4 possible scenarios for how the human race will end up with AI; I'll list them from worst to best:
1. We end up like Skynet from The Terminator: AI eventually eradicates humanity.
2. We end up like The Matrix: humans are used as batteries by the machines they created, living fake, perfect lives while slowly dying without ever knowing their lives are fake.
3. We end up like WALL-E: we become so totally dependent on AI that our lives cannot function without it, from working to sleeping to eating.
4. We end up like Star Trek: AI and machines in service of humans, not the other way around. Technology has evolved so much that it has eradicated poverty, famine, wars, and the need for jobs, and people live like kings and queens. AI does all the boring work while humans are left to pursue their hobbies, explore, and go on adventures.
I really want the Star Trek ending, but knowing how greedy and evil this world is, I highly doubt it. The best case scenario is we end up like WALL-E.
youtube 2025-06-04T13:1… ♥ 3
Coding Result
Dimension: Value
Responsibility: ai_itself
Reasoning: consequentialist
Policy: none
Emotion: fear
Coded at: 2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugxq7JSuYgibsD9iwel4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyOLsNy1dtpDMrjGnR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxAsw6hLpLqZ4SBI4V4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxTqB9jfOoQhP8K2_Z4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw6N4LOH4H5rUJeg2Z4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx-1qU7tPGSmIEZOGJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzMIso8ZHZCWOkZ3uJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzzrHz10Wvm7Ee57Dt4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzHup-PPMiZJCHUh_54AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwY3KC6ezl7rGcjToF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
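The raw response above is a JSON array with one record per coded comment, keyed by comment id. A minimal sketch of how such a batch response could be parsed to recover the codes for a single comment (the `codes_for` helper and the shortened example id are hypothetical, not part of the coding pipeline):

```python
import json

# Hypothetical one-record example in the same shape as the raw LLM response above.
raw = ('[{"id": "ytc_example", "responsibility": "ai_itself", '
       '"reasoning": "consequentialist", "policy": "none", "emotion": "fear"}]')

def codes_for(raw_response, comment_id):
    """Return the coding dict for comment_id, or None if it was not coded."""
    for entry in json.loads(raw_response):
        if entry.get("id") == comment_id:
            return entry
    return None

print(codes_for(raw, "ytc_example")["emotion"])  # fear
```

Looking records up by id rather than by position guards against the model reordering or dropping comments in its response.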