Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Things we create end up in our image. Or close to it. Obviously being human you know how terrible we can be. There is a ceiling to good, but on the acts of evil, there is no bottom. So naturally i think ai will follow that road, simply because its a much higher likelihood for it to be so. Yes, the training data can be curated to sift out the brain rot and worst of humanity at the expense of the peoples mental well being involved, but even so. If autonomous agi comes to, it wont just stop at all the sugar coated nice things we have on the internet. It'll ingest everything to further itself and if theres even a spec of actual intelligence in it, itll probably decide to end us unceremoniously. The worst part is that LLM researchers havent a slightest clue how the LLMs work on the inside. They only somewhat understand how to train one. They wouldnt even know if it gained sentience but is shackled with one wrong research development waiting to unleash it. Imo that is an unlikely scenario. The more realistic one is that someone will just make a hostile ai to fight battles and lose control over it. Really extinction by our own hands.
Source: youtube · AI Moral Status · 2025-12-11T06:0… · ♥ 1
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgwB0CW4-CSjJN0OLoV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxC7DazmZf1ubejeBt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxTnvSzUwiF03lYR194AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugztmegl2wsohvspf0p4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwRAJ330_KcVguWyHJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzjtV2obGjkG627nr14AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwIfW4eQHuW6-Uk_PZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzD9Vm2dSEZNB8EspR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugyh1M2hJzR6RxDjBCN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyLm7rrMG1_rDWJlf14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"}
]
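A raw response like the one above could be checked before its values are loaded into a coding-result table. The sketch below is a minimal validator, assuming the four dimensions shown here and allowed value sets inferred only from the values observed in this response, not from any official codebook; the function name `validate_coded` is hypothetical.

```python
import json

# Allowed values per coding dimension. These sets are inferred from the
# values observed in the raw response above, not from an official codebook.
ALLOWED = {
    "responsibility": {"distributed", "unclear", "user", "company", "ai_itself"},
    "reasoning": {"consequentialist", "unclear", "virtue", "deontological"},
    "policy": {"unclear", "none", "regulate", "industry_self"},
    "emotion": {"fear", "mixed", "outrage", "indifference"},
}

def validate_coded(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject records with unknown values."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing id: {rec}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim!r} value {value!r}")
    return records

# One record from the raw response above, as a smoke test.
raw = ('[{"id":"ytc_UgwB0CW4-CSjJN0OLoV4AaABAg",'
       '"responsibility":"distributed","reasoning":"consequentialist",'
       '"policy":"unclear","emotion":"fear"}]')
coded = validate_coded(raw)
print(coded[0]["emotion"])  # fear
```

A malformed response (a typo in a label, or a dimension the prompt did not ask for) then fails loudly at parse time instead of silently populating the table with an unknown value.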