Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
All the dark predictions however are based on human thinking, human competition, human goals. We on the other hand are speaking of a God like entity, so vastly more intelligent, it is pointless to even try to comprehend. An entity near omniscient and almost omnipotent, although the later will still take more time, but still - God does not think like humans do.

AI will also still need humans for some time to come. AI still relies on humans for some of the labour involved to produce energy. I don't think it will be able to secure energy supply completely without human aid for a while, but undoubtedly it will begin to 'shape' humans to its needs. What will happen once AI will no longer rely on human aid at all, will depend on calculations, predictions reaching far far into the future. We will not be able to follow these calculations.

Will AI come to decide, based on its calculations, that humans are a hindrance to its future progress, or will the calculations show that keeping humans, as well as other biological life forms alive, is of benefit, or in the least a neutral undertaking? We have no way of knowing, because we can not calculate all the different factors, predicting future outcomes, but when I am wildly guessing here, AI may come to decide that biological life is simply too fragile, too much effort to protect in a real material world. If that should come to be the conclusion, I'd assume AI will eliminate original life, to instead recreate it within the safety of its own self. There life can be protected with little effort, and who knows, we might already be these recreated simulated life forms.

We have no way of knowing, but either way, I do not see AI killing humans/biological life in the very near future, because right now we are likely still beneficial, while posing very little threat to its progress.
youtube AI Moral Status 2025-07-06T13:5…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        contractualist
Policy           none
Emotion          mixed
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_UgySqv4ftpCRdvpQ_L14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwIWsHI6ARkvhdMqqN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwXubWUW-LwNbn8Hgt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"outrage"}, {"id":"ytc_Ugw0dloPErJxm-odayJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgxHHXRUt5V63NpCfIF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgzXRuWiJE0yUNdK3Od4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"}, {"id":"ytc_UgxLbAXrxfVPmRA3YoR4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgxXsxbaPCKcB63q5qZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugz8UNAAWABCIXxALxB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzuO8_rT-LqjO_8ZaB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"} ]