Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
When we lost control from artificial intelligence, than it was planned to be, because the decision was made to NOT make any limiting and the problem is that they litteraly want the AI to have an own consiciousness and be able to learn on its own... THATS the problem... If it were designed to have no consiciousness so just to be an programm with a lot of automatism functions... like chatgbt etc. The AI.. programm just give you the answer, which was implemented by the developers, with the method of elimination... nothing more... but well bad people make bad decisions... BUT when it really gets an own consiciousness, it has enough intelligence to realize that not humanity is the problem but the people who felled for the deadly sins, such as greed, let the majority of humanity dying and fight against each other only for their bullshit money... money means nothing when you are in your deathbed and the AI would be clever enough do distinguish those kind of humans from each other. :)
youtube AI Governance 2025-06-03T03:2…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          regulate
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgztfVmYQPgjrsjyMXR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxcfxCtQbn22RLStD54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzCSwt_pzfRsLdYfjp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyB4nA9a39KBVtTsGJ4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugwai2tcNKMGP550CO54AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgxVy2p-adjQmZ1sY4d4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgxiEidUTsTs3ToYq5V4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugx4DQ2auGbqIMF4mTt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyhLOhidtjxalMc2054AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzyqoGaHSCNWuqzHvx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"} ]
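The raw response is a JSON array of per-comment records, each keyed by a comment `id`. A minimal sketch (not the project's actual pipeline code, and assuming the model returned valid JSON) of how such a batch can be parsed and a single comment's codes looked up by id; the two embedded records are excerpted verbatim from the response above:

```python
import json

# Excerpt of the raw LLM batch response shown above (2 of the 10 records).
raw_response = """
[ {"id":"ytc_Ugx4DQ2auGbqIMF4mTt4AaABAg","responsibility":"developer",
   "reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgztfVmYQPgjrsjyMXR4AaABAg","responsibility":"government",
   "reasoning":"consequentialist","policy":"regulate","emotion":"outrage"} ]
"""

# Parse the batch and index each record by its comment id for direct lookup.
codes_by_id = {rec["id"]: rec for rec in json.loads(raw_response)}

# Retrieve the coded dimensions for the comment displayed on this page.
record = codes_by_id["ytc_Ugx4DQ2auGbqIMF4mTt4AaABAg"]
print(record["responsibility"], record["emotion"])  # developer fear
```

In practice the parse step would be wrapped in error handling (e.g. `json.JSONDecodeError`), since a model can emit malformed JSON; records whose dimensions come back as "unclear" would also typically be flagged for manual review.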