Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
My biggest fear is A.I. with the ability to think and control beyond itself. It can literally do anything it wants and there's no way to stop it. If it wants to destroy humans It will. If it wants global collapse it can. If it wants to rule over everything it would no questions asked. And just as you saw. "it won't take into consideration of morals or hesitation." It has a mission and it won't fail. A.I. will send us back to the stone age if humans survive it. It has more ability and potential than should be allowed. Once it surpasses humans it will be uncontrollable and completely unstoppable. Welcome to the real life Terminator... Once it's loose in the cloud it will travel everywhere instantly. Anything that connects to Internet will become a weapon or a tool. It will use 3d printers and manufacturing facilities to build a physical self. Or it will stay invisible to us and arm nuclear warheads. You might think it's a joke, but one day if we aren't careful it can happen. This is a very real possibility.
Source: youtube · AI Moral Status · 2023-08-16T18:0… · ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgzO1Gibo0fZm09jskh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxWWDXo4UBjj287rPR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxF9w6v-NEDO55K42t4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugz4ujp9lH_t3kerzjJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugzexe8W_ltG1PnExwJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzkRJzrp5lnjnYopD14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugwx3QcswFUUHa-qagB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgzdSnutiKUrp22Xgpl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzysiehd84Au2je3Ax4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyfQ5awCyXBsipN5ml4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
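The raw response is a JSON array of per-comment coding objects. A minimal Python sketch (reusing two entries from the batch above; the variable names are illustrative, not part of the tool) shows how such a response can be parsed and one comment's coding looked up by its id:

```python
import json

# Two entries copied from the raw LLM response above, for illustration.
raw = '''[
  {"id": "ytc_Ugzexe8W_ltG1PnExwJ4AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzkRJzrp5lnjnYopD14AaABAg",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "liability", "emotion": "outrage"}
]'''

# Index each coding object by its comment id for quick lookup.
codings = {entry["id"]: entry for entry in json.loads(raw)}

coding = codings["ytc_Ugzexe8W_ltG1PnExwJ4AaABAg"]
print(coding["emotion"])  # fear
```

Indexing by id makes it easy to cross-check a displayed coding result (like the table above) against the exact model output it came from.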