Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Oh I love the part where the boss music kicks in (military got hold of it 🤣). It's scary, but in theory whatever we call AI isn't AI. Then again I don't know about GPT-4.

Also... Just because we as humans could eradicate all flies and whatnot, we don't. While the tale of an AI going rampant sounds convincing, I don't think the AI's primary objective will be to make humans go extinct. Since humans were smart enough to create AI or rather AGI, it'd be smart to keep humans around for if or when they have another bright idea that might benefit the AI as well. Improving the living conditions of mankind might actually improve the chances of AI to expand beyond earth. Sure, AI can calculate a lot or use existing knowledge, but acquiring more knowledge by creating it on its own is definitely slower than doing so AND having others do it as well.

I am not the type of guy who will be looking for places to hide. IMO there's no use in mere survival anyway, and if AI really wanted to kill us all, it'd have means to do so no matter where you hide. If the places were too hard to come by, it could send armies of drones or even nuclear missiles. There'd be no survival. But why should it care to kill us? It certainly would damage the world and it'd cost quite some effort - what'd be the benefit?

As a programmer, I currently am not aware of any AI that has an initiative and acts on its own. It merely reacts. And if I know anything about humanity, then it's that we as a species have survived a lot and always came back more advanced, and while there's a ton of crap going on in the world, there are a lot of positive things AI could learn from.
Source: youtube · AI Governance · 2023-07-07T09:2… · ♥ 2
Coding Result
Dimension        Value
---------        -----
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          resignation
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgySKW176UPvripbH5x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgyjY2dXlFoeIhNacMR4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"mixed"}, {"id":"ytc_UgxaDIhRkSCKxtWw5nB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxRbWRzLCpjC675Vs94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgyFrtnsGVlYL77Hf-B4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugx5mTfOeUxvWBJMBP14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgwgWQElQQR_t2y_Uo14AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgwlnSezJ9FGb_BLqYh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgzPX3-Gh9zoltjM77V4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"}, {"id":"ytc_UgycQu9Gv_dnxZkCA4N4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"} ]