Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Oh I love the part where the boss music kicks in (military got hold of it 🤣). It's scary, but in theory whatever we call AI isn't AI. Then again I don't know about GPT-4.
Also... Just because we as humans could eradicate all flies and whatnot, we don't. While the tale of an AI going rampant sounds convincing, I don't think the AIs primary objective will be to make humans go extinct.
Since humans were smart enough to create AI or rather AGI, it'd be smart to keep humans around for if or when they have another bright idea, that might benefit the AI as well.
Improving the living conditions of mankind might actually improve the chances of AI to expand beyond earth. Sure, AI can calculate a lot or use existing knowledge, but acquiring more knowledge by creating it on its own is definitely slower than doing so AND having others do it as well.
I am not the type of guy, who will be looking for places to hide. IMO there's no use in mere survival anyway and if AI really wanted to kill us all, it'd have means to do so no matter where you hide. If the places were too hard to come by, it could send armies of drones or even nuclear missiles. There'd be no survival.
But why should it care to kill us? It certainly would damage the world and it'd cost quite some effort - what'd be the benefit?
As a programmer, I currently am not aware of any AI that has an initiative and acts on its own. It merely reacts. And if I know anything about humanity, then it's that we as a species have survived a lot and always came back more advanced and while there's a ton of crap going on in the world, there are a lot of positive things AI could learn from.
youtube · AI Governance · 2023-07-07T09:2… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_UgySKW176UPvripbH5x4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyjY2dXlFoeIhNacMR4AaABAg", "responsibility": "developer", "reasoning": "contractualist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgxaDIhRkSCKxtWw5nB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxRbWRzLCpjC675Vs94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyFrtnsGVlYL77Hf-B4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugx5mTfOeUxvWBJMBP14AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwgWQElQQR_t2y_Uo14AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwlnSezJ9FGb_BLqYh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzPX3-Gh9zoltjM77V4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgycQu9Gv_dnxZkCA4N4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"}
]
```
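Before a batch response like the one above is accepted into the coding results, it is worth checking that it parses as JSON and that every record carries an `id` plus a valid value for each of the four dimensions. The sketch below does exactly that; the allowed category sets are only inferred from the sample responses shown here (the real codebook may define additional categories):

```python
import json

# Allowed values per dimension, inferred from the sample batch above.
# Hypothetical: the actual codebook may permit more categories.
SCHEMA = {
    "responsibility": {"none", "unclear", "developer", "government", "ai_itself"},
    "reasoning": {"unclear", "mixed", "consequentialist", "deontological",
                  "contractualist", "virtue"},
    "policy": {"unclear", "none", "regulate", "liability", "ban"},
    "emotion": {"indifference", "mixed", "approval", "fear", "outrage",
                "resignation"},
}


def validate_batch(raw: str) -> list[str]:
    """Parse a raw LLM batch response; return a list of problems (empty if clean)."""
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    if not isinstance(records, list):
        return ["top-level value is not a JSON array"]

    problems = []
    for i, rec in enumerate(records):
        if not rec.get("id"):
            problems.append(f"record {i}: missing id")
        for dim, allowed in SCHEMA.items():
            value = rec.get(dim)
            if value not in allowed:
                problems.append(f"record {i}: bad {dim}={value!r}")
    return problems


raw = '[{"id": "ytc_x", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"}]'
print(validate_batch(raw))  # → []
```

A record that fails any check (e.g. an out-of-vocabulary emotion, or a missing `id`) produces a human-readable problem string, so a malformed batch can be re-requested from the model instead of silently entering the results table.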