Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
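The same lookup can be reproduced offline. A minimal sketch, assuming the coded results are exported as JSONL with one record per comment; the file path and helper name are hypothetical, while the field names follow the raw response format shown at the end of this section.

```python
import json

# Hypothetical export path: one JSON record per line, keyed by comment ID.
EXPORT_PATH = "coded_comments.jsonl"

def lookup_comment(comment_id: str, path: str = EXPORT_PATH) -> dict | None:
    """Return the coding record for a comment ID, or None if it is not found."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None

# Example: look up an ID from the raw response below (full IDs are required;
# the previews in the sample list are truncated).
# lookup_comment("ytc_UgwIIqybpGeN9WvpvXN4AaABAg")
```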
Random samples — click to inspect
- "@Dan-codes I use GPT-4 which is the pro version of ChatGPT. Using it in the most…" (ytr_UgyU2HLbS…)
- "talking about ethics and morals in todays society that totally gives a crap abou…" (ytc_UgwyPzq_4…)
- "Most afraid that a rogue AI in the future would breach the nuclear launch code.…" (ytc_UgyX4tORN…)
- "This is all fearmongering which makes AI more popular. The same people calling A…" (ytc_Ugw7UoxDp…)
- "There is one simple solution to this problem. Everyone only hire or do buisness…" (ytc_UgwLbHeMn…)
- "We are fairly certain, that AI could kill us. We can't stop developing humanity…" (ytc_UgylJT-4p…)
- "Indeed, as someone who enjoy various open weight models, all this just comes off…" (ytr_UgzFqF5Dc…)
- "I don’t see how your argument makes any sense ??? LMAO your opinion doesn’t mean…" (ytr_UgyWs9jDq…)
Comment
> One problem I have with this. (Beyond skepticism of how fast AI actually progresses), is that suppose superintelligence was actually achieved.
> I don't see how any inorganic system, however intelligence it is, would ever not be dependent on humans to function. Even some small number of humans.
> So fundamentally I don't see how it would ever be in superintelligent AI's interest to ever "kill" its host so to speak.
> I feel like much of this discourse is borne out of human psychological paranoia and existential angst. That always existed and always will exist.

Source: youtube · Posted: 2024-08-31T23:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
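For downstream use, each coded record can be given an explicit type. A minimal sketch in Python; the class name is hypothetical, the field names match the raw response below, and the labels in the comments are only the values observed in this section, not necessarily the full codebook.

```python
from typing import TypedDict

class CodedComment(TypedDict):
    """One coded record, mirroring the raw response format below."""
    id: str              # platform-prefixed comment ID, e.g. "ytc_..." or "ytr_..."
    responsibility: str  # observed here: ai_itself, developer, company, unclear
    reasoning: str       # observed here: consequentialist, deontological, unclear
    policy: str          # observed here: regulate, liability, industry_self, unclear
    emotion: str         # observed here: fear, indifference, outrage, approval
```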
Raw LLM Response
```json
[
  {"id":"ytc_UgwIIqybpGeN9WvpvXN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw_Rz302MtVQJiPejV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzLmu4J2IhrpjtP0JV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwotWleZDbjO0RvuCV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugwt3hE8SMEr7T-NTqB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzohHrc2hrlabiQchN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwhRWPFFRjQ06whMr54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyLC5BUR1L_IByn9OR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxhzhGGeHFqo7ZZTC14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyRMcH-3XiMtKSjkoZ4AaABAg","responsibility":"company","reasoning":"unclear","policy":"industry_self","emotion":"approval"}
]
```
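The model returns one JSON array per batch of comments, so a parse step can also catch labels outside the expected sets. A minimal sketch, assuming the allowed labels are exactly those observed in this section (the real codebook may define more); the function name is hypothetical.

```python
import json

# Allowed labels per dimension, taken from the responses shown in this
# section. Assumption: the actual codebook may define additional values.
ALLOWED_LABELS = {
    "responsibility": {"ai_itself", "developer", "company", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "unclear"},
    "emotion": {"fear", "indifference", "outrage", "approval"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw batch response and reject records with unexpected labels."""
    records = json.loads(raw)
    for record in records:
        for dimension, allowed in ALLOWED_LABELS.items():
            value = record.get(dimension)
            if value not in allowed:
                raise ValueError(f"{record.get('id')}: unexpected {dimension}={value!r}")
    return records
```

Raising on an off-schema value, rather than coercing it to `unclear`, is usually the safer choice here: a single unexpected label often means the model drifted from the prompt's label set for the whole batch.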