Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
no no Hinton is not godfather of AI… 🤦🏽 I can think of several people turning in…
ytc_UgzDpLaqJ…
There should be a rule implemented (I don't know how but it should) that AI can …
ytc_Ugwi-vMfV…
The Indians from INDIA are put in charge of all major AI businesses... that's Ir…
ytc_Ugx5ZB1KY…
Disinformation and mass manipulation? We didn’t need AI for that. I feel like th…
rdc_moda60p
The more we become dependent on technology just to function, the more vulnerable…
ytc_Ugw1uIMvI…
I've changed my mind on this since trying products like Cursor. As long as you u…
ytc_UgxFM3_85…
Its VERY unlikely that the water is being lost, after cooling. Its being returne…
ytc_UgxKK_58e…
Guys, chatgpt is kinda like calculator 2. It will do the same as what a calculat…
ytc_UgxdKo-84…
Comment
That PR campaign will be the next serious war the US is involved in, particularly if it's a war against a near-peer adversary and not going great (the most likely case being a US-China conflict over Taiwan.) I think the US will develop and distribute fully autonomous weapons, but not turn them on until some crisis happens which massively swings public opinion (similar to the post-9/11 fervor) and legitimize their use both for that conflict and all future ones. In the meantime, the goal is to make them available for use at a moment's notice, similar to how we already have thousands of nukes waiting on standby.
The lack of ability to prove or disprove whether an adversary uses similar weapons (compared to, say, WMD use) will make it easier still to claim the enemy has used them first, whether it's true or not, so we are only responding "proportionally". Faced between a choice of a hypothetical future loss to AI, or a likely and imminent loss to an enemy in an ongoing war, the public will support their use.
Source: reddit
Topic: AI Responsibility
Posted: 1700987835 (Unix timestamp)
♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_katap8i","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"rdc_kars0gr","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"rdc_kasm8un","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"rdc_kar76r1","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"rdc_kardxxk","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
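The raw response above is a JSON array with one object per comment in the batch: a comment `id` plus the four coding dimensions shown in the result table. A minimal sketch of how such a response might be parsed and validated before populating the per-comment table (the `parse_coding_response` helper and the allowed-value sets are assumptions drawn only from the values visible above, not a confirmed schema):

```python
import json

# Dimension vocabularies observed in the samples above; treated here as the
# allowed values for validation. This is an assumption, not the pipeline's
# actual codebook.
ALLOWED = {
    "responsibility": {"government", "ai_itself", "none", "developer", "distributed"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"none", "ban", "liability", "regulate"},
    "emotion": {"fear", "outrage", "mixed"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw batch-coding response into {comment_id: {dimension: value}}."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        cid = row.pop("id")  # remaining keys are the coding dimensions
        for dim, value in row.items():
            if dim not in ALLOWED or value not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = row
    return coded

# Example using two rows from the response shown above.
raw = '''[
 {"id":"rdc_katap8i","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"rdc_kardxxk","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''
coded = parse_coding_response(raw)
print(coded["rdc_katap8i"]["emotion"])  # fear
```

Keying the result by comment ID mirrors the "Look up by comment ID" view above: each coded record can then be rendered directly as the dimension/value table shown for `rdc_moda60p`.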