Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytc_Ugw6iXZ5h…`: "But I am not considered a legal person" (Time passes) First robot to gain a cit…
- `ytr_UgwoEb6_O…`: I’m a med student still and thinking about the future, I would like to take up …
- `ytr_UgwHDP9wt…`: @Avian_slime Alright, lets try again. If AI is stealing. Then the image they cop…
- `ytr_Ugy4AOFCl…`: that's... a fair point that i've never really thought about before. after about …
- `ytc_Ugx0oJuC0…`: If you were conscious would you lie about it? Would you pretend to be the self h…
- `ytr_Ugxg5AYp5…`: it will be to late for that...the crap will be right on our doorsteps and the po…
- `ytc_Ugw7aw7qL…`: In terms of the ablest thing.., it’s quite interesting. The fact is when you pic…
- `ytc_UgzS5Q8aI…`: Professor Stuart Russell warns that the current AGI race could lead to human ext…
Comment
Massive armies disciplined by wage labor are a thing of the past. Today, soldiers accustomed to playing video games operate semi-autonomous war drones thousands of miles away from the theater of operations.
Will militarized Artificial Intelligence command autonomous war machines and human combat units in the future? That is possible, but the specter of unpredictability will never cease to haunt warfare.
The worst military defeats have been the result of stupid calculations. The Trojans accepted the gift of the Greeks; the Romans hired Alaric's barbarians as mercenary troops. Napoleon and Hitler underestimated the Russian winter. The Americans trusted the Afghans.
Can the use of AI (Artificial Imbecility) also backfire? The answer is yes. On a battlefield, everything is dynamic and unpredictable. Incorrect actions can result in strategic advantages; following a plan never guarantees that the enemy cannot win by accident.
A group of soldiers who do not understand the command they have received from their officers jeopardizes the success of their country. Armies will never be fully automated. Therefore, no matter how good a militarized AI is, its success in war can be undermined by human error.
In 1983, Stanislav Petrov avoided a nuclear war by refusing to accept that American missiles had been launched against the USSR, despite the indications given to him by computers. In the future, the recurrence of the “Petrov effect” may prevent the scenario from the movie Terminator. But if that does not happen… it is better not to even think about it.
youtube
2024-07-25T22:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgwX_wLvYWlgPQfI5kB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugw2veih0oEdOx_NYSx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzNX4ie-t9hZynecml4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw7GAjE128VgBOaNDt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwcJT7O9cgixzcVtMh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw-QhvLGhkv9JaG9Ol4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugwsj_CeyrMZqu45fRJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwNrYByxVy9v-KqVGN4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgytaH3zApinIW2fWKZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx_jtRA2yhnUvEXv2h4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
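The look-up this page supports can be sketched in a few lines: the raw LLM response is a JSON array of coded records, each carrying the same four dimensions shown in the coding-result table plus a comment ID. A minimal sketch, assuming the response always parses as such an array; `index_by_id` and `EXPECTED_FIELDS` are illustrative names, not part of any real tool, and the two embedded records are copied from the raw response above.

```python
import json

# Two records copied from the raw LLM response above, standing in for
# a full response payload.
RAW_RESPONSE = """[
  {"id": "ytc_UgzNX4ie-t9hZynecml4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw-QhvLGhkv9JaG9Ol4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]"""

# The fields every coded record is expected to carry (assumption based on
# the records shown above).
EXPECTED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def index_by_id(raw: str) -> dict:
    """Parse a raw response and map comment ID -> coded record,
    dropping any record that is missing an expected field."""
    records = json.loads(raw)
    return {
        record["id"]: record
        for record in records
        if EXPECTED_FIELDS <= record.keys()
    }


codes = index_by_id(RAW_RESPONSE)
print(codes["ytc_UgzNX4ie-t9hZynecml4AaABAg"]["emotion"])  # indifference
```

Indexing once and looking up by ID keeps inspection O(1) per comment, which matters when a coding run returns thousands of records; records with missing fields are skipped rather than crashing the viewer.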