Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Well once AI takes hold. Its Game over man! Seems we are not far from the …
ytc_UgyIUhA9g…
Is Elon Musk ethical now?
He just left OpenAI before it…
ytc_UgwWOHTtk…
AI isn't the problem as everyone says, but money on the other hand is, what Yosh…
ytc_Ugwgk43_a…
Heard so many stories about people using AI cover letters and CVs just to have t…
rdc_n6sfte1
I also encourage my students to use ChatGPT—not all the time, of course, but whe…
ytr_UgxulZAN6…
How about they make a plumber AI instead of making one to take the cushy jobs.…
ytc_UgzdgGKZD…
- ai looks shit
- ai is far from perfect, and by definition cannot be perfect…
ytc_UgyeyxpDm…
So, as someone who is primarily a programmer (legacy software), and also dabbles…
ytc_UgwQgVWgw…
Comment
A category of (computerized) Lethal Autonomous Weapons Systems you missed that have been deployed since the '80s -- close-in defense systems. Whether the original Navy CIWS or the Trophy system on armored vehicles, these are systems that *can't* perform their functions with a human in the loop due to the reaction times required. We seem comfortable with those, perhaps because their intent is "defensive" and the targets are theoretically unmanned / munitions, but they still operate in a space where they can be used, intentionally or inadvertently, against human targets -- an example being an Israeli Merkava that mistakenly identified the exhaust of a wing vehicle as a threat and had its Trophy system engage that vehicle, luckily with no casualties.
youtube
2024-06-30T13:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgyXZtWEnlG4k9jbwYp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwP1OwyP7mVhYzoa2h4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxfuYLatYkkBJEn0fR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgxMxTLEmHzRR6FvvXh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxA1b4CeaY4WtvRv3l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxzpyp86eo9Xrx2NU54AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzzzMpDOAoLQYExmbd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwSEcJNrOzvHJJInch4AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"approval"},
{"id":"ytc_UgyjiCvaelqCw9lhhaF4AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzGUlumyIbMtQOt7O94AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
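The lookup-by-comment-ID view above can be reproduced directly from a raw response like this one. A minimal sketch, assuming the response is the JSON array shown (two rows copied from it here for brevity; the four dimensions match the Coding Result table):

```python
import json

# Raw LLM response, truncated to two of the rows shown above.
raw_response = """
[
  {"id":"ytc_UgyXZtWEnlG4k9jbwYp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxfuYLatYkkBJEn0fR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}
]
"""

codes = json.loads(raw_response)

# Index the coded rows by comment ID so a single comment's codes
# can be looked up in O(1), as the "Look up by comment ID" box does.
by_id = {row["id"]: row for row in codes}

entry = by_id["ytc_UgxfuYLatYkkBJEn0fR4AaABAg"]
print(entry["policy"])   # regulate
print(entry["emotion"])  # resignation
```

The same index works for any sample in the list above (`ytc_`, `ytr_`, `rdc_` prefixes alike), since lookup keys are the opaque comment IDs, not the platform.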