Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "I don’t think that programmers would be replaced anytime soon, considering that …" (ytc_UgxvqxiBk…)
- "The cop seriously did not consider for even one second that the AI identificatio…" (ytc_Ugx5RNKtl…)
- "Are we sure this is actually AI and not just an online call centre sort of thing…" (ytc_UgzAv3jEK…)
- "Facial Recognition does not normally work on people with darker skin and darker …" (ytc_UgxhaGqTb…)
- "Watch the movie \"Colossus: The Forbin Project\" to see how an AI can control an e…" (ytc_UgwA8Df-z…)
- "@TBone4Eva theres nothing else than LLM to generate code... so for his code clai…" (ytr_Ugz-m24Ec…)
- "@kingy-ai Try copy pasting your replies into GPT-4 and ask if it is AI generated…" (ytr_Ugz0JOwW9…)
- "Me: ”I really love the picture of the pumpkin social media post artwork, i think…" (ytc_UgzFcsxTx…)
Comment
Ethically, why are autonomous weapons systems worse than inaccurate weapons systems such as conventional artillery? In both cases, lethal force is put into motion with the knowledge of a risk that an inappropriate, unintended target may be hit. Why is it worse when this happens because of imperfect programming rather than inherent inaccuracies of the system? And, as a practical matter, it's probably less likely to happen with autonomous weapons systems than with systems of limited accuracy such as conventional artillery (or carpet bombing).
youtube
2024-06-30T21:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugwuq0kks4XnzwOPdSl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzNVWMd2FhLN53BH8p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyvGs30VVT3mIsVNHV4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy-9WxWVAhTeJMu7d94AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxjUKdUM4nbP3ZaG3V4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzXwJ6iWMoStL7tpNZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugxh4N8qzPEK7t1zoJJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyYjrz6Y8PxnONcfTd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugx38UQJHfVitmIoif14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugyc1IPkwr5v2vndhmt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
```
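A batch like the one above can be sanity-checked before the codes are written to the database. The sketch below parses a raw LLM response and flags records whose dimension values fall outside the codebook. The allowed value sets are inferred only from the examples shown on this page (the actual codebook may define more categories), and `validate_batch` is a hypothetical helper, not part of the tool.

```python
import json

# Allowed values per coding dimension -- inferred from the examples on this
# page; the real codebook may include additional categories.
SCHEMA = {
    "responsibility": {"distributed", "none", "user", "unclear", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "ban", "regulate"},
    "emotion": {"indifference", "approval", "mixed", "fear"},
}

def validate_batch(raw: str) -> list:
    """Parse a raw LLM response (JSON array) and return validation errors."""
    errors = []
    for rec in json.loads(raw):
        cid = rec.get("id", "<missing id>")
        # Comment IDs seen here start with ytc_ (comments) or ytr_ (replies).
        if not cid.startswith(("ytc_", "ytr_")):
            errors.append(f"{cid}: unexpected id prefix")
        for dim, allowed in SCHEMA.items():
            value = rec.get(dim)
            if value not in allowed:
                errors.append(f"{cid}: {dim}={value!r} not in codebook")
    return errors

raw = ('[{"id":"ytc_Ugwuq0kks4XnzwOPdSl4AaABAg","responsibility":"distributed",'
       '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]')
print(validate_batch(raw))  # [] -- a clean record produces no errors
```

A record with, say, `"emotion": "anger"` would be reported rather than silently stored, which keeps the dimension tables consistent with the prompt's label set.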