Raw LLM Responses
Inspect the exact model output for any coded comment, looked up by comment ID.
Random samples
- "Make AI with cheap parts where it breaks down easier and someone has to fix…" (ytc_UgxCQLwt1…)
- "Why would EU want to be mentioned with China and US AI hasn’t even passed the Tu…" (ytc_UgzO6rL1u…)
- "What would you consider to be an ethical AI image generation system? You acknowl…" (ytc_UgzR0rf4X…)
- "Ai, robotics and related technologies need to be taxed in a manner that it repla…" (ytc_UgzExrvdX…)
- "Woke is going AI possibly AI is going woke sounds like this guy has reached a má…" (ytc_UgxMpg3To…)
- "Hell to the naw...keep govt the F away from AI...name 3 programs they have done …" (ytc_UgzxEU3yl…)
- "I dont understand why these guy dont understand that this ai is made for replace…" (ytr_UgxFTsdyX…)
- "Haha, that would be quite the twist! If you're enjoying these interactions, don'…" (ytr_Ugy1Vklhx…)
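The lookup-by-ID behavior above can be sketched by indexing the parsed response records by comment ID. This is a minimal illustration, not the page's actual implementation; the two records are copied from the raw response shown below.

```python
import json

# Raw LLM response: a JSON array of coded records (field names match
# the "Raw LLM Response" output on this page).
raw = """[
  {"id": "ytc_UgyGWcrHgShLkDX5qFF4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwMQSsr24AUY_vxLUt4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

# Index records by comment ID for O(1) lookup of any coded comment.
index = {rec["id"]: rec for rec in json.loads(raw)}

print(index["ytc_UgyGWcrHgShLkDX5qFF4AaABAg"]["policy"])  # ban
```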
Comment
Arguably less bad than a landmine… but only because landmines being indiscriminate is so horrifically worse because they must explode the kids that trigger them. A well made “LAW” in theory has a chance to recognize a child, civilian or even a soldier surrendering/injured/already captured or otherwise no longer an active combat.
That said, still bad enough that I would prefer them banned. Bare minimum regulated on a similar level of seriousness as Landmines.
I might be ok with AI systems highlighting potential targets as long as a human whose job is specifically to look for reasons not to fire. Such a weapon system would be ready to easily be reprogramed as an actual LAW in the case of an extreme Total War Scenario to satisfy the hawks worst case scenario fears.
youtube · 2026-03-28T02:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugxs6Zorg8yD_dxJFNV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyGWcrHgShLkDX5qFF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwH-VwuUdxoe-d1BJN4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_Ugyd8QnCenaREuldmRd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw-9vNpgxhegmYFu314AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyJLFrjGKu6baV1E-J4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxgWGmUGOlKxFr4zsB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwMQSsr24AUY_vxLUt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz48H9rIbPUpUN6K9h4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugzeu8--kCX5Ga88HCB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}
]
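A response like the one above can be checked against the coding vocabulary before use. The value sets below are inferred from the sample output on this page and may be incomplete; the real codebook could define additional categories.

```python
# Allowed values per dimension, inferred from the sample response above
# (assumption: the actual codebook may list more categories).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "developer", "distributed"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate"},
    "emotion": {"outrage", "fear", "mixed", "indifference", "resignation"},
}

def validate(records):
    """Return (id, field, bad_value) tuples for out-of-vocabulary codes."""
    errors = []
    for rec in records:
        for field, allowed in ALLOWED.items():
            if rec.get(field) not in allowed:
                errors.append((rec.get("id"), field, rec.get(field)))
    return errors

sample = [{"id": "ytc_Ugz48H9rIbPUpUN6K9h4AaABAg",
           "responsibility": "developer", "reasoning": "deontological",
           "policy": "ban", "emotion": "fear"}]
print(validate(sample))  # [] — all codes in vocabulary
```

An empty list means every coded value is in vocabulary; anything else flags a record for manual review or a re-prompt.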