Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up by comment ID or by browsing the random samples below.
Random samples:

- "Yeah we had a seminar a year ago relating to AI and the future plans of how AI i…" (`ytc_UgxrHuJRi…`)
- "Respectfully I think you are missing the point...ok example..already it puts out…" (`ytr_UgwgK26Fk…`)
- "@CopperRosesofRevelation While I do agree that a good chunk of modern art is gar…" (`ytr_UgzNcPCl0…`)
- "@Godskid-V1well I know I'm dumb enough that if I create an algorithm to make de…" (`ytr_UgzMS8I1K…`)
- "A man with a sword can control 10 men. A man with a gun can control 100 men. A m…" (`ytc_Ugyb4pDAr…`)
- "Questions like the ones he has asked in this interview are very much needed. AI …" (`ytc_UgxgNBfhq…`)
- "A nuclear war, triggered by AI would be a suicide move; the EMPs would shut it d…" (`ytc_UgxZ9ngCu…`)
- "this is something else what would happen if these were ai agents programed to mi…" (`ytc_Ugxa4RI-o…`)
Comment
To be honest I don’t really see why this is surprising. With machine learning (and life more broadly), everything comes with a cost; you want your model to give you safer answers? This will come at a cost in some way to accuracy. A very similar tradeoff exists when trying to design attack resistance for machine learning models; you can make your model resistant to a broad spectrum of attacks, but if you do, the accuracy suffers because of it. The real question is whether the tradeoff is worth it.
I think the general discussion about this has become ‘why would they do this to us’ when in reality the better question is ‘was it worth it’, and I think there’s a good discussion to be had there with good points for both sides.
Source: reddit · Category: AI Harm Incident · Timestamp: 1689778579.0 · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | utilitarian |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_jskk6er","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"rdc_jsli3y1","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"rdc_jslohgf","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"rdc_jsmf36x","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"rdc_jsmzofs","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
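A raw batch response like the one above has to be parsed and checked before its records can populate the per-comment coding tables. The sketch below shows one minimal way to do that in Python, assuming the allowed values per dimension are the ones visible on this page (the real codebook may define more categories); the function name `parse_codings` and the validation strategy of dropping malformed records are illustrative assumptions, not the tool's actual implementation.

```python
import json
from collections import Counter

# Allowed values per coding dimension, inferred from the examples on this
# page (assumption -- the actual codebook may include additional labels).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company"},
    "reasoning": {"utilitarian", "consequentialist", "deontological", "unclear"},
    "policy": {"none"},
    "emotion": {"outrage", "resignation", "fear", "indifference"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM batch response, keeping only records that carry an
    id and a recognized value for every coding dimension."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if "id" in rec and all(
            rec.get(dim) in values for dim, values in ALLOWED.items()
        ):
            valid.append(rec)
    return valid

# Example with two well-formed records and one with an unknown emotion,
# which validation silently drops.
raw = """[
  {"id":"rdc_a","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"rdc_b","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"rdc_c","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"joy"}
]"""
codings = parse_codings(raw)
emotions = Counter(rec["emotion"] for rec in codings)
```

Dropping invalid records rather than raising keeps one off-schema model answer from failing the whole batch; a stricter pipeline might instead queue such records for re-coding.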