Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Carmakers can whine all they want, but their vehicles will have to handle these …
rdc_d1ktmql
21:35 we can use AI in genetic engeneering to make us smart. By increasing brain…
ytc_UgycvrSkS…
AI development clearly reveals how paradoxical human beings are. We’ve created s…
ytc_UgwxnvRnV…
I for one welcome our AI Overlords, hopefully it hits ASI soon and will tell us …
ytc_UgwrHpSyd…
You will always have the dilemma of 3:05 , an automated car may take the minimum…
ytc_UgjQvTuYs…
@TreeStump-and-CheeseKetchupIT Eh, I'd still say that photography does have its…
ytr_Ugy1sPKp3…
The female Ai is spot on. She was threatening him, shaming him, demeaning him.
…
ytc_Ugx6DX54B…
Yo wtf why the most important A.I. racist like stop getting your data from twitt…
ytc_UgzdEJKYl…
Comment
I feel this video is misleading because yes, you can choose to show restraint, and it's rarely a situation of "oh, we have to"; that's usually just an excuse to keep steaming forward regardless of circumstances or cost, because that person is already on that side as a proponent. (Remember, during the Korean War the president likened nuclear bombs to bullets and wanted to start a casual nuclear war like it was nothing. Fortunately, he was one of the few people dumb enough to try to frame nukes as conventional weapons, and it didn't come to pass. But you can clearly see that the tech has not been fully implemented. Only one nation has, thankfully, made the mistake of nuking someone so far.)

His argument about a human commander merely signing off on attacks versus handing the AI the kill command is also misleading. For one thing, if the reasons and guidelines for conflict given to the software are incorrect to begin with, the computer will make the wrong choices. If some a-hole frames civilians as combatants, of course the machine will kill them blindly, even more coldly than humans would (though never underestimate human cruelty). He is correct in saying commanders need to understand their systems, but that doesn't invalidate human review. Nor, however, does human review or AI review guarantee success, proper execution, or proper oversight. Those are their own issues that need to be addressed on their own merits, not confused or lumped under one title for convenience.

The situations in Gaza and Ukraine are also a bit different. Ukraine most resembles an actual war, with two sides having more closely comparable capabilities. In Gaza, civilian targets far outweigh military targets and have been used basically to enact what can be described as executions. Likely the AI has been given parameters that cause this, or the AI is just running support and it is still a wholly human choice to do what has been done.

I suppose it just depends on who you are going to believe.
youtube
2025-02-04T18:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugwga-mtkaY6aBT4PPF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxyyIGcCLyJ5vz85Ux4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwrV3tgciZjRyIIjrN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy8DXKguZYXO9S_o8Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugxn6j45qnKbsQ4Q-p54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxywx2n4h_JqMoJNAV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzt4WBjVsG2VsDeDM14AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwCOF-v_tYrlHY0PjJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwvHiIQIPEmHGPXBjh4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugxl9ns0yfsVFCe1MTV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
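The raw response above is a JSON array of per-comment codes, one object per comment with the four coded dimensions. A minimal sketch of the "look up by comment ID" step, assuming the response parses as valid JSON (the `index_by_id` helper name is illustrative, not part of the tool):

```python
import json

# Excerpt of a raw LLM response: a JSON array of per-comment codes.
raw = '''
[
  {"id": "ytc_Ugwga-mtkaY6aBT4PPF4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwvHiIQIPEmHGPXBjh4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw_json: str) -> dict:
    """Parse a raw response and build an id -> codes lookup,
    skipping records that are missing any coded dimension."""
    records = json.loads(raw_json)
    return {
        r["id"]: {d: r[d] for d in DIMENSIONS}
        for r in records
        if all(d in r for d in DIMENSIONS)
    }

codes = index_by_id(raw)
print(codes["ytc_UgwvHiIQIPEmHGPXBjh4AaABAg"]["policy"])  # regulate
```

Filtering out incomplete records before indexing keeps a single malformed object in the model output from breaking every downstream lookup.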