Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
That's an interesting point! Sophia does touch on the idea of efficiency versus …
ytr_Ugz0_wBWy…
Who support this women they support hamas and hamad is a terrorist many arabs de…
ytc_UgwUrA-vD…
homie so upset about tech boots he wrote an essay to say 'it makes a shit versio…
ytr_UgzdR9j-9…
When we do develop A.I. and it will happen, we as a people need to insure the ri…
ytc_UggCabrbb…
So, if nobody has jobs and therefore no money, who’s buying all the robot output…
ytc_UgwENudGN…
I'm not an artist in any way, tbh. I think the "democraticing art" part of the A…
ytc_UgwWp2QIw…
How wonderful is AI, We all gone loose our job. Have Plenty of time to create AI…
ytc_UgyyjCg-2…
Question: AI is based on the knowledge created by people. It seems that all thi…
ytc_Ugym0YhOD…
Comment
Except way, way dumber. The AI in war games was an example of traditional reinforcement learning taken to the extreme- it could discover inconsistencies in its own understanding, design tests, acquire new knowledge, and extrapolate that knowledge to other scenarios, while operating with an overarching goal to focus its actions.
A transformer model (what LLMs are based on) is fundamentally incapable of this kind of learning, no matter how big you make it. The fact that the military wants to use LLMs to decide who to kill is fucking terrifying, not least because it shows that the people running the show have no fucking idea how the technology they're using works and what its limitations are.
reddit
AI Responsibility
1771981333.0 (Unix timestamp, ≈ 2026-02-25 UTC)
♥ 1585
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_jkrf68b","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"rdc_jksdu2y","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"rdc_jksupl6","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"rdc_o788tt3","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"rdc_o78s7wc","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
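The raw response above is a JSON array of per-comment records, each keyed by a comment ID with four coded dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response might be parsed and looked up by comment ID follows; the function names (`index_codes`, `lookup`) and the `"unclear"` fallback for missing dimensions are illustrative assumptions, not taken from any real pipeline code.

```python
import json

# Hypothetical sketch: parse a raw LLM coding response (like the array
# above) and look up one comment's codes by ID. The record schema matches
# the raw response shown; everything else is an assumption.

RAW_RESPONSE = """
[
  {"id": "rdc_o78s7wc", "responsibility": "unclear", "reasoning": "unclear",
   "policy": "unclear", "emotion": "indifference"}
]
"""

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw: str) -> dict:
    """Parse the LLM's JSON array and index each record by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

def lookup(codes: dict, comment_id: str) -> dict:
    """Return the four coded dimensions, defaulting to 'unclear' if absent."""
    rec = codes.get(comment_id, {})
    return {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}

codes = index_codes(RAW_RESPONSE)
print(lookup(codes, "rdc_o78s7wc"))
```

Defaulting absent dimensions to `"unclear"` mirrors how the coding-result table above renders dimensions the model left undecided.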