Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Yes, all 50 states require vaccinations to be in school unless you get a waiver.…" (rdc_eicmeds)
- "I guess someone will have to ai generate a video that gives the whole fear monge…" (ytc_UgwE6wXNU…)
- "Waymo has improved significantly lately. I have been driving behind waymo cars m…" (ytc_UgyuVy7UW…)
- "Cool topic ! I can relate to, the thing is AI speed things up. I am running a sm…" (ytc_UgzzaYWHe…)
- "Digital art is a little simpler and cheaper. AI “art” is just having a bot spit …" (ytc_Ugwp8LBCB…)
- "If another artist used your work for inspiration for their own work without your…" (ytc_UgySCh6A6…)
- "I only mess with AI and bit when I have art block to help get some sort of inspi…" (ytc_UgxwMrYbJ…)
- "Why would you question NDT about AI? I don't get it. You should have invited eit…" (ytc_Ugzk90dzq…)
Comment
One of the tasks that AI is pretty decent at is taking notes from meetings held over Zoom/Meet/Teams. If you feed it a transcript of a meeting, it’ll *fairly* reliably produce a *fairly* accurate summary of what was discussed. Maybe 80-95% accurate 80-95% of the time.
However, the dangerous thing is that 5-20% of the time, it just makes shit up, even in a scenario where you’ve fed it a transcript, and it absolutely takes a human who was in the meeting and remembers what was said to review the summary and say, “hold up.”
Now, obviously meeting notes aren’t typically a high stakes applications, and a little bit of invented bullshit isn’t gonna typically ruin the world. But in my experience, somewhere between 5-20% of what *any* LLM produces is bullshit, and they’re being used for way more consequential things than taking meeting notes.
If I were Sam Altman or similar, this is all I’d be focusing on. Figuring out how to build a LLM that didn’t bullshit, or at least knew when it was bullshitting and could self-ID the shit it made up.
reddit
AI Responsibility
Posted: 1755609928.0 (Unix epoch)
♥ 73
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_n9hzee8","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
{"id":"rdc_n9ig08d","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"rdc_n9ixia5","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"rdc_n9kka6l","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"rdc_n9jts9g","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
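The lookup-by-comment-ID step described above can be sketched as a small parser over this batch response: load the JSON array and index each record by its `id` field. This is an illustrative sketch, not the tool's actual implementation; the function name `index_by_comment_id` and the inline sample data are assumptions based on the response format shown.

```python
import json

# A trimmed sample of the batch-coding response shown above
# (illustrative; copied from two of the records in the raw output).
raw_response = """
[
  {"id":"rdc_n9hzee8","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
  {"id":"rdc_n9ixia5","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
"""

def index_by_comment_id(response_text: str) -> dict:
    """Parse a batch response and key each coded record by its comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codes = index_by_comment_id(raw_response)
print(codes["rdc_n9ixia5"]["emotion"])  # fear
```

A real pipeline would also want to validate that every record carries the four expected dimensions (responsibility, reasoning, policy, emotion) before trusting a lookup, since the response comes from an LLM rather than a fixed API.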