Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
LLMs literally can't eliminate the bullshit.
There are two fundamental reasons here:
1. They don't *know* anything. They're probability machines that just give the most likely next token. That's it. It isn't reasoning or thinking, and it doesn't have intelligence.
2. They are programmed to never say, "I don't know." So it'll always just tell you *something* regardless of truthfulness because, again, see point 1.
reddit · AI Responsibility · posted 2025-08-19 (Unix 1755619414) · ♥ 16
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_n9hzee8", "responsibility": "company",   "reasoning": "deontological",    "policy": "regulate",  "emotion": "indifference"},
  {"id": "rdc_n9ig08d", "responsibility": "company",   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "rdc_n9ixia5", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",      "emotion": "fear"},
  {"id": "rdc_n9kka6l", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",      "emotion": "fear"},
  {"id": "rdc_n9jts9g", "responsibility": "ai_itself", "reasoning": "deontological",    "policy": "none",      "emotion": "outrage"}
]
```
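Since the raw response is a JSON array of per-comment codes keyed by `id`, inspecting the code for any one comment is a small parsing step. A minimal Python sketch, using two rows copied from the response above (the lookup dict is illustrative, not part of the tool itself):

```python
import json

# Raw LLM response: a JSON array of per-comment codes.
# These two rows are taken verbatim from the response shown above.
raw = """[
  {"id": "rdc_n9hzee8", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "indifference"},
  {"id": "rdc_n9jts9g", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]"""

# Index the rows by comment ID so any coded comment can be looked up directly.
codes = {row["id"]: row for row in json.loads(raw)}

# Look up the coding for one comment by its ID.
print(codes["rdc_n9jts9g"]["emotion"])  # → outrage
```

The same indexing works for any batch: parse the full array once, then key on `id` for per-comment inspection.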