Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Comment
OP - “It's just scrolling through the internet, compiling information, It's not capable of coming up with independent thought or conclusions”
This is incorrect. This is something you’ve decided about AI (similarly to the people who think it’s sentient), but it doesn’t reflect reality.
LLMs are capable of scraping reference material to source information directly, but that is NOT how they operate fundamentally. It is not a super-algorithm that crunches probability analysis when prompted.
It’s a database that’s been disassembled, with each piece of information reorganized based on the state it was found in. Prompting is an autonomous navigation of that reorganized database, with that navigation being probabilistically weighted toward prompt relevance.
Prompting is more similar to typing in the search bar of a documents folder than it is to a complex calculation.
The truth is that basically everyone has a very generalized concept of intelligence. We think of it as one big thing, which AI either is or isn’t. But really intelligence is a collection of smaller mechanisms, each with different functions and origins.
As it turns out, some of those functions are entirely unconnected to experiential reality, and instead are actually *embedded in language itself*. Reasoning is one of these functions. LLMs do have the ability to calculate reason (with varied but improving rates of accuracy), and the evidence of this is in literally every response. It’s not intelligent in the sentient sense, but it is a functional representation of a facet of intelligence. Similar to how deterministic calculators can do math but aren’t intelligent, LLMs can calculate reason.
And through that, they are *absolutely* capable of original and objective analysis if it is weighted properly.
reddit
AI Moral Status
2025-06-26 (Unix timestamp 1750948352)
♥ 37
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_mzyf2fn","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
 {"id":"rdc_mzw0tro","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
 {"id":"rdc_mzvumgp","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"rdc_mzw6p90","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
 {"id":"rdc_mzwu2u1","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}]
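A raw batched response like the one above can be parsed and indexed by comment ID before rendering the per-dimension table. The sketch below is a minimal illustration; the allowed value sets are inferred from the labels visible on this page, not from a published codebook, so treat them as assumptions.

```python
import json

# Allowed values per dimension, inferred from this page's table and JSON
# (hypothetical — the real codebook may define more or different labels).
ALLOWED = {
    "responsibility": {"none", "user", "developer", "mixed"},
    "reasoning": {"mixed", "consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate", "ban", "mixed"},
    "emotion": {"indifference", "mixed", "fear", "anger", "hope"},
}

def index_by_id(raw_json: str) -> dict:
    """Parse a raw batched LLM response and index its records by comment ID,
    rejecting any record whose dimension value falls outside the allowed set."""
    out = {}
    for rec in json.loads(raw_json):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={rec.get(dim)!r}")
        out[rec["id"]] = rec
    return out

# A one-record excerpt of the raw response shown above:
raw = ('[{"id":"rdc_mzyf2fn","responsibility":"none",'
       '"reasoning":"mixed","policy":"none","emotion":"indifference"}]')
coded = index_by_id(raw)
print(coded["rdc_mzyf2fn"]["reasoning"])  # prints "mixed"
```

Indexing by ID is what makes the "look up by comment ID" view cheap: each table on the page is then a single dictionary read rather than a scan of the raw response.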