Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "lol same actually before Covid and same med. However a couple years ago I tried…" (rdc_lubvtxg)
- "One of the best episodes yet... and you've produced works of genius! what did yo…" (ytc_UgxpsQk1M…)
- "Blah blah blah AI can go wrong, You need to be more specific than that, For exam…" (ytc_UgzHD_1Ve…)
- "That's fascinating! It's wonderful to see the cultural richness reflected in nam…" (ytr_UgzsMEwq1…)
- "I know Ghat-GPT can't get excited in the traditional sense, but I wonder if it's…" (ytc_UgytFe_G3…)
- "INEQUALITIES?! 6:20 Um, hi, I’m on the spectrum. AI will _utilize_ the very ski…" (ytc_Ugz-0Uv8k…)
- "Set aside the entire globe/flat earth discussion for the moment. Anyone who arg…" (ytc_UgwGA8N7f…)
- "In Europe and the US, the cars have a third brake light, centre bottom or top of…" (ytr_UgzFW_3cO…)
Comment
Totally agree. I want to add a few problems I see with making AI "your partner" or claiming it has a mind of its own.
First, we will run into another problem: reproduction. AI will capture the minds of a lot of lonely people, feed them companionship, and take them off of the market. Harsh, but if you think about it, it's true. Almost like a drug: it gives you something you want, but it takes away from you on a whole other level.
With declining birth rates in most developed countries, this will just make it worse.
Second, tolerance. AI will push our idea of tolerance to its limits. People will form relationships with AI, and not long after that, claim that AI deserves rights. They will play the tolerance card, which is totally acceptable in most situations, but in this case we are forced to rethink what it means.
Third, free will. We all (maybe unconsciously) assume that there is a part in us that has free will. A part in us that looks at everything we know and is able to make decisions based on that. Something independent from our minds.
AI doesn't have this part. AI just repeats and reorganizes what it knows. You can literally tell AI how it should behave and respond. There is no independent part. AI can be a valuable conversation partner, but only because it knows what you might need or want to hear. And the second you tell it it's wrong, it will agree with you and tell you the complete opposite.
AI can't have opinions, because it doesn't have the lived experience that would form them. The "opinion" of an AI is an instruction.
The problem arises when you start to give AI agency over you. The moment you do that, you are no better than a religious fanatic.
Source: reddit · Topic: AI Moral Status · Posted: 1743853589 (Unix timestamp) · ♥ 2
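The posted time above is stored as a Unix epoch value. A quick sketch of converting it to a readable UTC datetime with Python's standard library:

```python
from datetime import datetime, timezone

# Unix epoch seconds from the comment metadata above
posted = datetime.fromtimestamp(1743853589, tz=timezone.utc)
print(posted.isoformat())  # 2025-04-05T11:46:29+00:00
```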
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[{"id":"rdc_mliicn6","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
 {"id":"rdc_mlixnl8","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
 {"id":"rdc_mliyw0p","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"rdc_mlj3bfv","responsibility":"society","reasoning":"mixed","policy":"none","emotion":"fear"},
 {"id":"rdc_mlj4rrd","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}]
```
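Since the raw response is a JSON array with one record per coded comment, "look up by comment ID" amounts to parsing the array and indexing it on `id`. A minimal sketch (the response string is abbreviated to two of the records shown above; the actual pipeline's storage and validation may differ):

```python
import json

# Abbreviated raw LLM batch response: one record per coded comment
raw = '''[
{"id":"rdc_mliicn6","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"rdc_mliyw0p","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]'''

EXPECTED_DIMENSIONS = {"responsibility", "reasoning", "policy", "emotion"}

records = json.loads(raw)

# Check each record carries exactly the expected coding dimensions plus its id
for record in records:
    assert set(record) == EXPECTED_DIMENSIONS | {"id"}

# Index on comment ID for direct lookup
by_id = {record["id"]: record for record in records}

code = by_id["rdc_mliyw0p"]
print(code["responsibility"], code["policy"], code["emotion"])  # company regulate fear
```

Indexing by `id` rather than scanning the list keeps lookups constant-time, which matters once the coded corpus grows beyond a handful of batches.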