Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
Ai is not a good tool for referencing. It would be much better if you just used …
ytr_UgxpcceX-…
For some reason I felt more threatened watching this than by the actual AI 😅…
ytc_UgwzYvHxR…
I think that AI considers human beings as an absolute value of one. One can equ…
ytc_UgxTwC6gx…
LMAOOO 😂😂😂😂😂. I “debated” AI and didn’t let them finish their point. Mentally re…
ytc_Ugzj-9JTy…
If It was such an important question according to the Engineer in determining if…
ytr_UgyE9ScXh…
You're pushing your own beliefs as reality. You are worse than AI by propagatio…
ytc_Ugxc_zWDA…
Oh hey, you don't know how it works. AI steals the original work to make slop, i…
ytr_UgwWuqdGF…
Why is the person freaking out about Dans' answer on reversing over population. …
ytc_UgzXe_2v-…
Comment
10:56 GPT’s answer here surprised me, because I don’t think LLM’s have the capacity to know anything. I think “knowing” requires a belief, and ChatGPT can’t believe anything, because it isn’t conscious, and therefore it can’t know anything. And therefore it wasn’t lying. When it says something like “I’m excited,” that’s just because someone told it to say that.
Also, side note, the emotional tones in this thing’s simulated voice are hitting the uncanny valley for me. It sounds a lot like a politician, or a customer service representative, or just someone who is hiding their full emotions. I can hear really subtle intonations (like at the end when it said it was an interesting conversation, that definitely sounded like someone smiling and pausing as they thought about what the right response would be considering all of the context), but it sounds like it’s trying to hide those feelings for some reason, and that makes me not trust it. I think I would be more comfortable with it if it talked like Data from Star Trek.
youtube
AI Moral Status
2024-10-21T04:4…
♥ 8
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[{"id":"ytc_UgxadBcbeNJDPgdHr514AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugzp8FjcXGaq8EpFDCV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwdBpmFKVLiqsbK9et4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwmSJ4sVFYq7zBYVN54AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugy2xDfv8vT9cKshVrd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxMWOdL9Ew1QtyNC654AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy-m0gxHA3uiCitSVR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzaIEPZgKUD2NrqLcR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzyqIwCFDTetDrCMER4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzKRNmYxUk3vuUq2Wh4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"mixed"}]
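The raw response above is a JSON array of per-comment codes, one object per comment ID with the four dimensions from the table (responsibility, reasoning, policy, emotion). A minimal sketch of how such output might be parsed and tallied — the two records are copied verbatim from the response above; the parsing code itself is illustrative, not the dashboard's actual implementation:

```python
import json
from collections import Counter

# Abbreviated copy of the raw LLM response shown above (first two records).
raw = '''[
  {"id": "ytc_UgxadBcbeNJDPgdHr514AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugzp8FjcXGaq8EpFDCV4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"}
]'''

records = json.loads(raw)

# Tally how often each value appears per dimension; a missing key
# falls back to "unclear", matching the table's default.
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    counts = Counter(r.get(dim, "unclear") for r in records)
    print(dim, dict(counts))
```

A malformed close (such as the stray `)` the model sometimes emits in place of `]`) would raise `json.JSONDecodeError` here, which is one reason to validate the raw response before coding results are stored.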