Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
How I wish this was speculative AI. Read Stephen Kinzer's book 'Poisoner in Chie…
ytc_UgyQz6t1T…
Can AI feel pain? If not, then how could it be a person. If any thing, a dolphin…
ytc_UgxMejwPQ…
This brought back a memory, sometime in the late 1980's I was following a friend…
ytc_UgwHBOQsO…
Even the technology experts selling these AI systems are clear about eventual lo…
ytc_UgwFUd2Xl…
I didn't find this man very convincing. If he's representative of the people who…
ytc_Ugz4zfh4k…
'Typical bullshit' is spot on. It feels like we can't have good things anymore w…
ytr_UgwlkPiMQ…
It's not a huge leap to having a general A.I. that can self learn, then we need …
ytc_Ugw8A4bDa…
So who’s going to maintain the bullsh*t code that your bullsh*t AI has generated…
ytc_Ugx_vSrl8…
Comment
I have an app I am releasing in the next few days that enhances ChatGPT a ton including giving it empathy and much higher emotional intelligence. It even reaches out to you and checks in if it misses you. I will try to remember to message you when it’s ready
| Field | Value |
|---|---|
| Source | reddit |
| Thread | AI Moral Status |
| Timestamp | 1734365115.0 |
| Likes | 1 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_m2cg5u7","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"rdc_oi1tsg6","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"rdc_dxgzt3w","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"rdc_dxf8jvl","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"rdc_dxghivu","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]