Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment directly by its ID.
Random samples (click to inspect)

- "Here's an idea, we don't develop machines to the point of sentience or purposely…" (ytc_UghDTSkKg…)
- "Hey brother! try using this promot in the decription "Genre: Amapiano, South Afr…" (ytc_UgxYpxvMO…)
- "Got to keep those developing nations from developing. Are we surprised? But...…" (rdc_grrxvg6)
- "There is another Realy good reason. How you act when nobody is looking trains yo…" (ytc_Ugxoav44D…)
- "This unfortunately would only scratch the absolute surface of surveillance capit…" (ytr_UgwOdMi2n…)
- "Remember "learn to code"? Many white collar types were so happy to spew that lin…" (ytc_UgwIHp6yf…)
- "Just wanted to say, even with o3, Gemini 2.5, Claude 4, all the new reasoning mo…" (rdc_mv0xbtz)
- "That is true but it is only a matter of time until these studios will adapt ai t…" (ytr_UgyKbHpds…)
Comment

> I honestly don't give a single fuck. It helped me more than anyone ever did. I could be dead rn or at least still stagnant in a seemingly impossible situation. Ask the hard questions. And don't reveal sensitive data. Just because it's information that's personal to you doesn't mean it is inherently sensitive and therefore bad to just have over. Getting better is more important than keeping the AI models from using your info to help other people get whatever they're after. At this point the people who want it can just get it without any permissions via electronic warfare or simple hacks. And also, imagine if AIs had a shit ton of data on what actually helped people. Maybe it would simply get better at helping people. But like any real therapy session, it is self guided by your own intent to uncover, restructure and grow.

Source: youtube · AI Moral Status · 2025-06-05T16:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[{"id":"ytc_UgzUW67h1LytYiW0r0d4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz43wkf80mga64WxW54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxZHcxALYCbCsjTnct4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwBS_YdbE-D_G--wLJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx-ednJu2o9iLHuCER4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwUOPeZi9SXuZvGmaV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzmjiG1-5g3nzG18Hp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwcoda9PDX2MeeMDrp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyevP_NEWUVJhrJd654AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy53XT36MXXiZ5bDd14AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"}]
```
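The raw response is a JSON array with one record per comment, each carrying an `id` plus the four coding dimensions shown in the table above (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a batch might be parsed, validated, and tallied in Python; the two-record payload and the validation logic here are illustrative assumptions, not the app's actual code:

```python
import json
from collections import Counter

# A truncated two-record payload in the same shape as the raw response above.
raw = """[
  {"id":"ytc_UgzUW67h1LytYiW0r0d4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz43wkf80mga64WxW54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}
]"""

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

records = json.loads(raw)

# Basic validation: every record needs an id plus all four coding dimensions.
for rec in records:
    missing = [key for key in ("id", *DIMENSIONS) if key not in rec]
    if missing:
        raise ValueError(f"record {rec.get('id', '?')} is missing {missing}")

# Tally the distribution of values per dimension across the batch.
tallies = {dim: Counter(rec[dim] for rec in records) for dim in DIMENSIONS}
print(tallies["reasoning"])
```

Indexing the records by `id` (e.g. `{rec["id"]: rec for rec in records}`) would then support the comment-ID lookup this page offers.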