Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Short and sweet always helps, this indeed was nice appetizer!!
Hoping for furth…
ytc_UgzybQpUC…
One of the main reasons AI can't replace software engineers is that English is n…
ytc_UgzuMcGI7…
You've given poor old ChatGPT a Hobson's choice, or perhaps double bind would be…
ytc_Ugwe2g1ie…
Case and point - The "Will smith eating spaghetti" video's progression over time…
ytc_Ugz-pITZY…
It’s probably an issue that I made an AI bot of my crush since I can’t actually …
ytc_UgxC-SSCw…
Thank you for staying optimistic. People don't realize how hard that is, and I'v…
ytc_UgwcC_wLd…
as long as it stays limited to my T.V... it's just infotainment... until AI is t…
ytc_UgyJTygpZ…
Are we going to ignore the fact AI needs a lot of energy to even function, and h…
ytc_Ugw3pjEen…
Comment
Everyone is concerned with these companies selling or sharing their data inappropriately.
I think the *real* issue is that they likely won't share their data *about you* ... *with you*.
At least in my experience *that* seems to be how both the existing medical system *and* this system of the "future" totally fucks up HIPAA. Failing to properly communicate a diagnosis.. failing to provide my *complete* medical history and *all* of the data they have on me is something that happens *frequently*.
If you're marked as a "frequent flyer" or "drug seeker" in a medical system, do you think they share that with you? They pull data on you from various state and federal databases when they prescribe certain drugs now.. which are basically "black boxes" ... do they share *any* of that? That they even used one of these databases or "methods" to determine your treatment?
*fuck no*.
Seems like it might get a lot worse with folks like google, as the data analysis ... and stuff like machine learning will complicate the nature of the data they have on you and *how* they can even communicate it all to you.. etc. So now not only will they refuse to communicate stuff that you might not "like", they'll fail to communicate stuff that is essential to your health...
... and not necessarily because they can't, but because it's "hard".
| Field | Value |
|---|---|
| Source | reddit |
| Topic | AI Bias |
| Timestamp | 1574930392.0 |
| ♥ | 1 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_nk48jjs","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"rdc_f8xreyu","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"rdc_erpg6xh","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"rdc_ewm3x6p","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"rdc_ewmk9tz","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```