Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- I asked an A.I. ( DuckAssist ), "What would Lao Tzu say about Cloud Fiefs?" Acc… (ytc_Ugylld23Q…)
- We need to normalize calling people out when they say "ai art" instead of "image… (ytc_UgwACAYFM…)
- 1:50 so you don’t have to put in any prompts to make it work? Same with a paintb… (ytc_UgzcA4Ou3…)
- There is no possible way to prove anything or anyone is conscious. It is somethi… (ytc_UgzixLgUH…)
- This guy doesn’t seem very smart. If you just now realize that AI could grow sma… (ytc_UgwbnpuhX…)
- I do not like AI taking artists jobs, but at the same time I would have never pa… (ytc_UgzXmQBwm…)
- @jopiorienteering how were the first versions of tractors bad? Farmers are stil… (ytr_UgzLXXc8F…)
- The threat isn't just to the risk of mundane intellectual labor. Consider a phys… (ytc_Ugzeumjd0…)
Comment
Hey there, actual machine learning researcher here who does this as a job. That is not how these models work in the slightest. They do not have any form of access across chats, they don't live-update their information, and they have absolutely no capability of pulling needle-like information from the monumental haystack of training data that they ingest, unless that information is repeated thousands upon thousands of times or approximated millions of times.
What I'm saying is, your conversations are not interesting enough for ChatGPT to be retrained on. And even if they were retrained on them, these models have absolutely zero capacity to see a string of characters one time out of trillions and trillions of strings of characters and then perfectly replicate and regurgitate it.
Let's assume that the average conversation with ChatGPT in a therapy session is 4,000 words. With an average text encoder, and assuming the average word length in general English is about six letters, that works out to roughly 9,000 tokens of information.
For reference, free open-source text generation models that are nowhere even remotely close to the capabilities of GPT-4o have been trained on over 20 TRILLION tokens of data. And through all of that, they cannot recall any of that information accurately unless it has, again, either been stated directly thousands of times or abstractly millions of times.
On a side note, I do 100% agree that you should not seek out therapy with models like ChatGPT. That is not what a general-purpose "please the user" model is designed for. They sense patterns, they reinforce things that you like, and they bend their output to try and give you maximal happiness.
There are other types of models that are researched and developed specifically for processes like this; however, those are typically not available to the public, because they're generally created for marginalized groups of people who have a high propensity of needing therapy specialized to the specific issues they've had.
For example, a friend of mine is currently working on a potential project with a governmental body to create a large language model trained specifically for helping veterans with post-service depression, PTSD, and all the issues that come along with that, because general "please the user" models just aren't capable of having deeper conversations and breaking through barriers of repetitious feedback loops.
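The commenter's back-of-envelope token arithmetic (4,000 words, ~6 letters per word, roughly 9,000 tokens) can be sketched as a simple heuristic. The function name and the characters-per-token ratio below are assumptions for illustration, not the output of any real tokenizer:

```python
# Rough token-count estimate for English text, using the common
# heuristic of roughly 3 characters per token. This is a ballpark
# sketch, not a real tokenizer.

def estimate_tokens(num_words: int, avg_word_len: float = 6.0,
                    chars_per_token: float = 3.0) -> int:
    """Estimate a token count from a word count.

    avg_word_len counts letters only; one extra character per word
    is added for the separating space.
    """
    total_chars = num_words * (avg_word_len + 1)
    return round(total_chars / chars_per_token)

# A 4,000-word session lands in the same ballpark as the
# commenter's ~9,000-token figure.
print(estimate_tokens(4000))  # prints 9333
```

With ~4 characters per token (another common rule of thumb) the estimate drops to about 7,000, so any such figure is order-of-magnitude at best.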
youtube · "AI Moral Status" · 2024-12-10T18:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | industry_self |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz6NNRaMlcvIqPbO7p4AaABAg","responsibility":"company","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgwptqTOkHktLNz_WbJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz02072NhJL5ne6W4t4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyFTFepcPNNMsm-3HN4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugyd6jrHGo6HWQgauut4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgzlpiZIQC3s9dOMhit4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
  {"id":"ytc_Ugxf6OqdBkjzQrn0CBh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugx3xFahKWlCiUpgmKF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxBscslS2KlWyYgHxh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzkGO5gZD_nQ-3BC9d4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
```
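Since the raw batch response is plain JSON, recovering one comment's "Coding Result" row is just a matter of parsing the array and indexing by comment ID. The function and variable names below are hypothetical; the field names and the sample entry are copied from the response above:

```python
import json

# Same shape as the raw LLM response above: a JSON array of
# per-comment coding objects, each keyed by a comment ID.
raw_response = '''
[
  {"id":"ytc_UgzkGO5gZD_nQ-3BC9d4AaABAg","responsibility":"government",
   "reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
'''

def index_codings(raw: str) -> dict:
    """Parse a batch coding response and index it by comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_codings(raw_response)
row = codings["ytc_UgzkGO5gZD_nQ-3BC9d4AaABAg"]
print(row["responsibility"], row["policy"])  # prints: government regulate
```

Indexing by ID up front makes the per-comment lookup shown at the top of this page an O(1) dictionary access rather than a scan of the array.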