Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Robots computers and he like are man made. They have parts that a machine makes,…" (ytc_Ugwd6b6ry…)
- "Honestly if i posted an ai generated thing and Sheldon calls me out, i'm dipping…" (ytc_UgyC7VISx…)
- "@craven5328 Wow. That is a very powerful quote and it truly underscores all of …" (ytr_UgyPbFDaZ…)
- "I thought this was gonna be about how saying please and thank you costs lots of …" (ytc_Ugxw6FmlT…)
- "This is the problem in society today. All the socialist hippies of the 1960's an…" (ytc_UgxI3g4Qv…)
- "You can’t have a train line for every single possible location you want to go. T…" (ytr_Ugw7K3CAB…)
- "Everyone needs to protest ai before it takes all the jobs because only the rich …" (ytc_Ugz6B3nkl…)
- "In order for self driving cars to be practical, the entire road system for both …" (ytc_Ugzbaqcv6…)
Comment
Small note. It’s not that chatbots change throughout the day, it’s that they use something we call “temperature” that makes them nondeterministic. Temperature is just a knob that tells the model how “loose” it can get when picking the next token.

At temperature 0, it always chooses the highest-probability token, so you get the same output every time. As you turn the temperature up, you’re basically saying “sample from the distribution instead of locking into the top choice.” That introduces randomness. Same prompt, same model, but now the model is allowed to pick from a wider set of plausible next tokens, so the output can diverge run to run.

But we can’t use a zero temperature and have deterministic output because multiple tokens can have the same probabilities, so some level of randomness is needed. On top of this, ChatGPT cranks the temperature to make it more “engaging.” That’s why you can get drastically different responses from one day to the next. It’s literally impossible to get consistency because of the way LLMs work.
youtube · AI Harm Incident · 2025-11-25T01:4… · ♥ 2
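The comment above is a fair plain-language description of temperature sampling. As a concrete illustration, here is a minimal Python sketch contrasting greedy decoding (temperature 0) with temperature-scaled softmax sampling; the toy vocabulary, scores, and function name are invented for illustration and are not from any real model's API.

```python
# Minimal sketch of the "temperature" knob the comment describes.
# The vocabulary and scores below are toy values, not real model output.
import math
import random

def sample_token(logits: dict[str, float], temperature: float) -> str:
    """Pick the next token from unnormalized log-scores ("logits")."""
    if temperature == 0.0:
        # Greedy decoding: always take the highest-scoring token.
        return max(logits, key=logits.get)
    # Temperature scaling: divide logits by T before the softmax.
    # T < 1 sharpens the distribution; T > 1 flattens it.
    scaled = {tok: score / temperature for tok, score in logits.items()}
    m = max(scaled.values())  # subtract the max for numerical stability
    weights = {tok: math.exp(s - m) for tok, s in scaled.items()}
    total = sum(weights.values())
    r = random.random() * total
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # fallback for floating-point edge cases

logits = {"the": 2.0, "a": 1.5, "banana": -1.0}
print(sample_token(logits, temperature=0.0))  # always "the"
print(sample_token(logits, temperature=1.2))  # varies run to run
```

Run repeatedly, the temperature-1.2 call can return different tokens while the temperature-0 call never does, which is exactly the run-to-run divergence the comment describes.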
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytc_Ugz0X_pU25HV0uk0nll4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx-GBiwAUkWZPna4GF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx6O7TtGgUVp4wGlzV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugxchg2wxE5NLp51Tyd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgydGQ7P8DqWViDaxRx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwIbCx562M6FGUQxOV4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz6XJbPQViSp7XD-nt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzMHsXid_dH0wj0IKd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugy8UeAey9x13ZwC5Jp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugz7On4FtbrQBinPKsB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}]