Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- Judging by your voice, it was a very good decision to skip those episodes. Get w… (ytc_UgzgqLtQK…)
- @revyume Sorry if my comment is confusing, but I'm talking to the AI creator gu… (ytr_UgzESEOMw…)
- I don't think people realize how much energy is actually used by AI. This conver… (ytc_UgyHk5Y4i…)
- Thanks for sharing your thoughts! It's interesting to see how Sophia's portrayal… (ytr_UgyH46iAk…)
- Wasnt waymo .but was the other automated car like human too ? Draggin the lady ?… (ytc_UgyxPzqxA…)
- Ok I'm gonna tell you about ai psychosis I've ones give a comment on peter teil … (ytc_UgyN1Mxze…)
- If AI can help people to detect dangerous or even search new resources, that wil… (ytc_UgxVsuAEP…)
- I feex all yah AI problem, eh. Pay me the 1 meeellions. Okay, is no problem. Put… (ytc_Ugz5jbuha…)
Comment

> It entirely depends on how complex and accurate the results need to be. If you're trying to get the LLM to give an accurate reponse for a complex task, do not say please, do no try to be nice. Chain of though as come a long way in LLMs, but as of today, the quality of your prompt, will still have a big impact on the quality of your response. The LLMs don't truly understand natural language, it's all about computing the next token (word) with the highest probability, so saying please and thank you isn't useful at all, if you want it to assume some role, like of a kind assistant, just tell it to do that.

Platform: youtube
Video: AI Moral Status
Posted: 2025-05-26T23:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxmUk6KFi58ORyJNut4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwQDRMdgebmSHzPfVN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwihhZMQhL18fj4CcB4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxz3ynfVb0w0yE1iVR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy5wwYcyRS_ldDAKcR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzPLIp6-728xUc4ADd4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw30AT9q1EUEEh5m9F4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwDiah4Rqd602EpnOp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgymTCgdzRcNpepE7Fp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxs2LwyPj-YN9_hXc14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```
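The raw model output is a JSON array, one object per comment, with four coding dimensions. A minimal sketch of how such a response could be parsed and indexed by comment ID follows; the shape is taken from the sample above, but the function name `index_by_id` and the abbreviated two-row payload are illustrative assumptions, not part of the actual tool.

```python
import json

# Abbreviated sample payload, matching the shape of the raw response above
# (only two rows kept here for illustration).
raw = '''[
  {"id":"ytc_UgxmUk6KFi58ORyJNut4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxs2LwyPj-YN9_hXc14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]'''

# The four coding dimensions shown in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(response_text):
    """Parse a raw coding response and map comment ID -> dimension values."""
    rows = json.loads(response_text)
    return {row["id"]: {d: row.get(d) for d in DIMENSIONS} for row in rows}

coded = index_by_id(raw)
print(coded["ytc_Ugxs2LwyPj-YN9_hXc14AaABAg"]["reasoning"])  # consequentialist
```

Indexing by ID is what makes the "look up by comment ID" view above cheap: each lookup is a single dictionary access rather than a scan over the array.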