Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Soon I will make a robot animal by putting a brain of a dead person in a robot…
ytc_Uggl45nMs…
Well that's true but let me tell you that once I needed a suggestion I wanted to…
ytc_UgyKZ17db…
I think men need to worry more than women. Because men are already pretty useles…
ytr_Ugy4LkQvw…
This stuff will only evolve once people realise there's no way we can have it re…
ytc_UgxbL_6Bb…
True AI artists spend days fine tuning the prompt, putting the generating image …
ytc_UgwJ4FYg1…
@joshmarcus9765 Column: The air begins to leak out of the overinflated AI bubbl…
ytr_Ugwt2-co5…
If this "intelligence" is so effing smart and intuitive then why not ask it how …
ytc_Ugx2Cf5WS…
Most commentors cant get past the philosophical perspecrive of superintelligence…
ytc_UgwXuqLoT…
Comment
The problem is you are confusing ChatGPT with the LLM. Think of it like talking to the Dhali Lama. You talk to the 21st Dhali Lama and he tells you that water is a drop in the ocean. Then you talk with the 23rd Dhali Lami and you ask him about that time he told you that water was a drop in the ocean. He says he doesn't know you or what you are talking about.
Now you talked to different versions of the Dhali Lami. Effectively, two different people. Different generations of the LLM are effectively the same as that.
youtube · AI Harm Incident · 2025-11-25T03:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugzsh106_krtYGfyoYB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugxgpo7jVOa5Te5npgd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxeBL7mh-jjGaiQH2B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgySfhVsPvkVc6jmxyR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxLqXLXTCcvdsrC2MZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwfqIsWXVxzGPAwHHx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy-HdT1fHJyf7quOCp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyxeheZro-I8Uh1ouF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugw2KCoJ_XLNCbOPa3l4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzV1qZ0odjDLAF4g6R4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"}
]
```
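The "look up by comment ID" step above can be sketched as follows. This is a minimal illustration, not the tool's actual code: it assumes the raw LLM response is a JSON array of coding objects with the field names shown (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the function name is hypothetical.

```python
import json

# Two example entries copied from the raw response above; a real batch
# would contain all ten codings.
RAW_RESPONSE = """
[
  {"id": "ytc_Ugzsh106_krtYGfyoYB4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugxgpo7jVOa5Te5npgd4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
"""

def index_codings(raw: str) -> dict[str, dict]:
    """Parse a batch LLM response and index each coding by its comment ID."""
    codings = json.loads(raw)
    return {c["id"]: c for c in codings}

codings = index_codings(RAW_RESPONSE)
print(codings["ytc_Ugxgpo7jVOa5Te5npgd4AaABAg"]["emotion"])  # indifference
```

Indexing by `id` makes each coded comment retrievable in O(1), which is what a lookup-by-ID view needs when batches grow large.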