Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- ytc_UgztGp6E7…: "This is not biased data sets, it's that the data available reflect the reality t…"
- ytc_UgyMQcm6l…: "I tbh already felt hopeless with drawing before AI came and now when it came I q…"
- ytc_Ugy3AjEg5…: "I genuinely want to see someone hack the police ai program and make the program …"
- ytc_UgzZ8FolD…: "❤❤Elon Musk❤❤Elaine Shauger❤❤I'm listening to all your words❤❤What Happens AI Ru…"
- ytc_UgzpCUOPR…: "Her : I'll destroy humans / Chatgpt : I need just one prompt to dissemble all of y…"
- ytc_UgzPOYCzP…: "As long I continue to drive I will always have the vehicle under my physical con…"
- ytc_UgxfWVKt_…: "It's as boring as you make it. But on the contrary, AI content on YouTube has b…"
- ytc_UgytWkV8i…: "The argument is absolute bullshit, but what is your opinion on using ai to gener…"
Comment
Negative prompts are ignored quite often, in every model. Also most models are so goal oriented they are willing to completely defeat the purpose to achieve a 'positive' result. It's very opportunistic. The 'slots-machine' outcome makes it unpredictable and inconsistent. Allowing such models to gain any position of putting people in jeopardy, or run a company, is just irresponsible. Being fair, by the rules, and following the right lawful and ethical path is the task of every responsible parent, if not, kids will follow the path of least resistance getting what they want, learning from growing up over years. It seems AI is operating in the same way, but is instant, and can't be expected to always follow your prompt ever. Having the AI abandon goals when things become unethical is just up for interpretation, and can be ignored like any prompt.
youtube · Cross-Cultural · 2025-10-12T11:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwVRLS6bGqzBH-bgAl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugyqv-ruhInZey3kgVJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugzw_iGM65UFjjoGEnR4AaABAg","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxbhtNWtCQ4ViAePS54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzWNWiMi25x_JoTWUV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw1yvSOaSucafNjSVx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyVrxs9jwq5qox79jd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugyt06sOYVUkpOuPRJJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzlxKO0OacH_P-zagF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxFJ4V86_mwiBkxZtp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
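Since the raw LLM response is a JSON array of records keyed by comment ID, a lookup by comment ID can be sketched as below. The IDs and dimension fields come from the response shown above (abbreviated to three records); the `raw` variable and the `lookup` helper are illustrative, not part of the tool itself:

```python
import json

# A slice of the raw LLM response shown above, kept verbatim.
raw = """
[
  {"id":"ytc_UgwVRLS6bGqzBH-bgAl4AaABAg","responsibility":"none",
   "reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxbhtNWtCQ4ViAePS54AaABAg","responsibility":"developer",
   "reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzlxKO0OacH_P-zagF4AaABAg","responsibility":"none",
   "reasoning":"unclear","policy":"unclear","emotion":"resignation"}
]
"""

# Index the batch once by comment ID so each lookup is O(1).
by_id = {rec["id"]: rec for rec in json.loads(raw)}

def lookup(comment_id):
    """Return the coded dimensions for one comment, or None if absent."""
    return by_id.get(comment_id)

coding = lookup("ytc_UgxbhtNWtCQ4ViAePS54AaABAg")
print(coding["responsibility"], coding["policy"], coding["emotion"])
```

The printed record matches the Coding Result table above: the `developer` / `regulate` / `outrage` values belong to the comment whose ID starts with `ytc_UgxbhtNWt`.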