Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- Idiots: "Hey, AI, should I do <obviously unhinged activity/lifestyle change>? Ju… (ytr_Ugylj7j_K…)
- Ask Russia if they will put a disclaimer on an AI system when they use it on Eur… (ytc_UgzA-wFMa…)
- Yo i dug deeper and omg this explains why they tried to shut chatgpt down and t… (ytc_Ugwzwkr6b…)
- While I'm more neutral on the "Is drawings created by A.I. art or not?" thing, I… (ytc_Ugxo2fYAO…)
- If we continue to allow AI to take jobs then these Trillion Dollar companies wil… (ytc_UgxsJM6y_…)
- Anyone else happy that along with the calling out of this delusion, YouTube has … (ytc_UgxDWZcL_…)
- (Just FYI, an AI doesn't have to be conscious to destroy the world by trying to … (ytc_Ugy_cbM_S…)
- They say "just get a regular job" / What? The jobs where you bragged that AI woul… (ytr_UgwUugbF2…)
Comment (youtube · 2024-06-04T09:5… · ♥ 2)

The constant AI use of 'balancing' is not just irritating but often contradictory. My experience with AI is that it acts as a confidence trickester by initially plucking out some key words you use and saying the point is valid, but then tries to bury it amongst a long woke script. For example when you push AI about the cause of inequities it doesn't like to talk about innate factors like biology and culture, and relentlessly wants to blather on about bias, discrimination, oppression, history, diversity, it is important to.....
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugz2_tKSgCZmUcBytyl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgzwiScqDPn83Gj8xEl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzVuZTVxNf9RtBkVyN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxNzIGPOasV7xN92zp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwzZFKkRQS850PJ44F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzu5v85dXs83GW4pch4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwYJjcMX9spuhjgZzZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxO4Ix92S9mWubHaM14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxhhsigS4ok0uRReZx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwY7wAb8HapdH_tvzd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
```
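A response like the one above has to be parsed and checked before its rows are stored, since an LLM can emit malformed JSON or labels outside the codebook. Below is a minimal validation sketch; the allowed value sets are inferred only from the labels visible in this sample (the real codebook may contain more categories), and the `validate_batch` helper is hypothetical, not part of the tool shown here.

```python
import json

# Allowed labels per coding dimension, inferred from the sample output above.
# ASSUMPTION: the actual codebook may define additional categories.
SCHEMA = {
    "responsibility": {"none", "ai_itself", "developer", "user"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"none", "liability"},
    "emotion": {"approval", "outrage", "fear", "indifference", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose ID looks like a
    YouTube comment/reply ID and whose labels are all in the schema."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # IDs in the samples start with "ytc_" (comments) or "ytr_" (replies).
        if not str(row.get("id", "")).startswith(("ytc_", "ytr_")):
            continue
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(row)
    return valid
```

Dropping bad rows rather than failing the whole batch lets the coder re-queue only the rejected comments on a retry pass.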