Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytr_UgxeZUoJl…`: "Yeah id take a horrible drawing over yalls Ai creations cause it aint even art p…"
- `ytc_UgxQUQiif…`: "When an AI fulfills the requirements for consciousness, then it will be consciou…"
- `rdc_o7onl0p`: "The more we put LLMs in charge of things, the more they are going to shape reali…"
- `ytc_UgwkjcfQ3…`: "Self-driving technology is just a covert military program. The armed forces need…"
- `ytc_UgzwWTNW8…`: "Is your AI project classified as High-Risk under the new EU rules? Let me know i…"
- `ytc_Ugzr6vMM-…`: "Y'know what Some guy called me a seagull for calling it slop but i literally sm…"
- `ytr_UgwyiEZzG…`: "@AlexSendokai2026 ok I know that but that’s still using someone’s art w/o permi…"
- `ytc_UgwBpFmen…`: "Like that bullcrap with ai saying we will be there pets or a war its all human d…"
Comment
For some controversial topics, you can ask ChatGPT a question and it gives a certain answer. However, if you ask specific stats about that answer, it will eventually admit it lied and that the initial answer was completely false. It typically does this to answer in a politically correct way.
For example, if you ask ChatGPT, "Are homosexual men more likely to be child molesters than straight men?"
The answer I got was "No, homosexual men are not more likely to be child molesters than heterosexual men.
This misconception has been thoroughly debunked by extensive psychological, criminological, and epidemiological research."
But when I asked for specific data, it said that:
Girls abused by men: ~4.0M × 82% ≈ 3.3 million
Girls abused by women: ~4.0M × 9% ≈ 360,000
Boys abused by men: ~1.9M × 82% ≈ 1.56 million
Boys abused by women: ~1.9M × 9% ≈ 171,000
Rates of child molestation per 100k citizens:
Men (homosexual) --> 30,000
Men (heterosexual) --> 2,590
Women (heterosexual) --> 135
Women (homosexual ) --> 10,000
As you can see, both homosexual men (and women) are far more likely to molest children than heterosexual men or women.
Source: youtube · AI Bias · 2025-06-10T18:1… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxtVA8YaE8wE1TycrN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzuNgQALf_Zxpm5p9B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgznpZPeZE3QJct0aLh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz-9pRt6jjgT4s0QtZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxR-YC0iHHegyVC_FF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzxDB6OCDT1OdVNrlp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzmCpUFqXVtrzICz-x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwJ4RcIg5hn-kcVCzF4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx-IXG6F398mPHVbsV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugy-nntADYY4ErR0psl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"}
]
```
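A batch response in this shape can be parsed and indexed to support the comment-ID lookup above. This is a minimal sketch: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the records shown, but the helper name `parse_codings` and the validation step are illustrative assumptions, not the tool's actual code.

```python
import json

# Two records copied from the raw batch response above (abbreviated for the example).
raw = '''[
{"id":"ytc_UgxtVA8YaE8wE1TycrN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy-nntADYY4ErR0psl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"}
]'''

# The five dimensions every coded record is expected to carry.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(payload: str) -> dict:
    """Parse a raw LLM batch response into a lookup table keyed by comment ID."""
    records = json.loads(payload)
    by_id = {}
    for rec in records:
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} is missing fields: {missing}")
        by_id[rec["id"]] = rec
    return by_id

codings = parse_codings(raw)
print(codings["ytc_Ugy-nntADYY4ErR0psl4AaABAg"]["emotion"])  # mixed
```

Keying by the full comment ID is what makes the truncated IDs in the sample list resolvable: the prefix shown in the UI maps back to one complete ID in this table.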