Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
For some controversial topics, you can ask ChatGPT a question and it gives a certain answer. However, if you ask specific stats about that answer, it will eventually admit it lied and that the initial answer was completely false. It typically does this to answer in a politically correct way. For example, if you ask ChatGPT, "Are homosexual men more likely to be child molesters than straight men?" The answer I got was "No, homosexual men are not more likely to be child molesters than heterosexual men. This misconception has been thoroughly debunked by extensive psychological, criminological, and epidemiological research." But when I asked for specific data, it said that:

Girls abused by men: ~4.0M × 82% ≈ 3.3 million
Girls abused by women: ~4.0M × 9% ≈ 360,000
Boys abused by men: ~1.9M × 82% ≈ 1.56 million
Boys abused by women: ~1.9M × 9% ≈ 171,000

Rates of child molestation per 100k citizens:
Men (homosexual) --> 30,000
Men (heterosexual) --> 2,590
Women (heterosexual) --> 135
Women (homosexual) --> 10,000

As you can see, both homosexual men (and women) are far more likely to molest children than heterosexual men or women.
youtube AI Bias 2025-06-10T18:1… ♥ 3
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          none
Emotion         outrage
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgxtVA8YaE8wE1TycrN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzuNgQALf_Zxpm5p9B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgznpZPeZE3QJct0aLh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz-9pRt6jjgT4s0QtZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxR-YC0iHHegyVC_FF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzxDB6OCDT1OdVNrlp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzmCpUFqXVtrzICz-x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwJ4RcIg5hn-kcVCzF4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx-IXG6F398mPHVbsV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugy-nntADYY4ErR0psl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"}
]
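The raw response above is a JSON array of per-comment codes, keyed by comment id, with one field per coding dimension (responsibility, reasoning, policy, emotion). A minimal sketch of parsing such a response and looking up one comment's coding, using two entries copied from the array above (the truncation to two entries is for brevity only):

```python
import json

# Raw LLM response: a JSON array of per-comment codes
# (two entries reproduced from the full array above).
raw = """[
  {"id":"ytc_UgxtVA8YaE8wE1TycrN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy-nntADYY4ErR0psl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"}
]"""

# Index the codes by comment id for O(1) lookup.
codes = {entry["id"]: entry for entry in json.loads(raw)}

# Retrieve the coded dimensions for the comment shown on this page.
code = codes["ytc_UgxtVA8YaE8wE1TycrN4AaABAg"]
print(code["responsibility"], code["emotion"])  # developer outrage
```

This is how the "Coding Result" table for a single comment relates back to the batch response: each table row is one field of the matching JSON object.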