Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
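The lookup box and the random-sample list below need little more than an ID-keyed store: a direct fetch for the search box, and a random draw for the samples. A minimal sketch in Python, assuming the coded comments are held in an in-memory dict; the store and function names are illustrative, not the tool's actual API:

```python
import random

# Hypothetical in-memory store: comment ID -> record with text and codes.
comments: dict[str, dict] = {}

def lookup(comment_id: str) -> dict | None:
    """Fetch one coded comment by its ID, e.g. 'rdc_n7ty6dt'."""
    return comments.get(comment_id)

def random_samples(n: int = 8) -> list[dict]:
    """Draw n coded comments for the random-samples list."""
    ids = random.sample(list(comments), k=min(n, len(comments)))
    return [comments[i] for i in ids]
```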
Random samples — click to inspect
- `ytc_UgyJdOwGm…`: "Bro has enough money afford a self driving car and found something to complain a…"
- `ytr_UgydGQ7P8…`: "ChatGPT is not in the same state between conversations. It does not learn as a h…"
- `rdc_n7ty6dt`: "Some questions, and I’m honestly curious: What kinds of properties would an agi…"
- `rdc_oi290e3`: "Other countries need to do their job to not provide regulatory approval of movie…"
- `rdc_fn5es12`: "What do regular Canadians think of Trudeau’s Covid policies, never got a read if…"
- `ytc_UgwplBmvo…`: "From Google Gemini (2025): The difference between the film Slaughterbots and ou…"
- `ytc_UgyBvtbnW…`: "Mother spoke the truth beginning to end well educated decent family not a broken…"
- `ytc_Ugyxv2psa…`: "OpenAI CEO is something like a dictator in the field of AI and this person carri…"
Comment
Bias in the Machine: The Inheritance of Inequality
At first glance, AI systems may appear neutral, even objective. After all, they rely on data and logic—surely a computer can’t be racist, sexist, or discriminatory. But in reality, AI systems often reflect the biases of their human creators and the data they’re trained on. The myth of AI impartiality is one of the most dangerous misconceptions of the digital age.
AI systems learn from data—massive datasets gathered from the real world. But the real world is messy and unjust. Historical data often includes the imprints of social inequity: discriminatory hiring practices, policing patterns influenced by racial profiling, gender disparities in income and healthcare. When AI learns from this data, it doesn’t just learn facts—it learns patterns, and those patterns can encode systemic bias.
youtube · AI Governance · 2025-10-03T10:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
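For downstream analysis, each row of this table could travel as one typed record. A minimal sketch, with field names taken from the table above and values from the matching entry in the raw response below; the class itself is illustrative, not part of the tool:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class CodingResult:
    """One coded comment, mirroring the dimensions in the table above."""
    comment_id: str       # e.g. "ytc_UgwU8sFWJQe3FuRsADF4AaABAg"
    responsibility: str   # who is held responsible, e.g. "developer"
    reasoning: str        # moral-reasoning style, e.g. "deontological"
    policy: str           # preferred policy response, e.g. "liability"
    emotion: str          # dominant emotion, e.g. "outrage"
    coded_at: datetime    # when the coding was recorded

result = CodingResult(
    comment_id="ytc_UgwU8sFWJQe3FuRsADF4AaABAg",
    responsibility="developer",
    reasoning="deontological",
    policy="liability",
    emotion="outrage",
    coded_at=datetime.fromisoformat("2026-04-27T06:24:59.937377"),
)
```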
Raw LLM Response
```json
[
{"id":"ytc_Ugyov9ToiRlge25Zd7N4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwU8sFWJQe3FuRsADF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyTRIgEFbBckisPcxx4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwjvlaqHqjBhj470pJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgzQ8TiI6_2BNii7tBJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgyqQr9BKByFFrhGzI54AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzXZhLI0v_1pg3NG754AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwIjUqtuIlebIlM8GN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzqWn1mGZVjrG6lYi54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxjn6_n_AWffOR8Tq14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
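A raw response like the one above is a plain JSON array, so it can be parsed and sanity-checked before the codes enter the dataset. A minimal sketch, with the allowed value sets inferred only from the codes visible in this sample (the real codebook may be larger):

```python
import json

# Allowed values as observed in this sample; the full codebook may differ.
ALLOWED = {
    "responsibility": {"government", "developer", "company", "user",
                       "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "contractualist",
                  "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "ban",
               "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def parse_batch(raw: str) -> dict[str, dict]:
    """Parse one raw LLM response and index the codes by comment ID."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim} value {rec.get(dim)!r}")
        coded[rec["id"]] = rec
    return coded

# Usage: look up the exact codes the model emitted for one comment.
# coded = parse_batch(raw_response_text)
# print(coded["ytc_UgwU8sFWJQe3FuRsADF4AaABAg"]["emotion"])  # -> "outrage"
```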