Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or browse the random samples below.
- @partlyawesome Robotaxi has been operating with no real issues. FSD is not ready… (ytr_UgyiLqvzU…)
- 2025 is the year A.I. started to compete with humanity for resources. It's not e… (ytc_UgwIkBcKn…)
- I’ve tried a bunch of the ai they make so many mistakes it’s not even worth it t… (ytc_Ugwvo1tr9…)
- @killerblunt420 well yes obviously ChatGPT isn’t conscious. The way it replies i… (ytr_UgyRG2mC8…)
- Greatly appreciate that you took the time to answer that kind of "argument," in … (ytc_UgyoOcHpZ…)
- It is my personal opinion that artificial intelligence will never become conscio… (ytc_UgxeparfX…)
- @BeastJuanGaming Yes and he's not the only one using that analogy; 'scouts' (as … (ytr_UgxVP508y…)
- Soon all of our household items will be built in a way that AI can easily repair… (ytc_Ugzn1H7iT…)
Comment
Haaha it goes the other way most of the time try Chatgpt tell it a black man and a white man did something it will deny you and give you a big disclaimer for the black man and nothing for the white man it's so ridiculous. Same goes for politics and alot of other stuff. It's called overcorrection ;) it's so obvious. A silicon valley company will feed the AI with data that has a leftist bias and will also correct the AI with that Bias. Especially when you look at how leftists and especially leftists tech companies feel like they have the right/duty to overreach
youtube · AI Bias · 2023-02-19T13:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgzqEvM9K4FDwKzJPWR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxLqY_kXPuBvQdbxtR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugyje3U87PlTcPphcCN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyxuweejXHlWaQjWml4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwvvtHX2XrAsIW8Mvl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwrY1dajmxz1_6OMwF4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgzBb9depRAFkd1ggsN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyYCDO_KILqhj2kNbp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzGNtQqhNpdAzirC694AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzisaa7Zotziw3ynAZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```
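The raw response is a JSON array of coded records keyed by comment ID, which is what makes lookup by comment ID possible. A minimal Python sketch of parsing such a response, validating each record, and indexing it by ID follows. The allowed value sets are inferred only from the sample records and the coding table above, not from the tool's actual codebook, so treat them as assumptions:

```python
import json

# Dimension values observed in the sample above; the real codebook
# may define more (this validator is a sketch, not the tool's schema).
ALLOWED = {
    "responsibility": {"company", "developer", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed"},
    "policy": {"none", "regulate"},
    "emotion": {"outrage", "fear", "mixed", "approval", "indifference"},
}

def index_responses(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments),
    validate every dimension, and index records by comment ID."""
    by_id = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
        by_id[rec["id"]] = rec
    return by_id

# One record copied verbatim from the raw response above.
raw = ('[{"id":"ytc_UgxLqY_kXPuBvQdbxtR4AaABAg","responsibility":"company",'
       '"reasoning":"deontological","policy":"none","emotion":"outrage"}]')
coded = index_responses(raw)
print(coded["ytc_UgxLqY_kXPuBvQdbxtR4AaABAg"]["emotion"])  # outrage
```

Raising on an unknown value, rather than silently keeping the record, surfaces cases where the model drifted outside the codebook and the comment needs recoding.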