Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "If there's really someone who think themselves a real artist just bc they used A…" (ytc_UgweOFB6P…)
- ""Can't teach old dogs new tricks" this was one thing that scared me growing up, …" (ytc_UgwhNjvg4…)
- "Yeah 7 to 10 years Siri will be smart enough to manage a bit of your groceries w…" (ytc_Ugy2Hm-5P…)
- "I'm actually surprised there's still so many humans involved in the process. Is …" (ytc_UgwONjV2J…)
- "The ai said we are the worst thing to happen to the planet, but we created ai.. …" (ytc_UgwfF3RTE…)
- "We are DOOMED. A economy were no one can afford anything except the super rich. …" (ytc_UgyNmML8B…)
- "1. Weak AI ( our current technology) 2. Strong AI- self aware AI, human level in…" (ytc_UgjXp-Uti…)
- "Tbh I was kind of expecting they might fall into doing something stupid involvin…" (ytr_UgwRt18eH…)
Comment
The problem is the implicit bias baked into the assumptions used in these types of systems. I’ve heard about one such algorithm designed to help distribute law enforcement resources “more equitably”. It ended up sending more cops to neighborhoods of color because that’s where racist cops had previously made all the arrests.
I’m fairly positive the sentencing algorithm Kevin describes here has the same sort of problem. I mean, in the wake of Treyvon Martin and George Floyd, is the average person of color going to say police treat them fairly?
youtube
2022-07-25T19:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugw3Ux14bSxSWzufSwt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwLXHQEaCQlkA5bXal4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyjTjFP7KC_mfFR4UJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwkvdgoQ3PiVWTmHih4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzKrirX1NZtFJlJ0MJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugzu5KB5QOXxJtZ5ZYB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyGOjJk-W77uEfnd1t4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzZqlnOzBBTAi2MLKZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyQnOyHLPs6tyhb5tZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwbKExRqpJT1QIPuKB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
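The "look up by comment ID" step above can be sketched in a few lines: parse the raw batch response as JSON and index each coding record by its `id`. This is a minimal sketch assuming the response is always a well-formed JSON array with the five fields shown; `index_by_id` is a hypothetical helper name, and the two records are copied from the example response above.

```python
import json

# Raw LLM response: a JSON array of per-comment codes. The two records
# here are taken from the example response shown above; the five-field
# schema (id, responsibility, reasoning, policy, emotion) is assumed
# from that example.
raw_response = """[
  {"id": "ytc_Ugw3Ux14bSxSWzufSwt4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyQnOyHLPs6tyhb5tZ4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]"""

def index_by_id(response_text: str) -> dict:
    """Parse a raw batch response and key each coding record by comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw_response)
print(codes["ytc_Ugw3Ux14bSxSWzufSwt4AaABAg"]["policy"])  # → regulate
```

In practice the parse step would also want to catch `json.JSONDecodeError`, since model output is not guaranteed to be valid JSON on every batch.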