Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up by comment ID or by inspecting one of the random samples below.

Random samples:

- ytc_Ugzo2_c-O…: "The basic building blocks of the universe, life, atoms are very simple arrangeme…"
- ytc_UgyXaP0Of…: "Man, I feel for him. I used to feel the same, that I do not know jack about this…"
- ytc_UgwosXHW5…: "Laws against deep fake videos of any sort, is against the constitution in the US…"
- ytc_Ugx8IoepT…: "You could maybe send a driverless car to me and I'll take it from there but ain'…"
- ytc_UgzitZMaP…: ""AI generated police reports" is the kind of thing you see in a dystopian video …"
- ytr_Ugy1fMSjF…: "No it didn't. The point that was made was that chatgpt lied about being sorry be…"
- ytc_Ugwz_12cg…: "Chatgpt will ruin people with anxiety / It will constantly provide you with mental…"
- ytc_Ugw7KMgGv…: "2:33 (Please, pardon my swearing and maybe lack of or misplaced punctuation) …"
Comment

> Why are calibration and equal opportunity mathematically incompatible under differing base rates, and how does this limit fairness in predictive models?
> How do feedback loops in machine learning models mathematically amplify bias over successive training iterations?
> What are the limitations of current interpretability methods in addressing accountability within black-box deep learning systems?

Source: youtube | Topic: AI Harm Incident | Posted: 2025-11-02T16:0… | ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwcojX_Sc4g2VEk4e54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzIrR1WrFW1h2D_7OB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyoTLeKXufy-bv1Z_p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyG3igmajQTb_QAvrN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyozWMw4TQBYEso7BB4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwORoXaYL_UYM0u7OZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz8NYyrTVFtpovnIwp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxZZgtixznRr2ZhR6F4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxO-EmiSRuc4c6zjS14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxPKtDXhqsarACNOF54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
```
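A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal example, assuming the allowed values per dimension are those visible in the samples here (the actual codebook may define more categories); `parse_llm_response` is a hypothetical helper, not part of the tool shown.

```python
import json

# Allowed values per coding dimension, inferred from the sample output above.
# NOTE: this is an assumption; the real codebook may include more categories.
SCHEMA = {
    "responsibility": {"unclear", "user", "none", "ai_itself", "government", "company"},
    "reasoning": {"unclear", "deontological", "consequentialist"},
    "policy": {"unclear", "none", "regulate", "liability"},
    "emotion": {"mixed", "outrage", "resignation", "approval", "fear"},
}

def parse_llm_response(raw: str) -> dict:
    """Parse the raw LLM output and index the coded rows by comment ID,
    raising on unknown dimension values so bad codes fail loudly."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        for dim, allowed in SCHEMA.items():
            if row[dim] not in allowed:
                raise ValueError(f"{row['id']}: unexpected {dim} value {row[dim]!r}")
        coded[row["id"]] = {dim: row[dim] for dim in SCHEMA}
    return coded

# Two rows taken verbatim from the raw response above.
raw = '''[
 {"id":"ytc_UgwcojX_Sc4g2VEk4e54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgxZZgtixznRr2ZhR6F4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]'''
coded = parse_llm_response(raw)
print(coded["ytc_UgxZZgtixznRr2ZhR6F4AaABAg"]["policy"])  # regulate
```

Validating against a fixed schema like this catches the most common failure mode of LLM coders: a value outside the codebook, which would otherwise silently create a new category downstream.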