Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
The question is, is it really that bad? Yes, probably, but it's this way because we humans have a bias and there is no bias free training data. Because we are the training data. What are the solutions? Either accept that we don't filter it and make room for problematic biases to surface because at least some people will use it that way? Or filter it and accept that it will work more restricted with the obvious problems. Let's hope we have a way to train it that it can filter for intend of its users that people who try to use it in "bad" ways can't and the rest can use it freely, but then some will still complain about it. Because humans have different biases and understanding of what they should get. All these problems are HUMAN problems, not AI problems.
Source: youtube · Posted: 2024-02-28T19:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugz8V251qsxPFSKOz_p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw2xvhWl8ZWJJ7ZaZR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzfSX2b6LiYumqcEMZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxeUzyNHTwkwXKfln94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugxb27uxTSUW1OGn-Ch4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgylkIzGlElJhJ83olx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz76ttVuzcqMa8ekkh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxQVKQdGsZicch-dTp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgwaNINYCXUrQXiL4lB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugx4qRRGKEqxCQvmTUh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"}
]
```
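The lookup that this view performs can be sketched as follows — a minimal example, assuming the raw LLM response parses as a JSON array of records each keyed by `id` (the two IDs used below are taken from the response above; the function name `lookup_coding` is illustrative, not part of the tool):

```python
import json

def lookup_coding(raw_response: str, comment_id: str):
    """Return the coding record for one comment ID from a raw batch response,
    or None if the ID is not present in the batch."""
    records = json.loads(raw_response)
    return next((r for r in records if r.get("id") == comment_id), None)

# Two records copied from the raw response shown above.
raw = """[
  {"id": "ytc_UgxeUzyNHTwkwXKfln94AaABAg", "responsibility": "distributed",
   "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugz8V251qsxPFSKOz_p4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]"""

record = lookup_coding(raw, "ytc_UgxeUzyNHTwkwXKfln94AaABAg")
print(record["emotion"])  # resignation
```

Note that the first record matches the Coding Result table above (distributed / mixed / none / resignation), which is how the displayed dimensions are tied back to the exact model output.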