Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "wrong, ai is inefficient takes lots of money to make and has lots of legal issue…" (`ytr_UgzkC895F…`)
- "Get these fools out of here, self-driving trucks, loading robots and so on are…" (`ytc_UgzJD-Fy5…`)
- "I look in the comments to see if anybody has pointed out that it’s AI. Then I ca…" (`ytc_UgyQqfeWP…`)
- "So far, on researching technical subjects, AI generates lots of misinformation. …" (`ytc_UgzzTGkLs…`)
- "Its already happening, here on reddit and everywhere else. Turns out people dont…" (`rdc_m5mfv2q`)
- "I don't believe that's a robot. That is a person pretending to be a robot if you…" (`ytc_Ugyo62nAU…`)
- "Good thing we have expert like this alberta - who has no idea about how AI gener…" (`ytc_UgwVnHM1D…`)
- "Training AI is more important than upholding some artist's little \"rights\". Jus…" (`ytc_UgxHCL903…`)
Comment

> The AI that predicted he would be shot uses scores or numbers to determine whether or not you would be involved in a certain crime, the higher the score the more likely you are. It uses your friends and neighbors to know when to raise or lower the score, his friend had been involved in a shooting and his score went from something like 14 and went up to more than 200, he was later shot because the police came to his house and his neighbors thought he was a snitch.

youtube · AI Bias · 2023-02-16T15:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugy-WsoVrbE1_O01siB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxYyBHPYZ-DK5I8OJR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwDeJ5v0r_9SDBwIoB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw1NSj0cUF68FeChl94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwPL76nejAQobhlTA94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgxRD6oe3QP-OdAiKCV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugxjaaqmq5CFlqz0tPR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzM5qiFRJPAa6p58Dx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzZb9Pl2Dv0KzgVcf94AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzjcDz9AG5D8rULB5p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
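A response in this shape can be aggregated directly once parsed. The following is a minimal sketch, assuming the raw model output is valid JSON with the four dimensions shown above (`responsibility`, `reasoning`, `policy`, `emotion`); the three records are taken verbatim from the payload, and the variable names are illustrative, not part of the actual pipeline.

```python
import json
from collections import Counter

# Excerpt of a raw LLM response in the format shown above.
raw = '''[
{"id":"ytc_Ugy-WsoVrbE1_O01siB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxYyBHPYZ-DK5I8OJR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwPL76nejAQobhlTA94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"indifference"}
]'''

records = json.loads(raw)

# Tally each coded dimension across all records.
dimensions = ("responsibility", "reasoning", "policy", "emotion")
tallies = {dim: Counter(r[dim] for r in records) for dim in dimensions}

print(sorted(tallies["responsibility"].items()))
# [('ai_itself', 1), ('developer', 1), ('user', 1)]
```

Because the model returns one JSON object per coded comment, the `id` field is what links a row back to the original comment (e.g. `ytc_UgwPL76nejAQobhlTA94AaABAg` matches the Coding Result table above).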