Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I'm really going to dislike everything AI. This is absolutely unwanted. Google i…" (ytc_UgwaZl8tx…)
- "@goyasolidar Yeah, I feel like in our reality; AI would wipe out humanity in an…" (ytr_Ugy-idRvy…)
- "They have so much money yet they are ungrateful and make more money out of ai…" (ytr_UgwC3QTmG…)
- "I'm calling bullshit on nightshade not working. From what I understand nightshad…" (ytc_Ugw2_g4qp…)
- "I in no way know anything about AI. Actual AI and not the assistant on my phone.…" (ytc_UgzX2PioF…)
- "AI is far from bad, BUT it's a tool, you don't eat with a hammer.…" (ytc_Ugxn_1Vv0…)
- "Fairness. In a world where 90% of the wealth is owned by 1% of the people. Thi…" (ytc_UgzXOQxUI…)
- "I think we need more context of how students are using AI. I'm an adult learning…" (ytc_UgzcIPqrT…)
Comment

> lol. this is dumb. 1st of all this android is no different from chat GPT, its just saying what its programmed to say. there is nothing brilliant here. look at the individuals who programmed it and then what it saying wouldnt eb a surprise. Chat GPT is a leftist, lol. so thats how u know all of this trash. yeah AI could be dangerous in the future but what these things are saying now are just things people with a certain point of view would say. Chat GPT always attacks Trump for some weird reason but doesnt do that his opponents, very strange

Source: youtube · AI Harm Incident · 2024-05-20T21:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzSoer_3lDcmfLFEVt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxXyzzzIEaYP8VXfah4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy3uNgURKllB1ELrmF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxCjKisdEU0bmWOsdB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwKcFt-XSJ0rYdJpjd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxRG5WHXyJMIfwjV4N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugyi5rg5yr0r3St7bul4AaABAg","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxpQQ1nOy8f_i-feRp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwPaEgxLxH2rS20iLp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyNc2Q2j07eE4k66N94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"resignation"}
]
```
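A minimal sketch of how a raw response like the one above could be parsed and looked up by comment ID. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the JSON itself; the allowed value sets are inferred from this one sample, not from the actual codebook, so treat them as assumptions.

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# Assumption: the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "user", "government", "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"ban", "none", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "mixed", "resignation"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}, validating each dimension."""
    by_id = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim} value {rec.get(dim)!r}")
        by_id[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return by_id

# Excerpt of the raw response shown above (first two records only).
raw = '''[
{"id":"ytc_UgzSoer_3lDcmfLFEVt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxXyzzzIEaYP8VXfah4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"}
]'''

codes = parse_codes(raw)
print(codes["ytc_UgxXyzzzIEaYP8VXfah4AaABAg"]["responsibility"])  # developer
```

Validating before lookup means a malformed or off-schema model output fails loudly at parse time instead of silently producing an uncodeable record.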