Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (truncated previews, with comment IDs):

- "Isn't 250 a really tiny sample sice? Specially compared to the training data At…" (ytc_Ugy3L59U8…)
- "This is exactly why I dislike being called or calling other artists "talented". …" (ytc_Ugy6XNru5…)
- "@Goldfisher ok. what i think is, this is bound to come, but the aggressive way …" (ytr_UgzD1rCMJ…)
- "I would never trust my life or anyone else's to Tesla's camera-only exercise in …" (ytc_UgxcURLOJ…)
- "Tip: If you want to ensure there's no AI-generated content in your google image …" (ytc_UgxspHueE…)
- "The issue with taking jobs is enough but the problem is safety. The vehicles we …" (ytc_UgyU_eBdr…)
- "Let's draft a bill specifically banning any kind of surveillance system utilizin…" (ytc_Ugy9EtH-m…)
- "Facial Recognition is dangerous, if you love it so much go live in China where i…" (ytr_Ugxvrgu0Q…)
Comment

> Simple solution, AI is inauthentic, deceiving, and most obvious of all is that its a compilation of programming pretending to think. So just dont use AI unless its for something super insignificant because we obviously cant trust AI to do a job the way a human would, it might be more efficient at work, but its not a human, now this is where it turns complicated since companies always want to continue to "innovate" and AIs are just going to continue to be created and improved, but what if people just stopped using AI as a whole?

Source: youtube · AI Harm Incident · 2025-07-28T18:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response

```json
[
  {"id":"ytc_UgweufPuF9VahCY_Xd14AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugw3ZkoW2jS9eopygVZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugzt7BD4_IJvv5oe-2F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzjEsVzz4drHXk-FUx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwTSVjEZKDhomAWxT54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwUBwSSuBKKyb7KNbx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwCTpm5JuSHV-fFZQx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy2IQYmbsWhqiRoWyF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugx_Zx2Hc655eWzydK94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwQYCoHUqnTcr4i4tp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
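As a minimal sketch of how a raw response like the one above could be consumed, the following Python parses the JSON array and keeps only rows whose values fall inside the dimension vocabularies. Note that `ALLOWED` is inferred from this sample alone and `validate_codings` is a hypothetical helper name; the tool's actual codebook may include other values.

```python
import json

# Vocabularies inferred from the sample response above (illustrative only;
# the real codebook may define additional values).
ALLOWED = {
    "responsibility": {"company", "developer", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "approval", "mixed"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed coding rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Comment IDs in the sample start with ytc_ (comment) or ytr_ (reply).
        if not row.get("id", "").startswith(("ytc_", "ytr_")):
            continue
        # Every dimension must be present and drawn from its vocabulary.
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

raw = ('[{"id":"ytc_UgwTSVjEZKDhomAWxT54AaABAg","responsibility":"developer",'
       '"reasoning":"deontological","policy":"regulate","emotion":"fear"}]')
print(len(validate_codings(raw)))  # prints 1
```

Filtering rather than raising keeps one malformed row from discarding an otherwise usable batch, which matters when the model occasionally emits an off-vocabulary label.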