Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- ytc_UgxEsjT3i… — "😂 its getting harder and harder denying AI is smarter than radiologists, and by …"
- ytc_UgyrXPNxy… — "I wholy agree with you! But another point on why I as an artist won't use AI, is…"
- ytc_UgznSfPz9… — "Would you be willing to give up your privilege to drive if self driving cars wer…"
- ytc_UgyEuGx8N… — "Art is are ...as long as you make it and create it in some way ai is stupid and …"
- ytr_Ugx3Davzu… — "45% i am also opening my own machine learning company because engineering job ko…"
- ytc_UgwUmeNfp… — "Duh what'd you think they were building AI for in the first place? They're going…"
- ytc_UgyOeQMfu… — "That’s why whenever there’s a stupid Waymo in front of me, I just keep 3 cars di…"
- ytc_UgxtnNGXy… — "The irony is the AI "artist" really is the tool. By choosing his favorite versio…"
Comment
FOR THIS TO HELP THE AVERAGE PERSON, there would have to be far more obvious signs of AI fakery. Most of these re really subtle to the casual viewer, which most people probably are. Also, most of these "tells" would be barely visible or in fact un-seeable to people with mild to more severe visual issues, like myself, who has developed problems in recent years,? Even though I was in the "image business" for well over a half century, these "tells" ---with a few exceptions---are not detectable. Sooooo, what would you suggest is the way to detect them, especially as AI figures out how to not make these mistakes?
Personally, I think that we have to develop a very smart, critical thinking approach when it comes to fakery that has a direct impact on politics, science, and the news in general. What things would YOU suggest in regards to intellectual fakery done by bad actors using AI? (And maybe talk about inflection, word usage, attitude etc)
Source: youtube · 2026-01-14T14:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzQIx4S3PR6h1JsfDx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyPZtPpTOXUVvp4H494AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyVuqXYI24TnV99rXZ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzeGQtEvve5ogD8feR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwkBC9K-tlkZ9cQoSx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugyv61M7kdVmwC4wybx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwchX3Mc_inK_RrJHp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgyqQmkU-fDcU-QYT014AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz7s7QOKwtmPZTygT54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx-2kOt6zw28RULJ3B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
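A response like the one above is only useful downstream if every record parses and every dimension holds an expected label. The sketch below parses such a raw LLM response and validates it; note that the allowed value sets are assumptions inferred from the labels that happen to appear on this page (they are not a documented schema), and the `validate_coding` helper is hypothetical.

```python
import json

# Assumed label sets, inferred from values visible in this page's output.
# The real coding scheme may include additional labels.
ALLOWED = {
    "responsibility": {"none", "distributed", "government"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"resignation", "approval", "outrage",
                "indifference", "mixed", "fear"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM JSON array and check each coded record."""
    records = json.loads(raw)
    for rec in records:
        # Comment IDs on this page start with ytc_ (comments) or ytr_ (replies).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec.get(dim)!r}")
    return records

# One record taken verbatim from the raw response above.
raw = ('[{"id":"ytc_UgzQIx4S3PR6h1JsfDx4AaABAg",'
       '"responsibility":"none","reasoning":"consequentialist",'
       '"policy":"none","emotion":"resignation"}]')
coded = validate_coding(raw)
```

Failing loudly on an unknown label, rather than silently storing it, makes malformed or truncated LLM output visible at ingestion time instead of at analysis time.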