Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytr_UgyC8vPGe… — "Bo idea why one would but when I was reviewing the platform I wanted to see if i…"
- ytc_UgyNHPbGf… — "AI as of right now are only good at immediate tasks, they're not good at compreh…"
- ytc_UgxawS3eb… — "I am a Canadian engineer and community advocate. I support a medically complex a…"
- ytc_UgwMzVHOp… — "Stop watching (or at least learn from the mistakes) movies involving “AI” and ro…"
- ytc_UgyqFK0E5… — "I can't believe govt still allows common people to beta test self driving softwa…"
- ytc_Ugy-2rMz5… — "2030 the point of no return. One year ago it was 2027. We've went up 3 more year…"
- ytc_UgzqAX5zv… — "That's not good reasoning though? Just because a human can't judge feelings from…"
- ytc_UgyILGPnz… — "A lot of these arguments are reactionary and entrenched in human essentialism. I…"
Comment
One thing I like about AI is that right now AI has finally hit its feedback loop, now AI is learning from itself and absorbing bad information that was also spat out by AI. Considering AI has completely taken over the internet and that’s how AI learns. So as it continues to spread bad information, the worst AI gets. It’s been incredible to witness
youtube
AI Responsibility
2025-10-01T12:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxClIt0e15h7CfuA1N4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugw2k6DB_JLoc3GIwit4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgywxK6bhHrQkS7Crb14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzV_b1pJyd9MvgHyuR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgzzYnOUppCjHvtvJbp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy-63WJjz74ceUps_14AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzyCLMCJPB_RwdfoEN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugwl5ZOs019dnqszzTR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgytxuvDeKVcRaRGFqR4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyk6G9QQ3SzNM04Bm54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]
```
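The raw response above is a JSON array with one object per coded comment. A minimal sketch of how such a response could be parsed and sanity-checked before loading it into a results table (the field names come from the response above; the allowed value sets are inferred from the visible examples and are an assumption, not the full coding scheme):

```python
import json

# Value sets per dimension, inferred from the sample response above.
# ASSUMPTION: the real coding scheme may contain additional codes.
ALLOWED = {
    "responsibility": {"company", "developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"outrage", "fear", "mixed", "indifference", "approval"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: coding dict},
    rejecting rows with missing or unexpected dimension values."""
    rows = json.loads(raw)  # raises JSONDecodeError on malformed output
    codings = {}
    for row in rows:
        cid = row["id"]
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={row.get(dim)!r}")
        codings[cid] = {dim: row[dim] for dim in ALLOWED}
    return codings
```

A lookup by comment ID then becomes a plain dictionary access, e.g. `parse_codings(raw)["ytc_UgzzYnOUppCjHvtvJbp4AaABAg"]`.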