Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or pick one of the random samples below to inspect:

- `ytc_UgxJ-ns60…`: "Dude....Thats how the Matrix robots decided to end humanity. They tested us, the…"
- `ytr_UgzW3d21s…`: "We're not talking about the dangers of AI in general, we're talking about the da…"
- `ytc_UgxrSWZF3…`: "I am not part of the 'art sphere' or 'art community'. I do not know any of the p…"
- `ytc_UgwXCPW0H…`: "Regardless, we’re screwed and lose. Scenario A: AI Regulation is put into place…"
- `ytc_UgzBxAJVR…`: "If you’re aiming for 0% AI detection, Ryne AI Humanizer is the way to go you’ll …"
- `ytc_UgzxSmSUe…`: "Art in every aspect...from music to film making to painting to photography...AI …"
- `ytc_UgzvRFjhX…`: "Great video Joe Rogan and open AI has been just making amazing stuff this year i…"
- `ytc_UgzzwuCus…`: "12:30 here we see an example of teaching AI through human learning. What he call…"
Comment
According to Elon: Autopilot doesn't have to be perfect, it just has to be better than humans. And it turn out that is a low bench mark.
So is Tesla autopilot better than the average driver at motorcycle recognition? I know what I can do to make myself more noticeable to biological cagers when out riding, but I have no idea how to make myself more noticeable to AI cagers.
Source: youtube · AI Harm Incident · Posted: 2022-09-03T16:1… · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
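A coded record like the one above can be sanity-checked against the value sets that actually appear in this appendix's responses. This is a minimal sketch: `ALLOWED` and `validate_coding` are hypothetical names, and the value sets are inferred only from the sample output shown here, not from a documented codebook.

```python
# Value sets inferred from the sample responses in this appendix;
# the real codebook may permit additional values.
ALLOWED = {
    "responsibility": {"none", "company", "government", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"unclear", "regulate", "ban"},
    "emotion": {"indifference", "outrage", "mixed", "approval", "fear"},
}

def validate_coding(coding: dict) -> list:
    """Return (dimension, value) pairs that fall outside the allowed sets."""
    return [(dim, coding.get(dim)) for dim in ALLOWED
            if coding.get(dim) not in ALLOWED[dim]]

# The coding of the comment inspected above:
example = {"responsibility": "ai_itself", "reasoning": "consequentialist",
           "policy": "unclear", "emotion": "fear"}
print(validate_coding(example))  # []
```

An empty list means every dimension holds a value seen elsewhere in the sample; anything else points at a coding the LLM emitted outside the expected vocabulary.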
Raw LLM Response

```json
[
  {"id":"ytc_UgySFVGp0jqd8KGQZKl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzcKG7nwOvInjtWTrl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxLCHZCwc5c-ALRg614AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzeHInP2hQmeoCtqfV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwxiQTozCy_bGCPMoZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwqD6eefWhLDWU7eg14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxKnEgoLrCFvy8qfSd4AaABAg","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgziacTvaJqm2Lco_SR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyLz28GomFW2Cv96fd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugy3Qy5Ux5nqgS-5UYh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"fear"}
]
```
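The by-ID lookup described at the top of this page amounts to parsing the raw batch response and indexing it by comment ID. A minimal sketch, assuming the response is a JSON array shaped like the one above (`index_codings` is a hypothetical helper; the two records are copied from the response shown here):

```python
import json

# Two records copied verbatim from the raw batch response above.
raw_response = """
[
  {"id": "ytc_UgwqD6eefWhLDWU7eg14AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzcKG7nwOvInjtWTrl4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"}
]
"""

def index_codings(raw: str) -> dict:
    """Parse a raw batch response and index each coding by its comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_codings(raw_response)
coding = codings["ytc_UgwqD6eefWhLDWU7eg14AaABAg"]
print(coding["emotion"])  # fear
```

Indexing once into a dict makes every subsequent lookup O(1), which is what an inspection page like this needs when a user clicks through many comment IDs.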