Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
| Comment ID | Preview |
|---|---|
| ytr_UgxAOlG63… | I see that all the time. It's happening right now at an alarming rate. And the… |
| ytc_Ugwur366S… | I too was on the techno-optimist "thanks to AI we won't *have* to work, our need… |
| ytc_Ugz0jwdry… | Nah that ai crap on Google is terrible Id rather find my answers on Reddit 😂… |
| ytc_UgyqOXURB… | Truth be told LLM are very far from a real AI and none of this corporate bozos w… |
| ytc_UgzqsvqD1… | US census states there are about 3.5 million truckers in the US. If any percenta… |
| ytc_UgzHXKxf-… | I memorised my keyboard bc of character ai I'm an expert so whenever we have a f… |
| rdc_lkc2o34 | I feel a similar thing is happening with online articles. I keep coming across… |
| ytc_Ugz5gg0fM… | Dentist had an asbestos ceiling tile cocked "open", made me nervous. That was ho… |
Comment
It's fascinating.
You laugh at people who got themselves honeypotted into dating an AI, due to them unknowingly exploiting its tendency to be agreeable to everything, and then what? Ask that same AI some questions and announce armageddon?
LLMs aren't sentient not because of their lack of pure computing power, or suitable training data, or because they aren't sufficiently advanced yet. They aren't sentient precisely because they are LLMs. And asking what is essentially a poor replica of but a single part of the human brain (language understanding) and simultaneously gaslighting it into giving you a wild answer, then saying "WOW! An LLM says we got a 65% chance to die! It's surely right!"... Brother. Look at yourself.
youtube
AI Harm Incident
2025-10-12T12:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugyixu4KgX1Z6d-sWA94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxlhNo1leyTt6gOyrF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwPGAVKMaFKfyXXRph4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwNsO6nG0nqNghwO6h4AaABAg","responsibility":"user","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyMuRH3CCp-MsoJ3_l4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzf9YPU_Ci65bAXahN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugxf0OoAaMH7E8N7Rmh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgxXRYqr_SDNGg__YKp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxwdHKDduXxBkJvTA94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyxuY9IdbYepiPupMF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
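Each raw response is a JSON array of per-comment codes, so a downstream consumer can parse it and key each record by its comment ID for lookup. A minimal sketch in Python, assuming the model emits well-formed JSON; the `index_by_id` helper and the short sample IDs here are hypothetical, not part of the actual pipeline:

```python
import json

# Hypothetical raw coding response, shaped like the real output above:
# a JSON array of records, one per comment, with the four coding dimensions.
raw = '''[
  {"id":"ytc_abc","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_def","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]'''

def index_by_id(raw_response: str) -> dict:
    """Parse the model's JSON array and key each coded record by comment ID."""
    records = json.loads(raw_response)
    return {rec["id"]: rec for rec in records}

coded = index_by_id(raw)
print(coded["ytc_def"]["emotion"])  # indifference
```

Keying by ID makes the "look up by comment ID" view above a single dictionary access, and it also surfaces duplicate or missing IDs early if validated against the batch that was sent.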