Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
“If you’ve spent any time on Hacker News this won’t be the least bit surprising. …” — rdc_eg0piuh
“Bro, which app is he using? He says it’s AI. ChatGPT I know of, but…” — ytr_Ugxg1-ZFR…
“The only thing I want AI imaging for is for my own projects cuz im not an artist…” — ytc_UgzuOOzbn…
“26:35 I completely disagree, slaves don't have to be human, but they have to be …” — ytc_Ugw_ZhMke…
“Nope, but their self driving can actually get right up to a charger without touc…” — ytr_UgzW1wE2f…
“Land is not offered in Siberia actually, but in further eastern regions, like Ch…” — rdc_d2xcid1
“Don’t people realize that other car companies also provide this kind of autonomo…” — ytc_UgzoQ-xnQ…
“Meta leeched off 82 Terabytes of pirated books (God knows what OpenAI has done).…” — ytr_Ugw5-vYEP…
Comment
I wish people understood that calling LLM technology "AI" is a huge misrepresentation of what it is, in order to sell a product. It isn't intelligence, it doesn't think. All it can do is spit out words that *mimic* what a human might say. ChatGPT and all the other "AI" chatbots out there are like really advanced random word generators. The advice of a chatbot on ANY subject is less than worthless. I cannot WAIT for the AI bubble to burst, hopefully before it takes any more lives than these companies already have.
youtube · AI Harm Incident · 2025-11-25T00:5… · ♥ 66
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugxu9LMz7vCXppum27l4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwFPn8hBVJZRZdXqgF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw4pZtxJ9gguF5jhPN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzR9_PXFn9reSjXQL14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugzfs8BErOKUt3tVLHN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyL-7MwJki1sBJBOrJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyiOqWqiD9HU9LKrdl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwTedqfJsLTHd8Z1md4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwRSfdtBtdNs9tWUn14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgxzXmMuLhgXS0v1N7R4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
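The raw response above is a JSON array with one row per comment, each carrying the four coding dimensions from the result table. A minimal sketch of how such a response could be parsed and validated before loading it into the dashboard (the allowed category sets below are inferred from the sample output, not from the actual codebook, which may define more values):

```python
import json

# Allowed values per coding dimension (inferred from the sample rows above;
# the real codebook may include additional categories).
SCHEMA = {
    "responsibility": {"company", "developer", "user", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "indifference", "mixed"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response (JSON array of per-comment rows)
    into a dict keyed by comment ID, rejecting out-of-schema values."""
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad value {row.get(dim)!r} for {dim}")
        coded[cid] = {dim: row[dim] for dim in SCHEMA}
    return coded

# Usage with one row in the same shape as the response above
# (hypothetical comment ID, for illustration only):
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"deontological","policy":"regulate","emotion":"outrage"}]')
coded = parse_coding_response(raw)
print(coded["ytc_example"]["policy"])  # regulate
```

Validating against a fixed schema catches the common failure mode where the model invents a category label outside the codebook, so bad rows fail loudly instead of silently polluting the coded dataset.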