Raw LLM Responses
Inspect the exact model output for any coded comment.
Responses can be looked up by comment ID; a set of random samples is shown below.
- ytr_UgyYJxJEz…: How? They are copying the AI image. No skill in making this. They didn't brainst…
- ytr_UgzR2lm-n…: "Hi Shipra, we are sorry to say that you got the wrong answer but in any case, t…
- ytc_UgytTcs0i…: There is this interesting thing, that Cyberdyne model T-70 is a real thing. I sa…
- ytc_UgyaLfRM0…: the governments gunna come crawling back to its people.....how do u tax a robot …
- ytr_Ugzr1mlka…: We cannot give AI a conscious. Like humans, they will develop that on their own.…
- ytr_UgyL8apoP…: I don't think it is that simple. To me it is analogous to a young artist learnin…
- ytc_UgzobE2cV…: The thing that makes me feel OK about AI is that history, non fiction and fictio…
- ytc_UgwCtFFqR…: Cosmic robotics for making space station in the mars and also for instaalling an…
Comment
When you create a new species (AI) that is smarter than you, you lose control of your future. I went to Harvard for computer science. Then AI came on the scene, and people predicted it would be decades before we should be concerned. Now we know it's much faster and smarter than we thought. Businesses are going to use it as it doesn't take vacations, doesn't need health care, and the list of why it's better in all respects for a company continues.
We thought it would take longer, but AI will soon be able to program itself. This new world will happen faster and faster. LLM (large language models) were thought only possible in 2050 or so, just five years ago. They started out being as smart as a high school student, new versions were created, and now as smart as a college graduate. Companies will be forced to use them or perish. Much like businesses taking advantage of cheap labor in China starting in the 70s, and soon American workers were "too expensive".
youtube · AI Harm Incident · 2025-06-20T17:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
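The four coding dimensions in the table above can be checked against a small validation schema before a result is stored. This is a minimal sketch; the label sets are an assumption inferred only from the values that appear in this page's outputs, and the real codebook may define others.

```python
# Hypothetical label sets inferred from the coded outputs shown here;
# the actual codebook may allow additional values.
CODING_SCHEMA = {
    "responsibility": {"developer", "company", "government", "distributed",
                       "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference",
                "approval", "mixed"},
}

def validate_code(record: dict) -> list[str]:
    """Return a list of problems with one coded record (empty if valid)."""
    problems = []
    for dim, allowed in CODING_SCHEMA.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems
```

Running `validate_code` on the result above (`developer` / `consequentialist` / `regulate` / `fear`) returns an empty list, meaning every dimension carries an allowed label.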
Raw LLM Response
```json
[
{"id":"ytc_UgyD2lWZqHZy1dSQ7o14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_Ugz_9iw7F5U6UvfUvyl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyZbzUIHwJMhoiFdRh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy17hjlShobN0rlaYp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzNo_ZL1K8by6V3yO94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy9GO5-0ZgxmfRulKl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyan9pTN0GLatkcbEh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzXL4YW8hAvverSSjh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxGL_Z5hmR_gqcBJw94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwvpZtoRXKuHG4NqVZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}
]
```
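A raw response like the one above can be parsed into a lookup table keyed by comment ID before the codes are stored. This is a minimal sketch, assuming the model always returns a JSON array of objects with an `id` field plus the four dimensions; `parse_codes` and `REQUIRED_KEYS` are illustrative names, not part of any existing pipeline.

```python
import json

# One row from the raw response above, used as sample input.
RAW_RESPONSE = """[
  {"id": "ytc_Ugyan9pTN0GLatkcbEh4AaABAg",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]"""

REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codes(raw: str) -> dict[str, dict]:
    """Parse a raw LLM response into {comment_id: codes}, skipping malformed rows."""
    coded = {}
    for row in json.loads(raw):
        if not REQUIRED_KEYS <= row.keys():
            continue  # drop rows missing an ID or a dimension
        coded[row["id"]] = {k: row[k] for k in REQUIRED_KEYS if k != "id"}
    return coded

codes = parse_codes(RAW_RESPONSE)
# codes["ytc_Ugyan9pTN0GLatkcbEh4AaABAg"]["emotion"] == "fear"
```

Keying by comment ID makes the "look up by comment ID" view above a plain dictionary access, and silently skipping malformed rows keeps one bad model output from discarding the whole batch.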