Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "the end of the world is coming but anyway,, you're wrong. AI is behaving as inte…" (ytc_UgxKgAwma…)
- "The question is, can AI come up with original thought? Original concepts, beside…" (ytc_UgyVfIDs6…)
- "I was gonna go to school to be a spanish translator but now with how fast ai is …" (rdc_my3o223)
- "I think it's clear that the takeaway isn't that ChatGPT is concious, or that Ale…" (ytc_UgwrPo8H_…)
- "It's kinda sad to see the creators drain their lives to give life to a robot. Pr…" (ytc_UggkfFDIi…)
- "Eh i use Gemini and honestly I put in the best prompts but the images still does…" (ytc_UgwJdxA7K…)
- "@ neither I am, I just find funny people who say AI is worse for people when its…" (ytr_UgxtIVfKa…)
- "It's really not a great idea. I try not to judge too harshly, it's far too com…" (ytc_UgzOw5HtJ…)
Comment
First these "godfathers" developed something, and then they are going around the world telling others not to use it. In both cases, their pecuniary incentives are presumably higher than their moral or intellectual ones.
This is not comparable to the Manhattan project, because then, it was a desperate time. In the last 3 decades, the desperation has been less clear. If they began to feel that AI could be potentially harmful, why couldn't they nip the findings in the bud? Such censorship is not unheard of in academia.
youtube · AI Responsibility · 2025-08-24T14:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwMZH8P1lQVQh_mNzt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyTScDvE4XcW0Kpd5x4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwkrIt9P3qgAuYQhPR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgypxRzHcrakoIov7WR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyfRN3gLwry8nlfIuB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwLWijgKdxuhN5eyzN4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxfMR2bajN4Y7_ewBZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzvFLj2xKr_3Vv8Jxt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx4gWM8xuLihBr1Kkp4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwLGxGCkp3XmPRt5e94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
```
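The per-comment coding table shown above is read out of this JSON array. A minimal sketch of how a lookup by comment ID could work, assuming the model returns a flat array of records like the one shown (the field names `id`, `responsibility`, `reasoning`, `policy`, and `emotion` come from the response above; the `index_by_id` helper itself is hypothetical):

```python
import json

# Raw model response, abridged to two of the records shown above.
raw = """[
 {"id":"ytc_UgwMZH8P1lQVQh_mNzt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgyTScDvE4XcW0Kpd5x4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]"""

def index_by_id(response_text: str) -> dict:
    """Parse the model's JSON array and index each coding record by comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw)
rec = codes["ytc_UgwMZH8P1lQVQh_mNzt4AaABAg"]
print(rec["responsibility"], rec["emotion"])  # developer outrage
```

If the model ever returns malformed JSON, `json.loads` raises `json.JSONDecodeError`, which is a natural place to flag a sample for manual re-coding.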