Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect

- "Corridor did something like this but with one robot and 2 or 3 Russian soldiers …" (ytc_UgzLJxi4h…)
- "who's gonna put your drink on the robot if you live alone? that means you have t…" (ytc_Ugxedloc_…)
- "I just realised 2030 is les than 4 years away. OMG time is flying! I must say i…" (rdc_ohps66p)
- "Keep in mind that AI robots will guard the energy fields as well from human reta…" (ytc_UgxHqSGGD…)
- "A person watching a video of people watching a A.i realising that she is a watch…" (ytc_Ugxc0xFZa…)
- "Defending a.i in a capitalist centric world where corporations are constantly cu…" (ytc_Ugxct-JCg…)
- "Had a chat with an AI bot last night. Totally useless. Had to talk to a human to…" (ytc_Ugz2IDr5k…)
- "Pretty much all successful PhDs end up with some sort of scientific discovery. I…" (ytc_UgzZ_dK0H…)
Comment
> “Corporation simply do not jokingly describe their products as humanity ending monsters”
>
> Sam Altman is constantly talking about how AI could destroy the world because it materially benefits him.
>
> It benefits Open AI to frame AI as an existential issue (either from it destroying us, or from the Chinese beating the US to build the first genuine AI), because once you do that there’s no limit on the money you can get people to throw at it, which they need since they’re blowing through billions of dollars per year.
youtube
AI Moral Status
2025-12-14T08:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | virtue |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgySp1S_NR9VaTmnUgl4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy2fWY9gbQqi1epg8t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwa2OhJaldG-hrxWip4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwGnWZe-q0ir9r32P94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzlFiCRci-GGFnlLMV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz6sgI0AwdZzl7TyYN4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz3sZRbMv8ZwukC_dF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwMOjzKcaYPhzcnCr14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz3aOaEvWZPWv3d09N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyJCE5eBqkIfFOdRDN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
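A batch response like the one above can be parsed and indexed by comment ID with a few lines of Python. This is a minimal sketch: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) match the JSON above, but the `raw_response` variable and the `lookup` helper are illustrative, not part of the tool.

```python
import json

# Raw LLM response: a JSON array with one coding object per comment,
# in the same shape as the batch shown above (truncated to two entries).
raw_response = '''
[
  {"id": "ytc_UgySp1S_NR9VaTmnUgl4AaABAg", "responsibility": "company",
   "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugy2fWY9gbQqi1epg8t4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
'''

# Index the codings by comment ID so any coded comment can be looked up directly.
codings = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for one comment, minus the ID field."""
    row = codings[comment_id]
    return {k: v for k, v in row.items() if k != "id"}

print(lookup("ytc_UgySp1S_NR9VaTmnUgl4AaABAg"))
# {'responsibility': 'company', 'reasoning': 'virtue', 'policy': 'none', 'emotion': 'outrage'}
```

The `lookup` result for the first ID is exactly the row rendered in the "Coding Result" table.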