Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- I believe you are doing genuinely substantive work. However, I worry about… (ytc_UgyU7VQeO…)
- Yep, call center jobs are going to be operated by AI just like the one some ppl … (ytc_Ugymzn_Sa…)
- A person is defined as a psychopath based on their persistent lack of empathy an… (ytc_UgxjjplBQ…)
- AI is extremely dangerous once the point of control is lost and we don't have an… (ytc_Ugz4fxZiC…)
- Ai art is less "art" and more "cracked tracing with a bit of photoshop clean up"… (ytc_UgxrrKESM…)
- In ten years the only arguments against self driving trucks are going to be rela… (ytc_UgweCrohS…)
- First of all you don’t have to be “born with a gift” to be an artist. You can pr… (ytc_UgwGu_T-W…)
- The more I hear about AI the more I think about BSG that show especially the rem… (ytc_UgzwhwV0M…)
Comment
@MaxwellAyo Haha, yep, exactly. It sounds like a contradiction. But there are many different tech people, and Neil is listening only to those who confirm his "optimist" bias. Here's how to think about it:
1. We should NOT believe the tech people who are _required_ to support AI. The CEOs like Satya Nadella (Microsoft) and Amjad Masad (Replit) are _required_ to downplay AI risk because otherwise they will be removed as CEO. This is because AI is (for now) making billions of dollars for their shareholders and investors, so these CEOs must outwardly support AI. Likewise, the employees at AI companies are _required_ to say only good things about AI, otherwise they will be fired.
2. We SHOULD believe the tech people who left AI companies so that they could tell the truth. This includes Geoff Hinton (Google), Daniel Kokotajlo (OpenAI), Steven Adler (OpenAI), Miles Brundage (OpenAI), and many others.
3. Then there's Sam Altman, CEO of OpenAI. He's complicated. He gets his own category. In 2015 he wrote an essay that said AI is "probably the greatest threat to the continued existence of humanity." Now, in his latest essay (titled "The Gentle Singularity") he says that AI will be wonderful — but this is after OpenAI has raised $64 billion in investment capital. He is a master at saying the right things to the right people at the right time to make a company's numbers go up. Even so, he still signed the Statement on AI Risk (which says that AI can be as catastrophic as nuclear war and pandemics) so we must _never forget this_ and hold him to it, no matter what he says next.
youtube · AI Moral Status · 2025-07-24T04:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytr_UgzmTkSdVq60o-Z1Tih4AaABAg.AKwc-EU86DeAKwmubuQCW9","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgzmTkSdVq60o-Z1Tih4AaABAg.AKwc-EU86DeAKwmvYye-cM","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytr_UgzmTkSdVq60o-Z1Tih4AaABAg.AKwc-EU86DeAKwnxzr7AQS","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytr_UgzmTkSdVq60o-Z1Tih4AaABAg.AKwc-EU86DeAKwvnsqK4WK","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytr_UgzmTkSdVq60o-Z1Tih4AaABAg.AKwc-EU86DeAKx3_HxAK1c","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytr_UgzTcWf2kvWsDJvdMv94AaABAg.AKwZ5E-OTZPAKyI2-YMW5k","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgzTcWf2kvWsDJvdMv94AaABAg.AKwZ5E-OTZPAKyMtdPQrAA","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytr_UgzOLN7FjeGylw-yjLZ4AaABAg.AKwV3CcN6E_AKy1JEMPHix","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UgwahPJeQ_LeTKcUhPd4AaABAg.AKwUnMBvzExAKyILP0yhbu","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytr_UgzcORwrPOW2jPacwXJ4AaABAg.AKwSbTTW-CXAKyRCLiXYcO","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}
]
```
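The raw response is a plain JSON array, one object per coded comment, with the same four dimensions shown in the result table. A minimal sketch of how such a batch could be loaded, indexed by comment ID, and tallied per dimension (the two sample rows are copied from the response above; `by_id` and `tallies` are hypothetical helper names, not part of the tool):

```python
import json
from collections import Counter

# Two rows copied verbatim from the raw LLM response above.
raw = """[
  {"id":"ytr_UgzmTkSdVq60o-Z1Tih4AaABAg.AKwc-EU86DeAKwmubuQCW9","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgzmTkSdVq60o-Z1Tih4AaABAg.AKwc-EU86DeAKwnxzr7AQS","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}
]"""

rows = json.loads(raw)

# Index by comment ID, mirroring the "look up by comment ID" feature.
by_id = {row["id"]: row for row in rows}

# Tally each coded dimension across the batch.
dimensions = ("responsibility", "reasoning", "policy", "emotion")
tallies = {dim: Counter(row[dim] for row in rows) for dim in dimensions}

print(by_id["ytr_UgzmTkSdVq60o-Z1Tih4AaABAg.AKwc-EU86DeAKwnxzr7AQS"]["emotion"])  # outrage
print(tallies["responsibility"])  # Counter({'none': 1, 'company': 1})
```

Keeping the lookup index and the per-dimension counts separate means the same parsed batch serves both the single-comment inspection view and any aggregate summary.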