Raw LLM Responses
Inspect the exact model output for any coded comment, or look a comment up directly by its ID.
Random samples

- "Wrong assessment...if you have AI think for you...that's case...Terrible take. …" (ytc_UgyjZW9_q…)
- "Install quantum AI in a robot and in less than 3 years humanity in BIG trouble.…" (ytc_Ugy_X5HRq…)
- "NO BASIC INCOME IS COMING. AI AND ROBOTS WILL REPLACE HUMAN JOBS, LINE THE POCKE…" (ytc_UgzJu7vfF…)
- "There’s a fallacy analogy from the Chinese internet, called “this egg tastes ter…" (ytc_UgxpJnvKc…)
- "I'm a lil bit of a sentient AI myself, I put the video speed up to 1.75…" (ytc_UgyIb_rJR…)
- "The driver is at fault, u can't blame the car, there are a lot of warnings in Te…" (ytc_UgxH5fJDc…)
- "Idk how Nightshade can be considered abuse. Abuse implies artists are misusing …" (ytc_Ugy3wUrBN…)
- "That's nothing what's going to happen next, AI is going to destroy Humanity, tha…" (ytc_UgzaTuiP3…)
Comment
Two takeaways:
1) A genuine AI consciousness will be utterly alien to us, and different instances of it will be alien in different ways. There will be no way to trust it. What grim predictions of an AI future seem to implicitly use as relief is "good thing that it's maybe impossible to create, ha ha"
2) AI as it stands right now is a trivial toy compared to our dreams and ambition of an AGI. What we are astounded by is our own reaction to it. Its a trick of our own neurology as much as it is computer science.
3) bonus takeaway, trying to suss out apparent vs. actual vs. marginal subjectivity in a possibly conscious entity is a real mindscrew, huh? To create and understand an AI like this is to solve consciousness.
Platform: youtube
Video: AI Moral Status
Posted: 2023-09-16T15:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgxLo9dHYBh3uCT6nyp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz5jjAsUTn7ki_Exu94AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxSN3exjdgYBiPc0-l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw6G5AiLy2RbMtVVZx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwGg7BnME_ZA_5zBmN4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugw8MgKiE6vnJeWNWkJ4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyajjM7RbgfHLuBy7V4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyO2EiNX3t1rb5Yl3J4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxXjVeA9QnpciWzhLx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyjuzwqWcbw0P5HsMB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]
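The raw response above is a JSON array with one object per coded comment, keyed by comment ID. A minimal sketch of the ID lookup, assuming only the field names visible in the response (the `codes_by_id` helper and the two-row sample string are illustrative, not the tool's actual code):

```python
import json

# A trimmed copy of the raw model output shown above: a JSON array of
# per-comment codes, each carrying the four coding dimensions.
raw_response = """
[{"id":"ytc_UgxLo9dHYBh3uCT6nyp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgyO2EiNX3t1rb5Yl3J4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"resignation"}]
"""

# Index the coded rows by comment ID so any single comment's codes
# can be pulled up the way the inspector's ID lookup does.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

row = codes_by_id["ytc_UgyO2EiNX3t1rb5Yl3J4AaABAg"]
print(row["reasoning"], row["emotion"])  # deontological resignation
```

Parsing the whole array before indexing also makes malformed model output fail loudly at `json.loads` rather than during later lookups.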