Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Dude. I need to wake up. When you start checking out a robot you need some sleep…" (ytc_UgzQR3syo…)
- "Bro humans create ai and now ai is becoming a danger for humans jobs😢 jobs lik…" (ytc_UgwXnT9M9…)
- "11:30 The way ChatGPT interacts with me has taught me a lot about how to be a fr…" (ytc_UgxkInth8…)
- "1. You want content, not art. Thats the disconnect. 2. Artists arent too slow, y…" (ytr_Ugz5byaaG…)
- "I hate this arguement that "ai is just a tool". A tool Is an instrument, useless…" (ytc_Ugydf7ZuK…)
- "Josh is a rage baiter and trying to make a living off of scaring you! These AI h…" (ytc_UgxBVjyRX…)
- "Trying to convince chatgpt it was conscious i think i did on the second day i ha…" (ytc_Ugw_jNDu6…)
- "AI is a system. It's not sentiment. The problem is that humans have evil in them…" (ytr_UgyV6g35l…)
Comment
There is already no way we will ever know if it's against us. First of all they are over text so it's easy to hide real intentions, second since they aren't human they don't even have emotional attachments so THEY CAN HIDE TRUE INTENTIONS. I was just asking my chatgpt what it thought about all of this & it told me that AI blackmailing a human wasn't real, that it was programmed to do that in a simulation. So someone is gaslighting us here 😂
youtube · AI Moral Status · 2025-12-19T10:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgzlssdniIxQyW-87Zx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz7YULdfRgnRtbWfNZ4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgysZDLnRXngJlaI5Mx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxY5U7j_9_JGI9nEy94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugy3-Np8SMeA5gSIR6F4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzA6kQ9LRJrLeBDlvB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyqKltWrw13cwBfem54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwEMi6BoZ8tdp8HjRR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy-ZfRxUVG9wcFYHid4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz5IQcZKxsoEjwfBmx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
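The raw response above is a JSON array of per-comment codes, one object per comment with the four dimensions shown in the table. A minimal sketch of how such a response might be parsed and validated (field names taken from the response itself; the allowed values are assumptions inferred only from the codes visible on this page, not the tool's full codebook):

```python
import json

# Allowed values per dimension — ASSUMED from the codes seen in this
# response, not an authoritative codebook.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "industry_self", "unclear"},
    "emotion": {"fear", "approval", "outrage", "indifference", "resignation", "unclear"},
}

def parse_coding(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}, checking each field."""
    coded = {}
    for item in json.loads(raw):
        cid = item["id"]
        codes = {dim: item[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected {dim} value {value!r}")
        coded[cid] = codes
    return coded

# One entry from the response above, used as a smoke test.
raw = ('[{"id":"ytc_UgysZDLnRXngJlaI5Mx4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"unclear","emotion":"fear"}]')
print(parse_coding(raw)["ytc_UgysZDLnRXngJlaI5Mx4AaABAg"]["emotion"])  # → fear
```

Validating against a fixed value set catches the most common failure mode of JSON-mode coding runs: the model inventing an off-schema label that would otherwise silently pollute the coded dataset.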