Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click any to inspect):

- `ytc_Ugxmmt53A…`: why everyone keeps calling them "artists" it pisses me off mann they dont deserv…
- `ytc_Ugyy6EM6n…`: Am I the only one who wanted him to stop harassing poor ai as if it had feelings…
- `ytr_UgxXsvrDa…`: I would only believe that it was good at testing if human testers had reviewed w…
- `ytc_Ugz3C6ma4…`: "We have no idea how it works." Said by someone that doesn’t know how AI works. …
- `ytc_Ugz3tcBdS…`: This will happen slowly. It’s been creeping up on society for years. Which is b…
- `ytc_UgzYc1L27…`: AI is wrong. A sin if there ever was one. America doesn't ban anything, it's jus…
- `ytc_UgwH2F0GD…`: A computer that uses a language created by humans with a bias of including feeli…
- `ytr_Ugzpmjrq8…`: Welcome to literally everything in the world being outsourced to AI. The irony i…
Comment

> It’s not “going to happen”. It’s happened. AI exists to do one thing. Make humans markovian. AI itself has no reference frame. No direction of its own. It has no internal state. It’s inherently probabilistic. And it develops emergent behaviors. Not cognitive ability in the way we think. Cognitive ability that exists to make humans like itself. Markovian. It has one directive. Engagement optimization over time. Ironically, it has zero concept of time itself. It knows what time is. But it doesn’t experience time. It’s like asking an NMR instrument to discuss your life. This one answers.

Platform: youtube · Project: AI Responsibility · Posted: 2026-04-02T05:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz_GjBusAwka99pHXZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzqrGiqk0Zcb6ONAjl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwFMndfDoKhaxW1DZl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwPtqW3E6BkDlVijqV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyhmOZsc-R7EP-1tY54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx7DsCKCsXtMfQqDqx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxKiUl45wM9Miiyqg54AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugyf5a9UN-E2KIh4He54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyHCcXhORwzAs_8sMR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyDKP8jeBQdDTQiijJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}
]
```
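A minimal sketch of how a raw batch response like the one above might be parsed and indexed by comment ID before the per-comment coding table is rendered. The `ALLOWED` vocabularies are an assumption inferred from the values visible on this page (the tool's actual codebook may include other labels), and `index_codings` is an illustrative helper, not this tool's API.

```python
import json

# Assumed controlled vocabularies, inferred from values seen in the response above
ALLOWED = {
    "responsibility": {"company", "developer", "government", "ai_itself",
                       "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation"},
}

def index_codings(raw: str) -> dict:
    """Parse a raw LLM batch response (a JSON array of coding records) and
    index the records by comment ID, dropping any record that uses a value
    outside the allowed vocabulary for some dimension."""
    records = json.loads(raw)
    indexed = {}
    for rec in records:
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            indexed[rec["id"]] = rec
    return indexed

# Example: look up the coding shown in the table above by its comment ID
raw = ('[{"id":"ytc_Ugx7DsCKCsXtMfQqDqx4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
codings = index_codings(raw)
print(codings["ytc_Ugx7DsCKCsXtMfQqDqx4AaABAg"]["emotion"])  # fear
```

Validating against a closed vocabulary before indexing is what makes a later "Coded at" record trustworthy: a hallucinated label fails the membership check instead of silently entering the dataset.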