Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
This will happen. But I doubt in the year 2028. If we humans are here in hundreds of years something like this is inevitable because if you think about all technology over the course of humanity, technology has always been the tool or tools that help us do things without thinking about it. One of endless examples is the toaster which given is a basic example but helps us toasts without doing it. Before we would have to make a fire or similar and hold it there, look when it’s finished and stop the action ourselves. AI is just another form of not thinking for ourselves but the problem we may face is AI ‘may’ be possible to direct actions without the guidance of humans. Therefore with consequences we may not be able to control.
| Source | Topic | Posted |
|---|---|---|
| youtube | AI Governance | 2025-08-26T21:1… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgxbCZ5rUTCvXJUPTfF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugxs0gcyVzKWivoGr3V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwvNb-xFRs9R4Gl2lh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw-kJul4C6dqZU_N4x4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwDHNq1O45zRNEX_-R4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwX1Fi1YP-B5ztpcBF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyRc39gocaHOW9BBLR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwE7tqX7O3pGsv8AIx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzkIJMtudxinVq8xNF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyjDWS35FZT7Ody6tp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
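A raw response like the one above can be parsed and sanity-checked before the per-comment codings are stored. The sketch below is a minimal, hypothetical validator: the allowed values in `CODEBOOK` are inferred only from the codings visible on this page, and the actual codebook and pipeline may differ.

```python
import json

# Allowed values per coding dimension -- inferred from the responses
# shown above (an assumption; the real codebook may have more labels).
CODEBOOK = {
    "responsibility": {"none", "unclear", "ai_itself", "distributed",
                       "government", "developer", "company"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "unclear", "regulate", "ban", "liability"},
    "emotion": {"indifference", "resignation", "approval", "fear",
                "outrage"},
}


def parse_batch(raw: str) -> dict:
    """Parse one raw LLM response (a JSON array of coded comments)
    and index the codings by comment ID, rejecting malformed rows."""
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id", "")
        # Comment IDs on this page use ytc_/ytr_ prefixes.
        if not cid.startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment id: {cid!r}")
        for dim, allowed in CODEBOOK.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad {dim} value {row.get(dim)!r}")
        coded[cid] = {dim: row[dim] for dim in CODEBOOK}
    return coded


# Usage: one row taken verbatim from the response above.
raw = ('[{"id":"ytc_Ugxs0gcyVzKWivoGr3V4AaABAg",'
       '"responsibility":"none","reasoning":"consequentialist",'
       '"policy":"none","emotion":"resignation"}]')
batch = parse_batch(raw)
print(batch["ytc_Ugxs0gcyVzKWivoGr3V4AaABAg"]["emotion"])  # resignation
```

Indexing by ID makes the "look up by comment ID" view a single dictionary access, and rejecting rows with out-of-codebook values catches the most common LLM coding failure (an invented label) at ingest time.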