Raw LLM Responses
Inspect the exact model output for any coded comment: look up a comment by its ID, or pick one of the random samples below.
Random samples:

- `ytc_UgwR6KLb3…` — "I hate to bump up your algorithm numbers with a comment but... gotta ask the sim…"
- `ytc_UgydXVHMw…` — "People still think that Ai is like a computer. This thing has evolved into being…"
- `ytc_UgwtuaLEr…` — "I feel good because of the fact that abstract art is just not something AI can r…"
- `ytc_UgwkUbc5s…` — "The difference between an actual artist and an "AI artist" is a piece of paper a…"
- `ytc_UgwpWq2OV…` — "But...but we can't put any regulations on the AI industry!! We need to let the t…"
- `rdc_lgtjrbg` — "The simple reality is if your job is easy enough for a computer to do it, then a…"
- `ytc_UgyHzDksS…` — "Brother!! I dont know where u got the only 5% is because of AI. A company today …"
- `ytr_UgwmNxU8j…` — "@keithpoppy9227someone scaring you AI will take over while selling tools fir AI?…"
Comment

> The first thing is that they should not make a single robot in humanoid form, they should not look like us, we should never confuse them as human, they are not us. The biggest issue is job loss and what humans will do for money and how we will function having no purpose. The companies developing AI will become quadrillionaires, and many of those who are developing AGI have no interest in saving humanity. Why would they want to give us universal income so we can survive? They have exactly zero motivation to take care of us, it does not benefit them, in fact ultimately they may view us as the insects using resources they need for themselves. If they cared about the survival of the human race they would have chosen to do this only when they put every safeguard in place first. We can all thank Sam Altman for safety being completely abandoned. They are far more interested in their own salvation so they can witness what is to come, we are just something getting in their way!

youtube · AI Governance · 2025-12-08T21:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxU_zhG_Jo59YxLJRJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzhyhMkmGf8kCJK1RB4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxM728SphNwsfrOr-d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugyy0ZIV6sTro2cEUf54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxkrnOJh5y8fnIp-th4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugy8jUXR8BLjZlxC_a14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_Ugw6mt-dDKEoqBj4pOJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugys_yRWukI_tTyMULB4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwpEHZYFZqdlwhOxbx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_Ugyq0OHXDF5CIU60I994AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"outrage"}
]
```
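A response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal example, not the tool's actual ingestion code: the allowed values per dimension are inferred only from the samples shown here (the real codebook may define more categories), and the ID prefixes (`ytc_`, `ytr_`, `rdc_`) are likewise taken from the IDs visible on this page.

```python
import json

# Allowed values per dimension, inferred from the samples on this page;
# the real codebook may include additional categories.
SCHEMA = {
    "responsibility": {"company", "user", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "unclear"},
}

# Source prefixes seen in this dump: YouTube comments/replies, Reddit comments.
ID_PREFIXES = ("ytc_", "ytr_", "rdc_")


def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every coded row against SCHEMA.

    Raises ValueError on malformed JSON, unknown IDs, or out-of-schema codes.
    """
    rows = json.loads(raw)
    for row in rows:
        if not row.get("id", "").startswith(ID_PREFIXES):
            raise ValueError(f"unrecognized comment id: {row.get('id')!r}")
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: bad {dim}={row.get(dim)!r}")
    return rows
```

Validating against a closed vocabulary like this catches the common failure mode where the model invents a label outside the codebook, so bad rows can be re-coded instead of silently polluting the dataset.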