Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I really admire the way you use your platform concerning AI “art”! To anyone wat…" (ytc_UgwE_Zrn8…)
- "I’m lowkey obsessed with how the Ryne AI editor catches flow issues that most ot…" (ytc_UgyGvdck1…)
- "They do the same for humans in public for the purpose of control throughout bio …" (ytc_UgxIMqsyF…)
- "You guys watching this fake crap this was a bare knuckle fight with an under dog…" (ytc_UgxoJEmqD…)
- "She right about everything, but WE have to recognize Ai is the tower of babel, t…" (ytc_Ugz5601Yk…)
- "I also asked ChatGPT quite a lot about this conflict. It does walk on eggshells …" (ytc_UgzygJIpM…)
- "You cannot put these driverless vehicles on same road as humans , it won't work …" (ytc_UgxJtwyTy…)
- "Hate to break it to you... AI is just going to become MORE racist as it'll soon …" (ytc_UgxO3R_3L…)
Comment
You should look at/interview people about Reinforcement Learning. It's a different paradigm in AI research, not (yet) as flashy as LLMs but far more likely to lead to something that would actually be intelligent.
LLMs like ChatGPT are limited by the availability of data, and don't scale very well due to the enormous compute requirements. RL has the potential to escape those limitations. Both more promising and more scary if you ask me.
youtube
AI Moral Status
2025-11-06T18:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
  {"id": "ytc_Ugy3wiP9xx0LJih2OBh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxh7WBbIoZI9420uqd4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzAY8k1qT3unWCBeZ14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyeoiN-O8JRrTGioO94AaABAg", "responsibility": "user", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgztySSOBBVAImnH58h4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwYjt35TvU6v2vvuMB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyGsnUDKJ16DXfO6-94AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxlTPCscpDCNWCEQG54AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzI9dHqVzO3MweAdx54AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugy9j8o9QLjfigBMpyl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
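The raw response is a JSON array of per-comment codings, keyed by comment ID. A minimal sketch of parsing such a payload and looking a coding up by ID, assuming the batch-response shape shown above (the function name, variable names, and the example ID are illustrative, not from the tool):

```python
import json

# Hypothetical raw model output: a JSON array of coding records, one per comment.
# (The ID here is made up for illustration; real IDs look like ytc_Ugy3wiP9...)
raw = (
    '[{"id": "ytc_example123", "responsibility": "none",'
    ' "reasoning": "unclear", "policy": "none", "emotion": "indifference"}]'
)

def index_codings(payload: str) -> dict:
    """Parse a batch coding response and index the records by comment ID."""
    records = json.loads(payload)
    return {rec["id"]: rec for rec in records}

codings = index_codings(raw)
print(codings["ytc_example123"]["emotion"])  # indifference
```

Indexing by ID is what makes the "Look up by comment ID" view cheap: one parse per batch, then constant-time lookups per comment.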