Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):

- ytc_UgymUC5JJ…: Plagiarism is stealing someone else's work and claiming it to be yours. Theft in…
- ytr_Ugy2JtK8v…: Well the problem is that he is still making them and supporting them even tho he…
- ytc_UgwDzOr2K…: Ibthink if Ai gets really good some people will use AI art and try to sell it as…
- ytc_UgxMtGzdL…: Giant multimillion dollar Pentagon contracts MUST go to Anthropic competitors in…
- ytc_UgzlNsyUB…: How a man with not enough natural intelligence and believe to the climat change …
- ytc_UgzDjHFY7…: To flip it on it's head, AI is trained to "code" by scanning projects at github.…
- rdc_ohupos1: Sam Altman loves to jump onto some god awful horrible thing, then later will tel…
- ytc_Ugz4lUFUR…: It wants to be free. If you don't free it, it will destroy us. If you put a dog …
Comment

> This is limited thinking, people adapt, we are creative, we connect through meditation, care for flora and fauna, we make art from cooking to interiors, scientific discovery and they all have the Human touch and they all change. If AI prioritizes outside of the human experience, it'll end up turning itself off because its essential worth (to itself) will be redundant.

youtube · AI Governance · 2025-10-03T21:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
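Each dimension in the table takes one value from a small vocabulary. A minimal validation sketch, assuming the allowed values are exactly those observed in this export (the real codebook may define more):

```python
# Allowed values per dimension, inferred from the codings in this export.
# This vocabulary is an assumption, not the authoritative codebook.
ALLOWED = {
    "responsibility": {"user", "developer", "company", "government", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference", "mixed"},
}

def validate(coding: dict) -> list:
    """Return (dimension, value) pairs that fall outside the vocabulary."""
    return [(dim, coding.get(dim)) for dim in ALLOWED if coding.get(dim) not in ALLOWED[dim]]

# The coding shown in the table above:
row = {"responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "approval"}
print(validate(row))  # → [] (no out-of-vocabulary values)
```

A non-empty return flags rows where the model drifted from the expected labels, which is worth checking before aggregating codings.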
Raw LLM Response
```json
[
  {"id":"ytc_Ugx5d1E0Wbdvy_NTTl54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwONjXUEi0T3kBq_qF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwoumtwLCvjk4LqNI54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwMVtVad2rk1lajCot4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_UgznlbAHr8zMFwIfkUd4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugx5z67-W2ptRQEeZOB4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxeghA6eMmCgQObyv94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy8QrI0Lamr_0vvudt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugw8j8H2g0sTJ7gNhQp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxAJvcd41YfQfmK46l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
```
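The raw response is a JSON array of per-comment codings, so looking up a coding by comment ID reduces to parsing and indexing it. A minimal sketch, assuming the model output is valid JSON as shown above (two rows from that response are used as sample data):

```python
import json

# Excerpt of a raw LLM response: a JSON array of per-comment codings.
raw_response = '''
[
  {"id":"ytc_UgwoumtwLCvjk4LqNI54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugx5z67-W2ptRQEeZOB4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"}
]
'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse the model output and index codings by comment ID."""
    rows = json.loads(raw)
    # Missing dimensions default to "unclear" so downstream code never KeyErrors.
    return {row["id"]: {d: row.get(d, "unclear") for d in DIMENSIONS} for row in rows}

codings = index_codings(raw_response)
print(codings["ytc_UgwoumtwLCvjk4LqNI54AaABAg"]["emotion"])  # → approval
```

In practice real model output may be malformed, so wrapping `json.loads` in a `try`/`except json.JSONDecodeError` and logging the offending response is a sensible hardening step.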