Raw LLM Responses
Inspect the exact model output behind any coded comment.
Look up by comment ID
Random samples — click to inspect
Not if we stop it it won't.
No one knows how to get a broadly superhuman AI to …
ytr_UgzjLsKym…
Thank you for sharing your thoughts. AI technology indeed raises important ethic…
ytr_UgzRO4QQk…
Production speed, maybe. Quality not so much, since it is still gonna depend on …
ytr_Ugz_TIHce…
Imagine people that think this is real 😫 people are to be very careful when usin…
ytc_UgzQoZI7d…
The argument is just borderline delusional. Using AI and having an app that make…
ytc_UgwzjYKZz…
I just asked chatgpt this exact question and here is its answer.
That’s a grea…
ytr_UgzwRJUbK…
If humans are merely collections of genetic predispositions that are affected in…
ytc_Ugya6EgdW…
It's taking over "parts" not your job completely. Imagine a person who doesn't k…
ytr_Ugy0Noc4J…
Comment
Dont worry, we will all be rendered useless, including all the precious c-suite suits, office drones and blue collar peasants alike. Human fate lies with being ground up for the protein bank, all to fuel biotech AI.
youtube
AI Governance
2025-07-02T18:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_Ugw9M1uYr5Vfe7yaIiV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzqG2IisacfWMVZkF14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwXc4i6XoCyE_QUIGR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyaUucS5_IRHZtix5Z4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxKxB7QADoJygHuSp14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxtorJLYsgZrFbEEjV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzYki8llSlV0vuh0gx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugxdu1TKkvnn4d7TzER4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxiZZz7TaFmVGJy79F4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwY_Zg97bucRB9A7Fh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}]
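The Coding Result table above is filled from a raw response like this one. A minimal sketch of how that lookup could work, assuming the response parses as a JSON array of records with one entry per comment ID (the helper name `index_codes` and the example input are illustrative, not the tool's actual code):

```python
import json

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw: str) -> dict:
    """Map comment ID -> {dimension: value} from a raw coding response.

    Missing dimensions fall back to "unclear", matching the table's
    default value for uncoded fields.
    """
    records = json.loads(raw)
    return {
        rec["id"]: {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
        for rec in records
    }

# Hypothetical example response with a single coded comment.
raw_response = """[
 {"id": "ytc_UgxKxB7QADoJygHuSp14AaABAg",
  "responsibility": "developer", "reasoning": "deontological",
  "policy": "regulate", "emotion": "outrage"}
]"""

codes = index_codes(raw_response)
print(codes["ytc_UgxKxB7QADoJygHuSp14AaABAg"]["policy"])  # regulate
```

A response with a mismatched closing bracket (as raw model output sometimes has) would raise `json.JSONDecodeError` here, so a production pipeline would likely validate or repair the text before indexing it.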