Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "One of the biggest problems in AGI is that it requires far more computational re…" (`ytr_UgzqvP_89…`)
- "Any time that AI is targeted to those under 25 years of age, we're going to have…" (`ytc_Ugz3wxDgs…`)
- "This Earth is a big experiment. There is absolutely nothing anyone should worry …" (`ytc_UgwgYfMPF…`)
- "Why USA creating AI is good but China doing the same it is suddenly bad? Do you …" (`ytc_UgyIaSNz9…`)
- "If AI is so intelligent that it will become better than humans, do you think it …" (`ytc_UgzjUCqeS…`)
- "It won't be AI to replace humans, it will be some greedy people to make it do so…" (`ytc_UgxDmZhqd…`)
- "I don't think our current AI will lead to AGI. There is more to human consciousn…" (`ytc_UgzKq4pTc…`)
- "Hate to say it but If you use Ai to 'make' art. You're not an artist. You didn't…" (`ytc_Ugw1_nYTO…`)
Comment
> Well AI certainly can't be worse than public defenders. If the future, they will be as good as the best lawyers if not better.

Source: youtube · Cross-Cultural · 2025-09-02T08:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugyyy2Eo3nYoQJ6HKzR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwsngAsxrMkg3lzhcV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw-_wt3Ani7lWe34KB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwJNsFKwmSr8zfo4-p4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx1C61kNC3xzwbKyG14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw6VqUkMR54rqZ3om14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxrBnj7IO9SJ_WKthN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx_KIMPPRsV4VnyNgp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgweZP0B5zXEO5Xja8d4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwRWDkkNx6Ry57TeBJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}
]
```
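Since the raw model output is a JSON array of per-comment codes keyed by `id`, looking up the coding for a single comment reduces to parsing the array and indexing it by ID. A minimal sketch (the two sample records below are abbreviated from the response above; the variable names are illustrative, not part of the tool):

```python
import json

# Raw batch response: a JSON array of per-comment codes, as returned by the model.
raw = """
[
  {"id": "ytc_Ugyyy2Eo3nYoQJ6HKzR4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwJNsFKwmSr8zfo4-p4AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "none", "emotion": "outrage"}
]
"""

# Index the records by comment ID for constant-time lookup.
codes = {row["id"]: row for row in json.loads(raw)}

# Fetch the coding result for one comment.
result = codes["ytc_UgwJNsFKwmSr8zfo4-p4AaABAg"]
print(result["reasoning"], result["emotion"])  # virtue outrage
```

The same index supports the dashboard's two access paths: direct lookup by comment ID, and iterating over `codes.values()` to draw random samples.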