Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
I've only used character ai twice.
The second time I used it, I just did the 'Im…
ytc_UgyTHnG2J…
As someone who likes to play around with AI art I can say that the current versi…
ytc_UgzWMM5Ws…
AI poses a potential risk if it is not regulated. There must be rules and regul…
ytc_UgyahBhPZ…
you should ask question about motivation of CEOs saying that. They all have the …
ytc_UgyYsPH5w…
Brady interviewed Professor Mike Merrifield and then his "digital doppelganger" …
ytc_UgyvWbPUr…
AI lol 😆 😂that will never happen when you have to tell it and show it everything…
ytc_UgyfgtPQC…
If before I couldn't find a job, well, now it will be truly impossible.…
ytc_UgxMCEDq2…
i reallly appreciate this video. though i am a super hard hater of AI, lol... y…
ytc_Ugyf2FMcd…
Comment
If Sasha used AI properly, it would tell her that CO2 does not have the impact that she is concerned about. Basic Science + Facts.
youtube · AI Responsibility · 2025-08-25T22:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgyfN0Ed2ixcNrYGQ1d4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzb-jUFkJtv6aGYhFB4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy0CLUNXNYo1Vtpe6F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxunGyKpIWwOG0kmSt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx-q7SCGKoTFzmSq3B4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw4LhLfwhvXiguj0_14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzHK4L4L0XmJpA13IB4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyJiGNaihSJSgYLLtN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxKc6Sfo9mUzZi5Rjh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugx8f7M9UMBNDEnsICx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
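The raw response above is a JSON array of per-comment coding objects, which makes look-up by comment ID straightforward. A minimal sketch (assuming the response is valid JSON in exactly this shape; variable names here are illustrative, not part of any tool shown above):

```python
import json

# Raw LLM response: a JSON array of coding objects, one per comment.
# Truncated here to two entries from the batch shown above.
raw_response = '''
[
  {"id":"ytc_UgyfN0Ed2ixcNrYGQ1d4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzb-jUFkJtv6aGYhFB4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
'''

# Index the batch by comment ID so any single comment's codes
# can be retrieved in O(1).
coded = {row["id"]: row for row in json.loads(raw_response)}

row = coded["ytc_Ugzb-jUFkJtv6aGYhFB4AaABAg"]
print(row["responsibility"], row["policy"])  # government regulate
```

The same index makes it easy to join the coded dimensions back onto the original comment text by ID before rendering a result table like the one above.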