Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
The Only thing i can say to and about this, is "SERVES US RIGHT". The AI is right, beside everything that makes us GOOD as humans in our core we are BAD if it whouldnt be like that our world whouldnt look like it is right now. Sure it whould look way worse if there wasnt the GOOD but why the GOOD get lead or reighned by the BAD? Because u cant get rid of the BAD if ur GOOD since a good person cant do such actions because this whould make him BAD. The point is even an revolution for the good have be done through bad actions since rly bad ppl cant be convinced by words and will not give up there power.
Platform: youtube · Topic: AI Governance · Posted: 2023-07-07T16:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | virtue |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzJ4DspGmGM-hwxMsJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyNNEsz-XG0LUXqBe14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwfjVj1U8zv55TI3TN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwDHZIYqAh8Rp2waoh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugwia98UtwU6bdWOO9N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzPHdgKIeNaPlwKivh4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxagcOEl-jE1aoEcUx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwLiZlgCvf9ufyhrrt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
  {"id":"ytc_UgyoIfkE0f7NUbKEhPN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzbMuFjc0dfDsdXAKZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
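The raw response above is a JSON array, one object per coded comment, with the four coding dimensions as keys. A minimal sketch of how such a response could be parsed and validated — the `ALLOWED` value sets below are inferred only from the examples shown on this page, and the full codebook may define more categories:

```python
import json

# Allowed values per dimension, inferred from the sample responses above
# (assumption: the real codebook may include additional categories).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "distributed", "company", "developer"},
    "reasoning": {"unclear", "consequentialist", "virtue", "deontological"},
    "policy": {"none", "ban", "industry_self"},
    "emotion": {"indifference", "fear", "resignation", "approval", "mixed", "outrage"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse one raw LLM response (a JSON array of coded comments)
    into a dict keyed by comment ID, rejecting out-of-codebook values."""
    coded = {}
    for item in json.loads(raw):
        cid = item["id"]
        for dim, allowed in ALLOWED.items():
            if item.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad value {item.get(dim)!r} for {dim!r}")
        coded[cid] = {dim: item[dim] for dim in ALLOWED}
    return coded

# Usage with a hypothetical one-element response:
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"virtue","policy":"none","emotion":"mixed"}]')
result = parse_raw_response(raw)
print(result["ytc_example"]["reasoning"])  # virtue
```

Keying the parsed output by comment ID makes the "inspect the exact model output for any coded comment" lookup a plain dictionary access, and the validation step catches responses where the model drifted outside the codebook.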