Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a coded comment by its ID, or browse these random samples:

- `ytc_Ugy8i8LUk…`: "So basically the mindset of MAX PROFIT will lead to creating AI that is not safe…"
- `rdc_ecza9f2`: ">TBH, driving in AZ is harrowing It's mostly because so many assholes drive …"
- `ytc_Ugze7E7Ov…`: "Sup brah, listened to you for like idk, 45m. I think your position here is mor…"
- `ytc_UgzAlAOKD…`: "I literally haven't even started watching the video and the pre roll ad is for A…"
- `rdc_oglxp83`: "A sign that AI is driving investing. AI sees AI, AI invests in AI. Then the huma…"
- `ytc_UgwQncDe1…`: "So all you did was to display the hate and salt of the anti AI crowd. Good job…"
- `ytc_UgwB31VYF…`: "I don't hate AI, but as one people previously posted, it shouldn't passed off as…"
- `ytc_UgzuXgh4c…`: "Perhaps we will not be fighting with each other in the future but humanity will …"
Comment

> As the risk of AI increases, the adoption of AI slows. The same thing has already happened with self-driving cars (the tech was created a decade ago, yet humans have chosen not to allow them on our streets). AI will be no different - when it becomes dangerous, it will be regulated and/or banned.

youtube · AI Governance · 2025-06-16T09:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response

```json
[
  {"id":"ytc_UgyJv7o5dpFRjhfqKOp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw6-ctKmxFl2vgITzN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwhaRTotNhi-72fhh54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx8wXvQcU-fm_03Jjh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxxsKI_VgmLKV4OsrR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzmVMUMyyM7gZzWzal4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzMQD-z5M-bPvr6LpN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxDlTeWZ0fXSjqCKhh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxAhdK1KgnZbXjW_0F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyokSLoknCeP2M0lFR4AaABAg","responsibility":"user","reasoning":"mixed","policy":"industry_self","emotion":"mixed"}
]
```
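A raw response in this format can be validated before its codes are stored. The sketch below is a minimal, hypothetical validator: the allowed value sets are inferred only from the sample batch above (the full codebook may define more categories), and the function name `validate_batch` is an illustration, not part of the tool.

```python
import json

# Allowed values per dimension, inferred from the sample batch above.
# ASSUMPTION: the real codebook may permit additional categories.
DIMENSIONS = {
    "responsibility": {"developer", "company", "user", "ai_itself",
                       "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "ban", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate_batch(raw: str) -> list:
    """Parse a raw LLM response and reject malformed or off-schema records."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of records")
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing id: {rec!r}")
        for dim, allowed in DIMENSIONS.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={value!r}")
    return records
```

Rejecting a whole batch on the first off-schema value keeps partial or mis-keyed model output out of the coding table; a gentler variant could instead collect per-record errors and re-prompt only the failed IDs.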