Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "AI can up your game if you use it properly, it can also downgrade your game if y…" (ytc_UgwEQdN0b…)
- "If AI is far more dangerous then why are you protecting AI and making them? I do…" (ytc_Ugwp4fr01…)
- "How often are people able to break ai chatbots and get them to say terrible thin…" (ytc_Ugyiw6da_…)
- "I'd laugh if the Robotaxis are all secretly being driven by some dudes in India.…" (ytc_Ugzsjx8OV…)
- "Yes …. Absolutely ….. Fast And The Furious 13 couldn’t possibly be made by AI. …" (ytr_UgxaQEWR2…)
- "As a social democrat, I'd like to propose a compromise proposal: Heavily automa…" (ytc_UgzyeYAzp…)
- "Remove all the "values" And "morals" from these "Ai" And Their Best Advice Would…" (ytc_UgwxS9Kcv…)
- "I love how they went from “it’s art because I wrote the prompt!” To “use ai to m…" (ytc_UgysQqMuw…)
Comment

> IMO AI technology should definitely be regulated by the US Congress and not the states. That is because AI is a global and overreaching technology that can encompass almost every part of life. The fact that it is controlled and manipulated by Corporate technologist with little oversight is truly concerning. It is ridiculous that you could regulate it state by state with any consistency. So far Congress has totally failed on this and appears frozen into stupefaction on this subject. In the end we will end up paying the price for this failure and lack of attention it now deserves.

youtube · AI Governance · 2025-05-10T17:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz01_5_Hb6yn5xKq5N4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzwurJCdaaT_5UaW314AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzBHMx8_t6BkMBj8Ox4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzrhsbrOvBqAxJLej54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"concern"},
  {"id":"ytc_Ugw2ZTplbMprtYsaOol4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyd1JzU5FReyNYZfL54AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxbueR2JaacTRJvoCJ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugy7Q9frg44rugvZLZx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwG8o6HESvDVl6Z8lJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz0j3RPi6242EsIZhx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}
]
```
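A raw batch response like the one above can be checked before its codes are accepted. The sketch below is a minimal validator, assuming the four dimensions shown in the Coding Result table; the allowed value sets are inferred from the sample output and may not cover the full codebook.

```python
import json

# Allowed values per coding dimension. These are inferred from the sample
# output above (an assumption): the real codebook may define more categories.
ALLOWED = {
    "responsibility": {"company", "government", "developer", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "concern", "outrage", "indifference", "resignation", "unclear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only entries that pass schema checks."""
    entries = json.loads(raw)
    valid = []
    for entry in entries:
        # Comment IDs in the dump start with ytc_ (comment) or ytr_ (reply).
        if not entry.get("id", "").startswith(("ytc_", "ytr_")):
            continue
        # Every coded dimension must be present with an allowed value.
        if all(entry.get(dim) in allowed for dim, allowed in ALLOWED.items()):
            valid.append(entry)
    return valid
```

Entries with an unknown ID prefix or an out-of-vocabulary code are dropped rather than repaired, so a malformed model response never silently enters the coded dataset.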