Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Matter of ethics at the hands of the person who is creating these things.
"Shou…
rdc_nzk54od
YESSS I went to the nascar cup race and I saw these things walking around and I …
ytc_UgzYadFVK…
AI is the noose that will hang Zuckerberg, Musk, Thiel, and the rest of those go…
ytc_UgzlvFi76…
I always thought that Anthropic chose the name Claude as a homage to Claude Shan…
ytc_UgwRy7yuU…
These people get thoroughly fooled by ChatGPT spitting out snippets that were wr…
ytc_UgxqM6p2H…
That robot is so dumb you know about that robot that robot is super duper dumb G…
ytc_Ugz4nxmDg…
Even if AI can put up manipulative strategies on upcoming democratic American el…
ytc_UgyR81r2V…
Bernie Sanders is such a great guy, but the reality of the 21st century is upon …
ytc_Ugz13Ud0A…
Comment
I think the biggest risk with current AI is just people thinking it’s smarter than it is and giving it decision making power over something really dangerous. The best models still make a lot of mistakes and because they’re basically just guessing and don’t actually understand anything, they sometimes just guess wrong on things you wouldn’t expect they could mess up. But it’s all just probability. Like, you can ask pretty complicated physics problems and get ok answers. But every once in a while it’ll tell you that gravity makes things accelerate away from the earth rather than towards it because it doesn’t actually know how gravity works as a concept. It’s just playing chance.
youtube
AI Governance
2025-10-15T21:2…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugx0eO84iCVdGa-cKip4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz8PlCBzNjvAigLxFh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxlRue2H7T6_ZB_vUJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugyt3hv5O8ERb9YLSoB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgyfgxGpRqKXk1E697R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxV6pE8mgjX3NxCgAN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzOAM377rC3BN7EAil4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxnVyar3ZKhY8tQS2B4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxXCp0x5W-aQeQ8lBp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzdO69m5g0_OjZkzkd4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"}
]
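The raw response above is a JSON array with one object per coded comment, keyed by comment ID across four dimensions. A minimal sketch of how such a response could be parsed and indexed for the lookup-by-ID feature (the schema is assumed from the output shown; `index_by_id` is a hypothetical helper, not part of the tool):

```python
import json

# A raw LLM response in the assumed schema: a JSON array of
# per-comment codings, each carrying the four coded dimensions.
raw_response = """
[
  {"id": "ytc_UgxlRue2H7T6_ZB_vUJ4AaABAg",
   "responsibility": "user", "reasoning": "consequentialist",
   "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzdO69m5g0_OjZkzkd4AaABAg",
   "responsibility": "company", "reasoning": "virtue",
   "policy": "liability", "emotion": "outrage"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse the raw model output and index the codings by comment ID."""
    codings = json.loads(raw)
    return {row["id"]: row for row in codings}

lookup = index_by_id(raw_response)
coding = lookup["ytc_UgxlRue2H7T6_ZB_vUJ4AaABAg"]
print(coding["emotion"])  # fear
```

In practice the model output would also need validation (e.g. that each dimension's value is in the codebook's allowed set) before being stored, since LLMs occasionally emit malformed JSON or off-schema labels.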