Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Its so odd to me how we could focus on using ai to make robots to take away hard…" (ytc_UgwcozzRX…)
- "Honestly using ChatGPT even for productivity is such a moral failing, considerin…" (ytc_UgxtxC6q-…)
- "shut the hell up stop using ai. This isn’t just about polar bears dying, which i…" (ytc_Ugx0fO25n…)
- "As an AI language model, I cannot generate a comment on a YouTube video on AI. P…" (ytc_UgzCKvt5x…)
- "Many white collar laughed as blue collar people lost their jobs to automation an…" (ytc_UgzF39CyR…)
- "Someone in another comment linked an article which said he told ChatGPT that he …" (rdc_nat0jry)
- "But what would be the motivation for a robot to do such a thing? Wouldn’t we hav…" (ytc_UgxeCi6Ru…)
- "Fully autonomous is not going to happen soon & I'm buggered if I would ever put …" (ytc_UgxM0Emi5…)
Comment
> Guys, we need AGI to build humanoid robots ASAP so they can help us set up colonies on the Moon and Mars. And there’s really nothing to worry about since governments already control the internet and can limit traffic speed, block certain servers or shut it down whenever they want..

youtube · Cross-Cultural · 2025-11-16T15:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugz0yCImuESpQP4x6Bd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw0LAZksbgEzOtbTFl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyuM3iWucntHYBW94d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugzn7Skx_5AVIwrgXXN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxrJsGh0tdnYE4Xxb94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxVd1QYCLuIx5V2sO94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxqLRasuGHpovpgZ6N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyJpcBZoFwbVk9q2z54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzdFlqWte21A8J13Ut4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzzeyMnkCBScBi5gLR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
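The raw response above is a JSON array with one coding object per comment. A minimal sketch of the look-up step in Python, parsing such a response and indexing codings by comment ID while checking each dimension against the value sets visible on this page (the `SCHEMA` sets are an assumption inferred from these examples; the full codebook may define more categories):

```python
import json

# Allowed values per dimension, inferred from the codings shown on this page
# (an assumption -- the real codebook may include additional categories).
SCHEMA = {
    "responsibility": {"none", "company", "developer", "user", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability", "industry_self", "unclear"},
    "emotion": {"approval", "outrage", "fear", "resignation", "indifference"},
}

# A two-row excerpt of the raw LLM response shown above.
raw_response = """[
 {"id": "ytc_Ugz0yCImuESpQP4x6Bd4AaABAg", "responsibility": "company",
  "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
 {"id": "ytc_UgyuM3iWucntHYBW94d4AaABAg", "responsibility": "none",
  "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"}
]"""

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response and index codings by comment ID,
    rejecting any value that falls outside the known schema."""
    by_id = {}
    for row in json.loads(raw):
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim!r} value {row.get(dim)!r}")
        by_id[row["id"]] = {dim: row[dim] for dim in SCHEMA}
    return by_id

codings = index_codings(raw_response)
print(codings["ytc_UgyuM3iWucntHYBW94d4AaABAg"]["policy"])  # industry_self
```

A look-up by comment ID is then a plain dictionary access, and malformed codings fail loudly at parse time rather than silently entering the analysis.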