Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Amazing how good that sounds! Actually makes me "feel something" as though it's …
ytc_UgzefQJ-k…
@rivenoak It's not just about personal relationships, it's about huge unemployme…
ytr_UgxSLnOWw…
I would rather grab a phone from 2016 and learn how to draw shit, than to use ai…
ytc_UgzbEbMh7…
This bias is called algorithmic bias, and the data sets the AIs pull from is ske…
ytc_UgwKY_aPo…
How can we trust this debate if chatgpt was used for sponsoring, which means it …
ytc_Ugz_cZs4b…
While I do agree that ai has progressed a lot in the last few years, most of the…
ytr_UgyXYg3F7…
You better start worrying about more than AI or there won’t be a utube community…
ytc_UgwoU9n8P…
in any thread related to AI if your comment is not explicitly shitting on AI the…
rdc_nm78e8h
Comment
I think that Karen Hao’ s message to the world can be likened to Rachel Carson in the 1960’s. Brilliant and amazing breadth and depth of knowledge and journalism! Everyone should read her book, “ Empire of AI”. This is an epic tipping point like the significance of the monolith in “2001 A Space Odyssey “. The difference here is that this is very real…not science fiction… and baring down on humanity very quickly. I wish that I believed that AGI…when it happens…will be “ aligned” with the best for humanity. Sadly, since humans are training these models, the greedy part of human nature cannot be avoided. From what I understand, these kinds of mistakes will move like lightening in terms of being incorporated into these super intelligent machines and cannot be corrected in time to save us. Steven Hawking knew what he was warning about.
youtube
Cross-Cultural
2025-06-30T03:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgzSW4Q1xgZF12GykVl4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugwzaa6_sSCF1SNL4Kx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxiNdcjAkRfjWYqujV4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgyWgH9j3VELIq03Sj14AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzO2ZBO3CMuVH7hCG54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyROJP45-2RInyCcMt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyAmHoz-ZwSNAz1M1J4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_Ugy6mQsco82yGxjvsqt4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzDcBXH_FkIIhFK9Yt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy7y6XJtDFZRbh7_8J4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"}
]
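The raw response above is a JSON array in which each row carries a comment ID plus the four coding dimensions (responsibility, reasoning, policy, emotion), so the "look up by comment ID" feature reduces to parsing the array and indexing it by `id`. A minimal sketch, assuming the response text is available as a string (the `index_by_comment_id` helper name is hypothetical, and the sample is truncated to two rows from the response above):

```python
import json

# Two rows copied from the raw LLM response above, for illustration.
raw_response = '''
[
 {"id":"ytc_UgzSW4Q1xgZF12GykVl4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgzO2ZBO3CMuVH7hCG54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
'''

def index_by_comment_id(response_text: str) -> dict:
    """Parse the model output and index each coding row by its comment ID."""
    rows = json.loads(response_text)
    return {row["id"]: row for row in rows}

codes = index_by_comment_id(raw_response)
print(codes["ytc_UgzO2ZBO3CMuVH7hCG54AaABAg"]["emotion"])  # approval
```

The second lookup above reproduces the "Coding Result" table shown earlier (responsibility: none, reasoning: consequentialist, policy: none, emotion: approval), which is how a coded comment's dimensions can be cross-checked against the exact model output.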