Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- Is this why most governments seem to be pushing communist style ideologies? Bec… (ytc_Ugy2XYXVI…)
- I don't like people... except hot girls lol, so I'm all for AI. We just need bas… (ytc_Ugy3aV28r…)
- If the training set includes copyrighted work, how is output not derivative by d… (ytc_Ugypnym0a…)
- AI says "All Humans look alike. Kill them all." Human obeys the order, 'cause "C… (ytc_UgxCmywTB…)
- The creators of Suno don't care about our music, they think the whole world list… (ytr_UgxuoaXda…)
- So smart they’re stupid! Can someone please train Ai to destroy the evil elites?… (ytc_UgwJgHnYK…)
- While I would normally be all for automating crappy jobs (trucking has got to be… (ytc_UgyV96b8g…)
- I just watched the WhistlenDiesel video. The only thing these clankers are going… (ytc_Ugwqiy_8U…)
Comment
There's insufficient regulation in place now to protect humanity from the developments of AI because AI is already putting white collar workers out of jobs NOW! As AI capacity increases how do you think that's going to affect people in all sectors? Having an AI doctor that can diagnose cancer faster is going to be redundant if it puts human doctors and others out of work in the process there needs to be an extensive discussion on the scope of AI and serious humane decisions and rules set up to weigh up positive and negative outcomes. The open letter on AI from 2015 is more relevant now when considering that some of the things predicted by its signatories have already occurred in the past decade.
youtube · Cross-Cultural · 2025-07-05T11:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyFeHy9G4sfz1iooxF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwjjgZycDlFTL-G7gN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyxN8ssJyMpakRuJUx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgwDxueJK1r1xwevsx14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwKrnWVm5ymrw51Bdh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzSfrZOWvafB-9VtdB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugw0kSFItRftBZ8bY2J4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugzuopg50LtiISXO8SN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgymbPMU0qQOORQ9bQB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwTQkIKO-5Fk3S7yKl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
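A raw response in this shape can be parsed and indexed by comment ID for the lookup described above. The following is a minimal sketch, assuming the batched JSON-array format shown; the two truncated sample records and the `index_by_comment_id` helper name are illustrative, not part of the tool itself.

```python
import json

# A raw LLM coding response in the format shown above,
# truncated here to two records for illustration.
raw_response = """
[
  {"id":"ytc_UgwTQkIKO-5Fk3S7yKl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwDxueJK1r1xwevsx14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
"""

def index_by_comment_id(response_text):
    """Parse a raw LLM coding response and map each record to its comment ID."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

codings = index_by_comment_id(raw_response)

# Look up the full coding for a single comment ID.
print(codings["ytc_UgwTQkIKO-5Fk3S7yKl4AaABAg"]["policy"])  # regulate
```

Indexing by ID rather than scanning the list each time keeps lookups constant-time, which matters when cross-referencing many coded comments against their raw responses.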