Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- to find the worst in this, some btcha$$ churches are already implementing AI Jes… (ytr_UgzxetkaE…)
- Thats why your major matters now in my opinion. You should focus more on which c… (ytc_UgzBdH4WX…)
- And yet despite all these concerns and warning signs, they continue to create an… (ytc_UgwzleD-h…)
- Aha the dude used ChatGPT-3.5. All answers from that one came directly from the … (ytc_Ugy_ouSln…)
- Oh wow now that's make sense, AI don't "create" an art, it just learn by copying… (ytc_Ugzo5LFLk…)
- Of course they will be replaced. All executers will be replaced. Some might stay… (ytc_UgyeFRIVD…)
- Don't worry, AI will get smarter and us humans will get dumber and eventually we… (ytc_Ugxx8sPxV…)
- To sum up, humans are garbage and we probably deserve extermination from our sen… (ytc_UghKeWexK…)
Comment
Just started to touch on the real issue at hand at the end. The real question is- what are the outcomes of building an AGI? What data are companies using and what models are they building behind closed doors? What do they know that we don't? What insights are they gaining? Especially regarding social media algorithms- probably the most insidious form of AI. The motives of the likes of sam altman are obviously complete control and omniscience- it's the unintended consequences of their egoic ignorance that will get out of control FAST. So fast that by the time there is an unstoppable threat to humanity, they'll just be starting to get the hint that they're no longer in control
youtube · Cross-Cultural · 2025-07-01T22:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgxXQlGvKa3FhPGPRol4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugzv7S-JpbmpiPULBB14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugytt7npj9nMdmfeG4Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwMKP3WrF-OqW9LgbF4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugxs2eNr1qmSdmsoUNB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxocQAtM_U02k9Hq194AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwJ8bKutlIPBv-Zj1d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyKAGV-VtZ95CzHW6h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzMpe90KKODOqWdH0h4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx7GA96lz_zjiGPQap4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"}
]
```
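The raw response is a JSON array of rows, one per comment, each carrying an `id` plus the four coding dimensions shown in the result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and indexed for lookup by comment ID — the helper name `index_by_id` is hypothetical, not part of the actual pipeline, and the sample row is copied from the response above:

```python
import json

# One row copied verbatim from the raw response above, for illustration.
RAW_RESPONSE = """[
{"id":"ytc_UgyKAGV-VtZ95CzHW6h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]"""

# The four coding dimensions, as shown in the "Coding Result" table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw: str) -> dict:
    """Parse a raw LLM coding response and index the rows by comment ID.

    Raises ValueError if a row is missing its id or any dimension, so a
    malformed model output fails loudly instead of being silently stored.
    """
    coded = {}
    for row in json.loads(raw):
        missing = [d for d in DIMENSIONS if d not in row]
        if "id" not in row or missing:
            raise ValueError(f"malformed row: {row!r}")
        coded[row["id"]] = {d: row[d] for d in DIMENSIONS}
    return coded

coded = index_by_id(RAW_RESPONSE)
print(coded["ytc_UgyKAGV-VtZ95CzHW6h4AaABAg"]["emotion"])  # fear
```

Indexing by ID is what makes the "inspect the exact model output for any coded comment" lookup cheap: one parse per response, then constant-time retrieval per comment.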