Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- @laurentiuvladutmaneamy claims are easily proven. Check out Firefly, Midjourney,… (ytr_UgyRqKFMM…)
- @DETERMINATION355 The only way AI can steal art is if it is trained to do a spec… (ytr_UgywgOCMt…)
- The difference is that… you still learnt the tool, you still learnt how to smudg… (ytc_UgwNX-ujs…)
- CEO? Who is a ceo but a member of board and the face of the company. More often … (rdc_jszf684)
- Im 31, and frankly I think I could make a more fair and stable living selling wa… (ytc_UgzJOTx_L…)
- Dude it's just getting started. Current hardware is inferior to what is about to… (ytr_UgxWkOCjB…)
- The trouble as far as I understand it, is if anyone wants to programme AI to be … (ytc_UgxupcZ14…)
- I mean I'm sorry for the little girls or whatever but let me say this loud and c… (ytc_UgwuE49VG…)
Comment
50:40 before Ilya and Roman calling out major issues with ASI safety, there was Hugo de Garis. Hugo was interviewed in the 2009 Ray Kurzweil documentary “Transcendent Man”. Hugo was the dissenter to Kurzweil’s optimistic Ai future. Hugo predicted the death of billions, primarily thru world alignments for and against ASI. The result is a devastating world war. Hugo coined the term Artilect which he thought was better than saying Ai, but it never caught on. He did write a sci-fi book called the “The Artilect War”
youtube · AI Governance · 2025-09-05T03:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

```json
[
  {"id": "ytc_UgyOrZ_M_PMVAo0JyKN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyTuGSBUl5h69IAHJZ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugypk2pN6fTst_VSLDp4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw0P5hzgah8tf99fgV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugx10RkPeGg9cWqBpkV4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwlRdhppltx_IqKU4p4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzkTDkQ-awHVmJ7hP54AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "liability", "emotion": "approval"},
  {"id": "ytc_Ugy3T7MnAgI9agOfqtB4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugxj2WNCaKkfb6AyEEN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxcpfWkD8iyRWA3rW54AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
```
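A raw batch response like this can be turned back into per-comment coding records for lookup by comment ID, which is what the inspection view above does. The sketch below is a minimal illustration, not the tool's actual implementation: the `Coding` dataclass and `parse_batch` helper are hypothetical names, and the field set (responsibility, reasoning, policy, emotion) is taken from the sample response itself.

```python
# Hypothetical sketch: parse a raw batch response into records keyed by
# comment ID. Field names are taken from the sample JSON above; everything
# else (class/function names) is illustrative only.
import json
from dataclasses import dataclass

# Two entries copied from the raw response above, standing in for a full batch.
RAW = (
    '[{"id":"ytc_UgyOrZ_M_PMVAo0JyKN4AaABAg","responsibility":"none",'
    '"reasoning":"consequentialist","policy":"none","emotion":"indifference"},'
    '{"id":"ytc_UgwlRdhppltx_IqKU4p4AaABAg","responsibility":"none",'
    '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]'
)

@dataclass
class Coding:
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

def parse_batch(raw: str) -> dict[str, Coding]:
    """Decode a batch response and index the codings by comment ID."""
    return {
        r["id"]: Coding(
            comment_id=r["id"],
            responsibility=r["responsibility"],
            reasoning=r["reasoning"],
            policy=r["policy"],
            emotion=r["emotion"],
        )
        for r in json.loads(raw)
    }

codings = parse_batch(RAW)
```

With such an index, the "Look up by comment ID" feature reduces to a dictionary access, e.g. `codings["ytc_UgwlRdhppltx_IqKU4p4AaABAg"].emotion`.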