Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples, listed as comment ID and excerpt:

- ytc_Ugwp8jnxT…: 1:36:34 It seems pointless to put guard rails on Ai and creat safety measures, …
- ytc_UgzQsZXoM…: Since using AI, my go-to behaviour and approach has always been to be polite, co…
- ytc_UgwXAuQNg…: If you’ve ever played the video game Halo then you would definitely remember the…
- ytc_UgyaxZwW5…: can't wait for ai to help me skip the commercials from podcasts AND can't wait f…
- rdc_mva6p75: Seriously. What else would he be expected to say? "LLMs have limited profitable …
- ytc_UgwuJksOz…: Thanks! Excellent content and video! I learnt a lot and will put that into pract…
- ytc_UgwLXPMnI…: even though I agree that AI art is boring, but the piece of work doesn't become …
- ytc_Ugy7B5wuy…: Ai can copy and draw pictures but I think whatever I draw with my own hand on pa…
Comment
Is this entire conversation an exercise in futility?
Is it completely inevitable that AI will destroy humanity?
And if all these people knew for a fact that it would actually result in the ultimate destruction of humanity, do you think they would actually eradicate AI from the face of the earth?
I don't think they would. I think they will allow the utter destruction of humanity to take place and they will never under any circumstance eradicate AI or the technology that built it...
Platform: youtube · Topic: AI Governance · Posted: 2025-06-20T22:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[{"id":"ytc_UgyjMSwaIqD0frvFvYh4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
 {"id":"ytc_UgyEYDGtI4ZFBixDLYd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgwDaeQDwbz2ipzZY_R4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
 {"id":"ytc_UgyqSIm0wL4_OxmMFnl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
 {"id":"ytc_UgzA_2zA8Sbn5Qa2k7N4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgxN4GcDK6BqfBMqxuN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
 {"id":"ytc_UgxCaJ0jWrU8Qo03gCF4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_Ugy68hEqXGALEUIEQ_54AaABAg","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
 {"id":"ytc_Ugz61oS-yrhWyj_Ffkh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_Ugwj_EA6C9tSO7HaNap4AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"mixed"}]
```
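A batched response like the one above can be parsed and indexed by comment ID with a short helper. This is a minimal sketch, not the pipeline's actual code: the four dimension fields come from the JSON shown, but the function name and the "unclear" fallback for missing keys are assumptions.

```python
import json

# Abbreviated raw model response (two entries copied from the batch above).
raw_response = '''[
  {"id": "ytc_UgyjMSwaIqD0frvFvYh4AaABAg", "responsibility": "developer",
   "reasoning": "mixed", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgyEYDGtI4ZFBixDLYd4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]'''

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict[str, dict[str, str]]:
    """Parse a batched coding response and index it by comment ID."""
    records = json.loads(raw)
    indexed = {}
    for rec in records:
        # Keep only the expected dimensions; a missing key falls back
        # to "unclear" (an assumed convention, mirroring the codebook).
        indexed[rec["id"]] = {d: rec.get(d, "unclear") for d in DIMENSIONS}
    return indexed

codings = index_codings(raw_response)
print(codings["ytc_UgyEYDGtI4ZFBixDLYd4AaABAg"]["emotion"])  # resignation
```

With the full batch loaded, the "look up by comment ID" view is then a single dictionary access per ID.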