Raw LLM Responses
Inspect the exact model output for any coded comment: look up a comment by its ID, or pick one of the random samples below.
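As a minimal sketch of what that lookup could look like, assuming the raw responses are stored as JSON batch files shaped like the array under "Raw LLM Response" at the bottom of this page; the directory name and file layout are placeholders, not the tool's actual storage:

```python
import json
from pathlib import Path

def load_codings(batch_dir: str) -> dict[str, dict]:
    """Index every coding record by its comment ID.

    Assumes each *.json file in batch_dir holds a JSON array of records
    shaped like the raw response shown at the bottom of this page.
    """
    index: dict[str, dict] = {}
    for batch_file in Path(batch_dir).glob("*.json"):
        for record in json.loads(batch_file.read_text()):
            index[record["id"]] = record
    return index

# Hypothetical usage: the directory name is a placeholder; the ID is the
# first one appearing in the raw response below.
codings = load_codings("raw_llm_responses/")
print(codings.get("ytc_UgwytkjWRz4txk43RDZ4AaABAg"))
```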
Random samples

- ytc_Ugwf0AdTh…: Thank you Kirk and your son James, very good content and information. Wow. AI …
- ytc_UgweGkWNq…: In a future world the Crown Vic will be subject to a pit manoeuvre from a passin…
- ytc_Ugw7s0DQd…: Naw so true I got chills when I sea the I saw you character ai the unholy things…
- ytc_UgwcDVooM…: Bruh, the point it's not technology, the point Is that if you're an AI """artist…
- ytc_UgyB1404b…: Artist will survive i am a artist too ai îs just a robot who thinks îs better ar…
- ytc_Ugxodl9Fe…: Water consumption, loss of farm land, higher electric bills, higher water and se…
- ytc_Ugz-q3PTX…: Imagine an AI that is more intelligent than all humans combined. Imagine if thi…
- ytr_UgyoZfkoC…: I totally get where you're coming from! AI can definitely seem a bit unsettling …
Comment
Your argument about prioritizing control over the expansion of AI is thoughtful and important, and I agree the risks deserve serious attention. But I have one question that keeps coming to mind.
If responsible nations and organizations slow development in the name of safety, what happens if less responsible actors simply ignore those limits and continue advancing the technology anyway?
It reminds me of the classic gun-control dilemma: you can restrict law-abiding citizens, but criminals may still obtain weapons regardless of the rules. In that situation, the restrictions mainly affect the people already willing to follow them.
So if the “responsible world” pauses or restrains AI development while others do not, how do we prevent creating a power imbalance where the least regulated actors end up with the most advanced systems?
How are you going to solve that?
Source: youtube · Topic: AI Governance · Posted: 2026-03-16T05:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
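For context, a sketch of how one parsed record might be flattened into the dimension/value rows shown above; the field order and the coded-at timestamp handling are assumptions, not the tool's confirmed behavior:

```python
from datetime import datetime, timezone

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def coding_rows(record: dict) -> list[tuple[str, str]]:
    """Flatten one coding record into (Dimension, Value) rows like the
    table above, appending a coded-at timestamp."""
    rows = [(dim.capitalize(), record[dim]) for dim in DIMENSIONS]
    rows.append(("Coded at", datetime.now(timezone.utc).isoformat()))
    return rows
```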
Raw LLM Response
```json
[
  {"id":"ytc_UgwytkjWRz4txk43RDZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyDQ_Q9wQI37Ckr39t4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzs1Dnhv1nAB7lhcUl4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwNSBBqcpEepOavDxR4AaABAg","responsibility":"government","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugzw4rnrrKUPQplRV1V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwZz0UeIxQ-1Z-FU0R4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgywmdzGtrV06GWdJh14AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyGXKLpZniJ36Seu5J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugzr_dGza4U624ENI3t4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy2fT--RinN6azFL9N4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
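Because the model returns free-form text, each batch is worth validating before it is merged into the dataset. A hedged sketch follows; the allowed values are inferred solely from the outputs visible on this page, so the real codebook may define additional categories:

```python
import json

# Allowed values inferred from this page's outputs (assumption: the real
# codebook may include categories not seen here).
CODEBOOK = {
    "responsibility": {"developer", "government", "company", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"fear", "outrage", "mixed", "indifference", "approval", "resignation"},
}

def validate_batch(raw: str) -> list[str]:
    """Parse a raw LLM response and report records that fall outside the
    codebook or lack a plausible comment ID (ytc_/ytr_ prefix)."""
    problems = []
    for record in json.loads(raw):
        rid = record.get("id", "<missing>")
        if not str(rid).startswith(("ytc_", "ytr_")):
            problems.append(f"{rid}: malformed comment id")
        for dim, allowed in CODEBOOK.items():
            if record.get(dim) not in allowed:
                problems.append(f"{rid}: {dim}={record.get(dim)!r} not in codebook")
    return problems
```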