Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- “as long as you’re not having ai write the whole book for you i see no issue with…” (ytc_UgxkYvNCB…)
- “I enjoy making AI art. But I also know AI cannot pay taxes or buy products.…” (ytc_UgyV35TLY…)
- “2:26 For those who don't know, the AI act in Texas used as a way to regulate any…” (ytc_Ugw0_GdPz…)
- “I continued to iterate on these thoughts. No, the rich would remain rich, but th…” (ytr_UgzjiLZpG…)
- “they sure do. and thats wild tech. but we have design. also amazon is mostly au…” (ytr_UgxvYR88H…)
- “I think this argument is mute. We all know another Carrington event will happen …” (ytc_UgwlhicE9…)
- “Do you really think we can achieve the kind of outcomes you’re talking about wit…” (ytr_UgxPXMdza…)
- “This is why we should be concerned with AI systems and the existence of idiots i…” (ytc_UgxYMLIrY…)
Comment
56:08 Edit: Later on ya'll reference Elon saying he didnt want to create terminator but realized he choice was either be a player or be kn the sidelines.
To adress the question of why would we continue to build ai knowing there is at least 20% odds it kills us all, I would like to point out the context of the world we live in. Ai is clearly an arms race, this arms race has every country that can play the game playing. Choosing not to build ai is akin to choosing not to develop the atomic bomb because of the possibility of destroying the world. One person taking the moral stance not to build ai because they fear the possibility of killing everyone does not stop anyone else from choosing to develop ai anyway. 20% risk that is in your hands seems like the better option.
Source: youtube · Video: “AI Moral Status” · Posted: 2026-01-27T03:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |

Coded at: 2026-04-26T23:09:12.988011
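
The codebook behind these four dimensions is not spelled out on this page; the value sets below are inferred from the raw LLM responses shown further down, plus the `unclear` fallback seen in the table above. A minimal sketch of the record schema, with hypothetical type names:

```python
from typing import TypedDict

# Value sets inferred from the raw LLM responses on this page;
# the real codebook may include labels not observed here.
RESPONSIBILITY = {"developer", "user", "government", "ai_itself", "unclear"}
REASONING = {"consequentialist", "deontological", "mixed", "unclear"}
POLICY = {"regulate", "ban", "liability", "industry_self", "unclear"}
EMOTION = {"fear", "indifference", "mixed", "unclear"}

class CodedComment(TypedDict):
    id: str              # e.g. "ytc_Ugx…" (comment) or "ytr_Ugx…" (reply)
    responsibility: str  # one of RESPONSIBILITY
    reasoning: str       # one of REASONING
    policy: str          # one of POLICY
    emotion: str         # one of EMOTION
```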
Raw LLM Response
[{"id":"ytc_UgxSFCO02UrFuSpjfAZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxF4kpWe0I7ZJJbpEx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzLZBUlP_WkIYoHIGZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugzi8TOcX6LUuErsqEN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyIv1-i443K6xz3Rg14AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxBJBT7wHYm_0MkzVF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy1hEJ2nIIydyC3QnV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgyXB-fI3s41Zj-aXeJ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzDITD15vIXo4o9ADR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgxjBCORjkNJr1CjiCF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"})