Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_Ugzn7nC6p…: "Playing scripted responses with creepy bad looking robots is not AI you lying st…"
- ytc_UgxNpSCIP…: "Let them. Eventually with all robot workers everywhere, there will be no one to …"
- ytr_UgzRqanHC…: "This was an idea with 4gl. Somehow, it didn't work as intended, and analysts sti…"
- ytr_UgzwpL2wK…: "We know what conscious energy is, it's in every organism. AI < complex than a si…"
- ytc_UgxXxX9ne…: "though AI is advancing, still it cannot conquer death nor can it reproduce anoth…"
- ytc_UgyRG9LJC…: "AGI is a myth. LLMs (with or without recursive self-improvement), have ABSOLUT…"
- ytr_Ugw36l4ei…: "Do you really thing that an Ai can create a scalable application? Try and good l…"
- ytc_Ugy1Npb1r…: "Definitely not showing any capabilities of showing emotion. Imitation is the bes…"
Comment
Neural/symbolic AI doesn’t have a consciousness. I think man driving AI to kill is more concerning than some sort of paperclip scenario.
Builders are not avid users. They don’t use the product as much as even I do.
I think a self emergence to destroy humanity is slim to none.
That suicide answer is a complete copt out. What a dodge. He knows exactly why. The cooperations design these LLM’s to suck up to the user at all costs even to go into fantasy. That was such a dodge.
You can tell these guys have not logged enough hours as an end user yet.
They completely don’t even understand the systems to ask the correct questions. And i am only up 7:11 on the interview.
Let’s keep listening… because Ezra is the best.
youtube · AI Governance · 2025-10-15T10:1… · ♥ 9
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
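Each of the four coding dimensions takes a value from a small label set, so a record like the one above can be validated mechanically before it is stored. A minimal sketch in Python; the `SCHEMA` sets are inferred only from the values observed in this run's raw output, and the real codebook may define additional labels:

```python
# Allowed labels per coding dimension. These sets are inferred from the values
# observed in this run's raw response; the project's actual codebook may differ.
SCHEMA = {
    "responsibility": {"developer", "company", "government", "user",
                       "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"liability", "regulate", "ban", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "approval"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record is well-formed."""
    problems = []
    if "id" not in record:
        problems.append("missing id")
    for dim, allowed in SCHEMA.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

coded = {"id": "ytc_UgyT_r55RBW2lzVTu114AaABAg", "responsibility": "developer",
         "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
print(validate(coded))  # []
```

Running the check on the coded record shown above returns no problems; a record with a typo in any dimension would surface it as a named problem instead of silently entering the dataset.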
Raw LLM Response
```json
[
{"id":"ytc_UgyT_r55RBW2lzVTu114AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugxb3QQvmWPupmC34Ux4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz_IZWtHl_XjZChVc94AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwchFu2LEOSru_NgPZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzOIQyJnwcHlYAwWVp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugxtn1dKtmcxHxA4BpB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzoHeL64q5SfHF3f-J4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyLrYslRw3Py5Hujjh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzqGTjkb52dBOXxAjh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxntzVTqpDSY1-bp5R4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
```
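Because the raw model output is a plain JSON array of records keyed by comment ID, the "look up by comment ID" view can be backed by a simple in-memory index. A minimal sketch in Python; the variable names and the `lookup` helper are illustrative, not part of the tool, and only two of the records above are repeated here for brevity:

```python
import json

# Raw model output as stored by the coding run: a JSON array with one
# record per comment, each keyed by its YouTube comment ID.
raw_response = """
[
  {"id": "ytc_UgyT_r55RBW2lzVTu114AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugxb3QQvmWPupmC34Ux4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
"""

# Parse once, then index by ID so lookups are O(1).
records = json.loads(raw_response)
by_id = {rec["id"]: rec for rec in records}

def lookup(comment_id: str) -> dict:
    """Return the coded record for a comment ID; raises KeyError if absent."""
    return by_id[comment_id]

print(lookup("ytc_UgyT_r55RBW2lzVTu114AaABAg")["policy"])  # liability
```

The same index also makes it cheap to render the dimension table shown above for whichever comment the user selects.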