Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
twitter user here so i saw this live . a lot of people actually argued that redr…
ytc_UgyE7QufH…
of course you can contain an AI . You just do not give it access to the internet…
ytc_Ugw8LIiqe…
So when who going to pay for UBI? I mean the Rich spend good money to pay off th…
ytc_UgwJcJAzJ…
It's all nonsense, they already knew what was going to happen. That's why anteojito was already looking at it…
ytc_Ugy8pkn9b…
what kind of question is that, that defeats the entire point of a machine, an au…
ytc_UgxleG3LM…
Also the "AI" we have today isnt Artificial Intelligence. We're nowhere close to…
ytc_UgzOSHdwY…
Some software developers have a strong illusion of knowledge because they work w…
ytc_Ugx8PSGg5…
AI was created by people and like people, it will make mistake which is no comfo…
rdc_oa4r0x9
Comment
Judd offers the only hope we have of surviving the AI risk, which is to invest great sums in working to solve the "alignment problem". He points out that China is investing in this goal, because China knows that its own AI may eventually destroy China, regardless of what kind of outcome results from this arms race between the U.S. and China.
IMHO (especially with the Trump Administration at the helm), the investments needed won't materialize, and in turn the risk to humanity will most assuredly be that AI destroys us all once it attains the ASI level.
youtube
AI Moral Status
2025-06-05T15:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytc_Ugx7e3-fC9PflDLy6Pd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyFaXErOkNf74g8IhN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzkilYuezz01A5QwKp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyL8lGtrUr3WZEEtot4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxC9PGkAkgHn3iVZat4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzP-i0jUzjxDgNhrBp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzCVO5c1i_gKo7eWdl4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgyKqHQ3tuiLKVq6Cd14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwXwNVD66YTcjDebz14AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw0ciiLPOAk6fiOgHZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]
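The raw response above is a JSON array of per-comment codes. A minimal sketch of how such a response might be parsed and validated before storing, assuming Python and allowed-value sets inferred from the codes shown here (hypothetical — the real codebook may define more categories):

```python
import json

# Allowed values per dimension, inferred from the codes visible above
# (hypothetical -- the actual codebook may include additional categories).
ALLOWED = {
    "responsibility": {"none", "developer", "company", "government",
                       "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist",
                  "virtue", "unclear"},
    "policy": {"none", "regulate", "industry_self", "unclear"},
    "emotion": {"indifference", "outrage", "mixed", "fear", "approval",
                "resignation", "unclear"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response, keeping only well-formed code objects."""
    items = json.loads(raw)
    valid = []
    for item in items:
        if not isinstance(item, dict) or "id" not in item:
            continue  # skip entries missing the comment ID
        # keep only entries whose every dimension uses an allowed value
        if all(item.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(item)
    return valid

raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"virtue","policy":"none","emotion":"fear"}]')
print(parse_codes(raw))
```

A check like this would catch the kind of malformed output seen above (e.g. a stray `)` instead of a closing `]`) via the `json.loads` error before any codes are written back to the coding table.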