Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "HAHAHAHAHA... WHAT? WHAAAAAAAAAAAT?!?! This is what the debate has evolved to? I…" — `ytc_UgxmV1o6n…`
- "@aesopsock7447There is great potential for computation to help humanity, it can …" — `ytr_Ugw8zZhO0…`
- "The entire purpose of capitalism was technological advancement. First weapons, n…" — `ytc_UgxFDq26o…`
- "If writing a prompt to an AI and having it spew out an image made you an artist,…" — `ytc_UgwIFIi2k…`
- "I am dead set against AI for the above mentioned reasons and much more, especial…" — `ytc_Ugzt4TWEn…`
- "Office hours are 8-16, but otherwise there is a lot of variation. Average vacati…" — `rdc_dv0ov2y`
- "AI can never truly create something new, it can only replicate the patterns fro…" — `ytc_UgzrNYe0K…`
- "So whoever invented it and put it out there, they don't care about the people th…" — `ytc_Ugy8RZjNM…`
Comment
In my opinion, LLM-based AIs can't reach AGI. They will always lack their own creativity. Actually, I think they'll plateau very soon, if they already haven't and the progress only comes from better integration. But their capability to learn quickly and store knowledge immediately is already very powerful and definitely can cause an immense damage if someone releases it (intentionally or unintentionally) to some critical system. Because AI is sucking in even the malicious ideas humans are feeding it with and without the morale...
Source: youtube · Topic: AI Governance · 2026-03-21T15:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugyp0I2usYT0GC7x5xV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwyA7v3P_2lJEPWw1F4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwDEgCu0GlOmHrfs2p4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwBV7ze01zSJzgWsb54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyxysSxSJVJGKhYbWp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugws0vRXy_AXojH1WTF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyKi9_vtgIPKfpzgZx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugxlw2FFBiUu0ygxYcB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxp_unGNnky7Th3GlJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugylbebws2UGC-q0K014AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
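A response like the one above should be validated before the coded rows are stored, since an LLM can emit malformed JSON or out-of-vocabulary labels. Below is a minimal sketch of such a check; the allowed values per dimension are inferred from the sample output above, and the real codebook may define additional categories. The function name `validate_batch` is illustrative, not part of the tool.

```python
import json

# Allowed labels per dimension, inferred from the sample response above
# (assumption: the actual codebook may include further categories).
ALLOWED = {
    "responsibility": {"distributed", "user", "company", "government",
                       "developer", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"indifference", "mixed", "outrage", "approval", "fear", "unclear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed, in-vocabulary rows."""
    rows = json.loads(raw)  # raises ValueError on malformed JSON
    valid = []
    for row in rows:
        # Every row needs a string comment ID.
        if not isinstance(row.get("id"), str):
            continue
        # All four dimensions must carry an allowed label.
        if all(row.get(dim) in labels for dim, labels in ALLOWED.items()):
            valid.append(row)
    return valid

# Hypothetical one-row batch for illustration:
sample = ('[{"id":"ytc_x","responsibility":"company","reasoning":"virtue",'
          '"policy":"none","emotion":"mixed"}]')
print(len(validate_batch(sample)))  # 1
```

Rows that fail the check are silently dropped here; a production pipeline would more likely log them for re-coding rather than discard them.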