Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "If ai is intelligent at all it will figure out a way to eliminate evil, if evil …" (ytc_UgzESzavb…)
- "So wait you're pissed off that people are finally being able to express themselv…" (ytc_Ugyles236…)
- "Funny how these conversations only affect every day people but not the technocra…" (ytc_Ugx0TVPxZ…)
- "Avoid all these AI's dangerous. Would you take a forklift to the gym because it …" (ytc_UgxqS5pkU…)
- "I really want to get art for the characters I am writing. But I don't have the m…" (ytc_UgyAfoIri…)
- "AI is just like a psychopathic person therefore, with no feelings, no empathy, n…" (ytc_UgwRZhz1l…)
- "OH how ChatGPT became Walter White “I AM the danger” in record time. Just a few…" (rdc_o7wecv1)
- "I've even seen some veteran artists (probably around their 40s and 50s) supporti…" (ytc_UgzCajW8E…)
Comment (youtube, 2025-06-07T03:5…)

> AGI (human-like AI) is officially a buzzword — there is no agreed upon single definition of it. Even what we call AI today, based on LLMs, was not projected to be what it turned into today. Presence of massive computing power through large cloud providers made this possible. It will hit a ceiling soon in 3-5 years without producing human-like AI. It will still be scary though. Another level of breakthrough is needed to unlock computational power to reach human-like AI. Quantum computing has potential to unlock that. This will be in 10+ years. In the meantime, there is not much to be optimistic about.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
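Each of the four coding dimensions takes a value from a small closed set. A minimal validation sketch, with the allowed value sets inferred only from the sample outputs on this page (the real codebook likely defines them authoritatively, so these sets may not be exhaustive):

```python
# Value sets observed in the raw responses on this page; an assumption,
# not the definitive codebook.
SCHEMA = {
    "responsibility": {"none", "company", "developer", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "regulate", "liability", "ban"},
    "emotion": {"fear", "outrage", "resignation", "approval", "skepticism"},
}

def validate_code(row: dict) -> list:
    """Return a list of problems with a coded row; empty list means it passes."""
    problems = []
    for dim, allowed in SCHEMA.items():
        value = row.get(dim)
        if value not in allowed:
            problems.append(f"{dim}={value!r} not in {sorted(allowed)}")
    return problems

# The row shown in the Coding Result table above passes:
row = {"id": "ytc_UgxbiQVTSNlsvisN2SZ4AaABAg",
       "responsibility": "none", "reasoning": "consequentialist",
       "policy": "none", "emotion": "fear"}
print(validate_code(row))  # []
```

A check like this catches the common failure mode of batch coding, where the model drifts off-schema (e.g. inventing a new emotion label) partway through a long response.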
Raw LLM Response
```json
[
  {"id":"ytc_UgwGr3gJFmjAn3cM5fl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwgMCVUt2G7xe2l8A54AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzcLY1zA-7BPxyhqp14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxLk3VDUn4wE1UhE2h4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugzdx7L0GyIjh0SrG_14AaABAg","responsibility":"none","reasoning":"mixed","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwMsSzdQr2BPE948ll4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxbiQVTSNlsvisN2SZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyGy2yb7MN6WPrnAzN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugxio3XMveQozMOs9rR4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"skepticism"},
  {"id":"ytc_Ugzg0_odCbW5QVGo56R4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
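The raw response is a JSON array with one object per coded comment, so inspecting "the exact model output for any coded comment" only requires parsing the array and indexing it by `id`. A minimal sketch of that lookup (the helper name `index_codes` is hypothetical; the field names come from the response above):

```python
import json

# A one-row excerpt of the raw LLM response shown above.
RAW_RESPONSE = """[
  {"id":"ytc_UgxbiQVTSNlsvisN2SZ4AaABAg","responsibility":"none",
   "reasoning":"consequentialist","policy":"none","emotion":"fear"}
]"""

def index_codes(raw: str) -> dict:
    """Parse a batch coding response and index the rows by comment id."""
    return {row["id"]: row for row in json.loads(raw)}

codes = index_codes(RAW_RESPONSE)
row = codes["ytc_UgxbiQVTSNlsvisN2SZ4AaABAg"]
print(row["emotion"])  # fear — matches the Coding Result table above
```

Keeping the dimension values alongside the verbatim model output, as this page does, makes it easy to audit whether the parsed codes faithfully reflect what the model actually returned.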