Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_UgyirUhzh…: "Promised in 2017, now it starts where waymo started at 2018... must admitt... pr…"
- ytc_UgyHPzwMx…: "You could replace "AI" with almost any other recent technological advancement an…"
- ytc_Ugz6f0Pue…: "In a world where every worker is replaced by robots and AI... well... that just …"
- ytr_Ugxa0UvpH…: "yea... currently the only real application of generative AI is just farming con…"
- ytc_UgzjCZe_1…: "FULL STOP AT SUSTAINABILITY. We are "peak oil'ers". We are about securing a sust…"
- ytr_UgxBuYNGA…: "@Milanismo-gx7ii Here's a response from chatgpt to the question. A simple ques…"
- ytr_Ugyd2ax3Q…: "Owning an AI model that is either cutting-edge or kinda sorta open-source, you'l…"
- ytc_UgxeupxhE…: "If a business can replace its entire staff with AI, nothing stops the business i…"
Comment
Considering the observations of the creator of Safety AI, it’s important to note that we probably won’t have a single AGI, but rather several different AGIs. There will be Google’s AGI, the Chinese state’s AGI, ChatGPT’s AGI, the United States of America’s AGI, and many others. Each of these will try to advance its own interests, making it unlikely that any one AGI could take total control.
Furthermore, we shouldn’t forget that there are brilliant minds whose creativity will be hard for any AGI to match. Our ability to evolve is such that we might even develop technologies, like brain prosthetics, that could further amplify our cognitive abilities.
Although the data may seem concerning, I believe that, while challenging, the situation is still manageable.
Source: youtube | AI Governance | 2025-12-04T06:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugxi0E054Raia6B-_ft4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxLV2MZQNAH93KHS-l4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugxkujqgm-QJwBPJuTh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwKWrIqQoAv8H3LavR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxbGGqj7wZIEdxz8Ox4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzweM6yC1sSLzFclpJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxwiSYIIsVjiH09Idh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzdBZLTzGdLiN9cMkB4AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzDoqwPAfH4zlvtgnp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw8U1Xe5B0y4qxqVJt4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"}
]
```
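Since the raw LLM response is a JSON array of per-comment codings with the four dimensions shown in the Coding Result table (responsibility, reasoning, policy, emotion), retrieving the coding for a given comment ID is a simple parse-and-filter step. The sketch below is an illustrative helper, not part of the tool itself; `lookup_coding` and the single-record sample payload are assumptions for the example.

```python
import json

# Hypothetical raw response in the same shape as the batch above:
# a JSON array of records keyed by comment "id", with the four
# coding dimensions as string fields.
raw_response = """
[
  {"id": "ytc_UgxLV2MZQNAH93KHS-l4AaABAg",
   "responsibility": "distributed",
   "reasoning": "consequentialist",
   "policy": "unclear",
   "emotion": "indifference"}
]
"""

def lookup_coding(raw, comment_id):
    """Parse a raw coding response and return the record whose
    "id" matches comment_id, or None if it is absent."""
    records = json.loads(raw)
    return next((r for r in records if r["id"] == comment_id), None)

coding = lookup_coding(raw_response, "ytc_UgxLV2MZQNAH93KHS-l4AaABAg")
print(coding["emotion"])  # → indifference
```

Returning `None` for a missing ID (rather than raising) makes it easy to flag comments the model skipped or whose IDs it garbled, which is a common failure mode when coding batches of comments.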