Raw LLM Responses
Inspect the exact model output for any coded comment by looking it up with its comment ID, or start from one of the random samples below.
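As a concrete illustration, the snippet below indexes a batch of coding records by comment ID so a single comment's output can be pulled up directly. It is a minimal sketch: the filename `raw_responses.json` is hypothetical, and it assumes each batch is stored as a JSON array of records like the one shown under Raw LLM Response below.

```python
import json

# Hypothetical filename; assumes each batch of raw model output is saved as a
# JSON array of coding records like the one shown under "Raw LLM Response".
with open("raw_responses.json", encoding="utf-8") as f:
    records = json.load(f)

# Index the codings by comment ID so one comment can be inspected directly.
by_id = {rec["id"]: rec for rec in records}

print(by_id.get("ytc_Ugz29XvWOZMVISS4Ted4AaABAg"))
# {'id': 'ytc_Ugz29XvWOZMVISS4Ted4AaABAg', 'responsibility': 'developer', ...}
```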
Random samples:
- rdc_muklbys: "I actually read the ChatGPT conversation you linked, and there is absolutely not…"
- rdc_mdkqcww: "Conciousness doesn’t require sentience. I told ChatGPT to be aware of itself and…"
- ytc_Ugwc_BZwG…: "I think he's sensationalizing this to bring an important issue to the public. He…"
- ytc_Ugx7GH7eD…: "The 4th is easy because if you see his name, (His Name Is Suarez) then it's real…"
- ytc_UgxU0rvog…: "The CEO’s exaggerate but the layman underestimates ai when all he knows is the f…"
- ytc_Ugyebo4Br…: "He's permanently shitt*ng on Musk. Don't think the other AI guys are much differ…"
- ytc_UgzpBbZxC…: "I do think the best way is a global committee, each nation represented by 1 scie…"
- ytr_UgwCv5H0m…: "CPTANT yeah, I bet you a google or Tesla self driving car would of done better…"
Comment
> Regulating the AI so he can create an AI, not because he fears AI not being under control for humanity, but not being under his control.
>
> I am sorry for the FANBASE, but here is not protecting humankind what we are witnessing, is a battle between owning the most powerful tool ever created.
>
> All break through technologies will always bring social changes. You do not want to have a tool that allows people to have more time and automate their use of time. What you want is provide that tool, so people do not own their freedom. At the moment there is not any AI that will probably allow that for humankind, unfortunately not even Openai. But Openai is so far the fairest tool around AI at the moment.
>
> Remember that some people are convinced that we should have a chip in our brain to be more capable... just link projects and goals. And you will see a clear strategy of why stopping Openai research.
Source: youtube
Topic: AI Governance
Posted: 2023-04-23T16:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
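The four coding dimensions can also be captured in a small record type for downstream analysis. The sketch below is illustrative only: the value sets are limited to those observed in this section, and the actual codebook may define additional categories.

```python
from dataclasses import dataclass

# Values observed in this section; the full codebook may define more.
RESPONSIBILITY = {"developer", "user", "ai_itself", "distributed", "unclear"}
REASONING = {"virtue", "consequentialist", "deontological", "unclear"}
POLICY = {"regulate", "liability", "none", "unclear"}
EMOTION = {"outrage", "fear", "approval", "indifference", "mixed"}


@dataclass
class Coding:
    """One coded comment: the four dimensions shown in the Coding Result table."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def is_valid(self) -> bool:
        # Flag any value outside the observed vocabulary for manual review.
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.policy in POLICY
                and self.emotion in EMOTION)
```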
Raw LLM Response
[
{"id":"ytc_UgzWGuv78LsTXSSj4Fl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxeRzO26_Qn-4PFXg94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz29XvWOZMVISS4Ted4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgztwTBfItMrrCt8wiN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxdQMkuB8V8PgAVCS14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzY0aboggnWPRRnO5R4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugws_uGLiq4bMmziuap4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugzolb72vRXueXzV1oh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugzz-vjQ6OXz8mBDSEB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy5lcauBGuY7YwRzRh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"indifference"}
]
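Because the model returns each batch as a bare JSON array, a thin parsing and validation layer is enough to turn the raw response into usable records. The sketch below is an assumption about how such a step might look, not the project's actual parser; the bracket-extraction fallback simply guards against stray text around the array.

```python
import json

REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def parse_batch(raw_text: str) -> list[dict]:
    """Parse one raw model response, expected to be a JSON array of codings."""
    try:
        return json.loads(raw_text)
    except json.JSONDecodeError:
        # Fallback: extract the outermost array if the model wrapped it in text.
        start, end = raw_text.find("["), raw_text.rfind("]")
        if start == -1 or end == -1:
            raise
        return json.loads(raw_text[start:end + 1])


def check_batch(records: list[dict]) -> list[str]:
    """Return the IDs of records missing any of the expected keys."""
    return [r.get("id", "<missing id>") for r in records
            if not REQUIRED_KEYS <= r.keys()]
```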