Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
And Sam you are wrong about so many things in this video primarily about how AI …
ytc_UgwtOul8u…
For whatever reason, most people seem to have no idea how fast this is moving an…
ytr_UgwUI7kik…
@VultureSkins also, if you assume generative AI generates every word or phrase i…
ytr_UgyOBiF0o…
Moral grey area. I'd say it's better to use a real photo as a reference though, …
ytr_UgxNfuDE7…
I agree with 90% of your points, but also there are times where I just want a qu…
ytc_Ugy_u-GjL…
The thing about AI's dark side is that their "progress" is still based on the op…
ytc_Ugyiarc7F…
People having this debate on a phone that has an account with a cellphone compan…
rdc_ohyf7be
I feel like humanity is like a male praying mantis knowing its death is imminent…
ytc_Ugxps1mS9…
Comment
As a background, Elon Musk knew about the dangers of unregulated AI long before this AI boom.
He was the one who started OpenAI many years ago, with the vision of AI safety while the technology was still at the cusp of development so that in the long-term it doesn’t end up detrimental to humanity.
OpenAI is the organization that created ChatGPT. It has been bought by Microsoft, and Elon Musk has disassociated himself from OpenAI. Elon now says OpenAI is far from the original vision he intended and is disappointed with what it has become.
Too bad humanity is late to realize they’re wrong and Elon was right all along.
youtube
AI Governance
2023-04-22T03:0…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugxdw6g3OIgZ7PMY15J4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxNlvVoc95CntF7t194AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzNh3T2SHiX14j_pJ54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwyBmSLWWTmGaU8Xkh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyb3PeG8sI4lq_v6aN4AaABAg","responsibility":"company","reasoning":"unclear","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgxnTa00XUXLi_b4XBR4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyzZTGenW9NXR3G4654AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzN-npA1BNeKWUeDIB4AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgzD1Qya2Ugo7JFs1Zh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugw2r9vFxnoKeQDRZ1Z4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
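The raw response is a JSON array, one object per coded comment, with the four coding dimensions shown in the result table. A minimal sketch of turning such a response into a lookup by comment ID, with basic validation: the `ALLOWED` vocabularies below are inferred from the sample rows only, not from the actual codebook, and `parse_coding_response` is a hypothetical helper, not part of this tool.

```python
import json

# Allowed values per dimension, inferred from the sample output above.
# The real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"developer", "company", "government", "ai_itself",
                       "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "industry_self", "unclear"},
    "emotion": {"fear", "outrage", "approval", "mixed", "unclear"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: codes},
    dropping rows with a missing ID or out-of-vocabulary values."""
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            continue
        # Missing dimensions default to "unclear" rather than failing.
        codes = {dim: row.get(dim, "unclear") for dim in ALLOWED}
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[cid] = codes
    return coded

raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"unclear",'
       '"emotion":"approval"}]')
print(parse_coding_response(raw)["ytc_example"]["emotion"])  # → approval
```

Keeping validation strict here is deliberate: LLM outputs occasionally drift outside the requested label set, and silently accepting a novel label would corrupt downstream tallies.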