Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or browse the random samples below.
- "Godfather of AI: I Tried to Warn Them, But We’ve Already Lost Control! Geoffrey …" (ytc_Ugy6NlDM3…)
- "@leaderteammimikyu3024 if only they weren't snake oil that only works with old A…" (ytr_Ugw00HgZK…)
- "Remember that in North America, capitalism is in _absolute_ control of society. …" (ytc_UgwuSBbtc…)
- "AI will never replace humans. Period. Just like when they said robots would repl…" (ytc_UgysIG_c2…)
- "Yes they do deserve rights far more than humans, ai is objective flawless and lo…" (ytc_UgwCZM29Y…)
- "Artificial intelligence is demonic… It is like a modern day Ouija board with all…" (ytc_UgxPihM2Y…)
- "why do you think that ai will destroy humans and what does ai have to gain from …" (ytr_UgzLvJxw9…)
- "unfortunately, you are incorrect related to the radiology discussion in the begi…" (ytc_UgxMWvVO0…)
Comment
> Hii there Elon and Tucker! Beautiful interview. As usual, when it's about Radical Innovation, I always advice that consideration be given to the Capitalisation and Destructive Effects of the Radical Innovation in order to fix (reduce) the Destructive Effects and improve the Capitalisation Effects.
> Making the same with AI as well.
> II. Regulation of AI, it's a good idea but it will be very hard to have that in a near future. However, agree that a constructive collaborative and cooperative framework is needed on the issue.
youtube · AI Governance · 2023-04-18T16:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxorcJT8XwqCWgpvEt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy7w15RlynwnrTsuYB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxRiT59zXJKa7BlvD14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxLJWbB5bdAXB0CkXV4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyWS7TouVb8-OpV1ep4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzS3ZC1ffAnq8uhiFJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgwPHku-3XxXYeUVLu94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzTkOYpTAAFQqHFUUF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzWHQ-xs_yR0NawdoN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgxK1Z0MW7rCk0qjthB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
```
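The lookup-by-ID workflow above can be sketched in a few lines: parse the raw LLM response (a JSON array of per-comment codes) and index it by comment ID. This is a minimal illustration, not the tool's actual implementation; the two records are copied from the sample response, and the field names follow that sample.

```python
import json

# Two records copied verbatim from the sample raw LLM response above.
RAW_RESPONSE = """
[
 {"id": "ytc_UgxLJWbB5bdAXB0CkXV4AaABAg",
  "responsibility": "distributed", "reasoning": "mixed",
  "policy": "regulate", "emotion": "approval"},
 {"id": "ytc_UgzS3ZC1ffAnq8uhiFJ4AaABAg",
  "responsibility": "developer", "reasoning": "virtue",
  "policy": "unclear", "emotion": "unclear"}
]
"""


def index_codes(raw: str) -> dict:
    """Parse the model output and key each coded record by its comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}


codes = index_codes(RAW_RESPONSE)

# Look up one comment's coded dimensions by its ID.
rec = codes["ytc_UgxLJWbB5bdAXB0CkXV4AaABAg"]
print(rec["policy"])   # regulate
print(rec["emotion"])  # approval
```

Indexing by ID makes each coded comment directly addressable, which is what the lookup field on this page relies on; any record missing from the response simply raises a `KeyError`.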