# Raw LLM Responses
Inspect the exact model output for any coded comment.
## Look up by comment ID
## Random samples
- `ytr_Ugw0jEnRd…`: "@Richard_Potato Nah mate, I'm not buying your weird take AT ALL. Just one exampl…"
- `ytc_UgxMBLsBA…`: "I'm not a fan of data centers but jump from 97 DB to 140 DB is actually huge. It…"
- `ytc_UgybxO8SL…`: "I disagree with Bernie here, on how to approach the situation. We should strive…"
- `ytr_UgxihqEk6…`: "Im laughing at your country. Your president posts rascist ai slop while your gov…"
- `ytr_UgwJ2oATp…`: "Thank you for sharing your concern. It's important to remember that the purpose …"
- `ytc_Ugw1PSqCu…`: "Turns out if you use large data sets generated by humans to make your AI you als…"
- `ytr_UgxpQ1ONl…`: "@thewannabecritic7490Let me repeat, I'm not saying AI isn't a problem and isn't…"
- `ytc_UgwqiTpcS…`: "@TheDiaryOfACEO if you are so concerned why don’t you start a petition with so U…"
## Comment
The Trouble is corporations own most everything their spending many billions every year on AGI. Their bottom line is trillions that their investments can make. They don’t want to think in terms of safety. Elon Musk has been saying for years to slow things down, It’s not happening. If we don’t get AGI first someone else will. So there is your answer, follow the money. AI may be built by us but it won’t ever be controlled by us. I don’t understand how anyone can think we can control something that is 100 times more powerful then are smartest human. If we have the smartest human, say for example, this person can speak ten languages and is a master in math. This same person knows next to nothing about medicine. Well, AGI, say at the present time knows twice as much as this person in math. AGI also knows just about every language plus every other subject, and can out perform most every human in two years. We’re not controlling it know and it keeps gaining more and more data every day. Nvidia is talking about more and more data storage etc etc more memory needs billions of dollars and so it goes. I agree with Roman, big money speaks and their not going to allow safety as their first objective. I don’t think most billionaires can possibly think in such terms. Yes AGI could help humanity in so many wonderful ways, but it just might be as distrustful of us humans, as we are about ourselves. It’s hard not to see the mess us humans have got ourselves in, but you must be blind if you see it.
youtube
2024-06-11T02:2…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
## Raw LLM Response
```json
[
{"id":"ytc_UgzpLlNGFc3YJuNeRux4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwcmfqcgiBy3UKK_Dx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgxOpm8Brpy_RzBvFLB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx7WK25ydv724vvlfF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugze3e9pdWk1-9ARvlB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxXEKfbMZq_SFkchH14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzjIvlLUvmrQQGjE6d4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxTA72GRTokAuYcYEl4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy0wINNYX1bofNiRsZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz5pCVmEXXuxeS96mh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```