Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Please. AI isnt going to be anything until about another 20 years from now. What…
ytc_UgxXqESRz…
So now we have a Chinese bullshit generator that is just as effective as the ame…
rdc_m9fwgoh
@MASKEDB "I agree that life isn't fair and we should try to make it fairer. That…
ytr_Ugzbv-RVs…
“Universal basic income” is just a fancy way of saying everyone will be on welfa…
ytc_UgzViiJ4O…
It is so stressful for people being replaced by AI. It looks efficiency improvem…
ytc_UgxsqB3ga…
The crazy irony is that a lot of people nowadays have gotten those degrees with …
ytc_UgwkwuwTd…
I've spent the last week going through documentation for libpcap, writing a wrap…
ytc_UgydFmTm9…
I asked Google Gemini: is abortion killing a human
He said:
The question of wh…
ytc_UgzGbLqBP…
Comment
A lot of people claim that a general AI would be able to do so much, that a universal income system would be developed. But it seems more likely that whoever develops the AI first will consolidate more wealth and power than anyone else on earth, and it will always be difficult to convince the majority of politicians that universal income can be paid for by taxing those ultra wealthy who get rich off of all the people who can barely afford basic living expenses. I don't know if the sci-fi examples of the singularity AGI is even possible, and I don't think intelligence can be measured the way many tech CEOs describe it. Anyway, Niel had a physicist on Startalk just a day or two ago that had a lot to say about this and it was an excellent episode. I don't remember his name, but his book was a long title that started with More Everything... and its all about critiquing the rise of AI and other big tech billionaire goals, like going to Mars.
youtube
AI Moral Status
2025-12-08T23:5…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytr_UgxToSmdUI55Ar7oCyN4AaABAg.AKzeOlz8MfYAL1hz6Eq0gN","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxToSmdUI55Ar7oCyN4AaABAg.AKzeOlz8MfYAL4D4tDlnkC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_UgxCPSoh3LipNk7QAet4AaABAg.AKyWbPbFs_dAKykwDf9bo1","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytr_Ugxc4B6bCl5g9HaoPgl4AaABAg.AKyKkxQx2txAKyL7qS3Dmr","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytr_UgyWjAIrzOVVmRWx1Qh4AaABAg.AKy-ksyCO6wAKz9w5j_mYj","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytr_UgyWjAIrzOVVmRWx1Qh4AaABAg.AKy-ksyCO6wAKzC6B_qDAH","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_UgyWjAIrzOVVmRWx1Qh4AaABAg.AKy-ksyCO6wAKziGO9VFoI","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytr_UgyWjAIrzOVVmRWx1Qh4AaABAg.AKy-ksyCO6wAL0I-irX6yR","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytr_UgxwELpb3zk4KZ5kEjJ4AaABAg.AKxu9YWA1gNAQUtA5Tkv5p","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytr_UgzO4z7a2S9DzFtR3dt4AaABAg.AKxplsvWF7tAKy9M7E7PSU","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
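The raw LLM response above is a JSON array of coding records, one per comment ID, with one value per dimension (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and validated is shown below. The `CODEBOOK` sets are assumptions inferred only from the values visible in this dump; the real codebook may define additional categories, and `validate_coding` is a hypothetical helper, not part of any tool shown here.

```python
import json

# Allowed values per coding dimension. NOTE: assumed from the values
# that appear in this dump; the actual codebook may be larger.
CODEBOOK = {
    "responsibility": {"none", "ai_itself", "company", "distributed", "unclear"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "unclear", "regulate", "liability"},
    "emotion": {"indifference", "fear", "outrage", "approval", "mixed", "resignation"},
}

def validate_coding(raw_response: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed coding records.

    A record is kept when it has an "id" field and every dimension holds
    a value allowed by CODEBOOK; anything else is silently dropped so a
    single malformed record does not discard the whole batch.
    """
    records = json.loads(raw_response)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in allowed for dim, allowed in CODEBOOK.items()):
            valid.append(rec)
    return valid
```

For example, a batch containing one record with an out-of-codebook value (`"responsibility": "bogus"`) would come back with only the conforming record retained.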