Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:

- `ytc_Ugy-y_EsS…`: "I'm not suggesting that ChatGPT doesn't have some blame to share, but overall I …"
- `ytc_Ugyi10kPH…`: "I am sure palantir/openai/cia/kgb/china/mossad/irgc/taliban are all shaking in t…"
- `ytc_UgzO7_9Qn…`: "11:25 i'm not an artist. in ways of knowledge i'm closer to an ai bro, and even …"
- `rdc_nc1it3i`: "I am of the mind the LLMs CAN’T get us there. It is completely different technol…"
- `ytc_Ugw8d3O8e…`: "Crazy realistic! If they can make robots this convincing, imagine the AI video g…"
- `ytc_UgwLpictD…`: "Does a homosexual robot perform better than a heterosexual robot? Is the chat bo…"
- `ytc_UgxQNTdBw…`: "If a.i. is learning from us, then im not scared that it will one day take over t…"
- `ytc_Ugw-BBNz2…`: "Ok that person was kinda insane and what they were saying sounded so weird but t…"
Comment
What a great conversation.
I was listening to a podcast yesterday about Universal Basic Income.
The gist of it was that it would be as low as the people/governments/companies in charge could make it.
Enough to live on (kinda) and just enough to avoid revolution and keep those in power, in power.
They talked about what Sam Altman said about UBI and its relation to A.I. taking jobs.
Yes, the jobs will disappear, but the huge increase in efficiency and productivity would mean that the things you need to live would be much cheaper.
He said "We'll make sure that you can afford to live".
Someone said that sounds more like a threat than a promise.
If they can make sure you can live, if they feel like it, they can also make sure you don't.
Source: youtube · Topic: AI Governance · Posted: 2025-06-27T13:1… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxL-mhhqf9gevy1GcR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwIBT6YSj4r3gh9aBt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy9rIy6Owjqa_GIVLN4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzkKKJ7I0wPAOsyu4h4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw3H496ImRpYQ7H8MJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"skepticism"},
  {"id":"ytc_UgzYGXKg3Tj-lbsVu-N4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy5PXGuu2e163spRBp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzt8EpWihiOvgckQvx4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwC0y5W3g9IDPHpgOZ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyadFtvDGk3mXt3Rnt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"indifference"}
]
```
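The raw response above is a JSON array with one object per coded comment, keyed by comment ID, with one value per coding dimension. A minimal sketch of how such a batch could be parsed and sanity-checked is shown below; note that the per-dimension value sets are only those observed in this sample, not necessarily the full codebook, and the function name is illustrative.

```python
import json

# Values observed in this sample batch; the actual codebook may define more.
OBSERVED_VALUES = {
    "responsibility": {"none", "ai_itself", "distributed", "developer", "company"},
    "reasoning": {"consequentialist", "deontological", "mixed", "virtue", "contractualist"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"indifference", "approval", "fear", "outrage", "skepticism", "resignation"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response into a lookup table keyed by comment ID."""
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        # Reject any value outside the expected category set for its dimension.
        for dim, allowed in OBSERVED_VALUES.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={row.get(dim)!r}")
        coded[cid] = {dim: row[dim] for dim in OBSERVED_VALUES}
    return coded

raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"liability",'
       '"emotion":"indifference"}]')
print(parse_batch(raw)["ytc_example"]["policy"])  # liability
```

Keying the result by comment ID mirrors the "Look up by comment ID" view: the coded record for any comment can be retrieved directly from the parsed batch.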