Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Bro... chat gpt can be useful yes but. It is not very smart, if your asking abou…
ytc_UgzC5ci0o…
Istg Anthropic makes the worst possible statements in public and tons of ai hype…
ytc_UgyZIbK3x…
i m a proffesional mobile cleaner. so the ai robot should drive and clean everyw…
ytc_Ugyl0Pmfi…
All forms of AI are statistically based and depends on input sample size and var…
ytc_UgyxEB2it…
This looks like a Mortal Kombat fatality from that robot character with the drea…
ytc_Ugzk91Bys…
I doubt even unemployed new grads think this unless they suck at coding. I've …
rdc_moxjds9
Waymo's are great when they don't do that. I never experienced that much drama …
ytc_UgxE15Rxe…
If the writer used the artist’s original designs and drawings, then the writer s…
ytc_Ugw2iw9aj…
Comment
Putting government in control of AI it's our worst nightmare. There's a logic miss-assumption here. Elon seems to think government acts in public interest but it only act in it's own interest for power and profit. At least private companies respect more public interests or they won't make money of it. State are people, the worst people, politicians hungry for power and control. They will regulate this thing so nobody but themselves can use it to spy, coarse and enslave citizens. Elon's path will lead to more corporatism and lobby for companies colluding with government. There's no public interest in it and they don't care for your safety at all. Much better in private sector where we can see and participate AI developments.
Source: youtube
Topic: AI Governance
Posted: 2023-04-18T09:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
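The coding result above can be represented as a simple record. This is a minimal sketch, not the tool's actual data model; the class name `CodingResult` and its field names are assumptions that mirror the table's dimensions, and the comment ID is taken from the matching entry in the raw response (the one coded government / deontological / none / outrage).

```python
from dataclasses import dataclass

# Hypothetical container for one coded comment; field names mirror
# the "Coding Result" table dimensions plus the comment ID.
@dataclass(frozen=True)
class CodingResult:
    comment_id: str
    responsibility: str  # who is held responsible, e.g. "government"
    reasoning: str       # moral-reasoning style, e.g. "deontological"
    policy: str          # policy stance, e.g. "none"
    emotion: str         # dominant emotion, e.g. "outrage"

# The table above as a record:
result = CodingResult(
    comment_id="ytc_Ugw4HScAn3ssF5aw7kZ4AaABAg",
    responsibility="government",
    reasoning="deontological",
    policy="none",
    emotion="outrage",
)
```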
Raw LLM Response
[
{"id":"ytc_UgxaUozZ3YKL7YxT2p94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyxpTY08jrAL26BZ8V4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw4HScAn3ssF5aw7kZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwB2SKteKPATMzazbt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyISgMbvzoyWeGM3lN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyAH65Kcj5fkrKljm14AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxqJvkpFzIKlCyVNFN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxcmWmTGbrFN1022a14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw2uIZqbrEQVmh7-jN4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugy7gh9yXDRNg8WI22t4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]
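A raw response like the one above can be parsed and sanity-checked before the codes are stored. This is a minimal sketch under one stated assumption: the allowed values below are only those observed in this batch, not necessarily the full codebook, and `validate_batch` is a hypothetical helper, not part of the coding tool.

```python
import json

# Values observed in this batch; the full codebook may include more (assumption).
ALLOWED = {
    "responsibility": {"government", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject rows with missing keys or unknown codes."""
    rows = json.loads(raw)
    for row in rows:
        missing = {"id", *ALLOWED} - row.keys()
        if missing:
            raise ValueError(f"{row.get('id', '?')}: missing keys {missing}")
        for dim, allowed in ALLOWED.items():
            if row[dim] not in allowed:
                raise ValueError(f"{row['id']}: unknown {dim} value {row[dim]!r}")
    return rows
```

A row that passes validation can then be looked up by its comment ID, as the inspection view above does.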