Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples (truncated previews with comment IDs):

- I remember watching something in BBC where they made an AI baby by making a digi… (ytc_UgxsGwnEt…)
- this is so unconfortable to think about. i propose DONT CREATE AI so we avoid th… (ytc_UgiENWsk-…)
- I say we have been making synthetic life for a long time. Too me life is materia… (ytc_UgyZI5tij…)
- Bernie: "AI could wipe out the working class!" Also Bernie: "Let me approve all … (ytc_UgyTSL8zf…)
- Without a "Moral Compass" Ai will likely cause more destruction and positively a… (ytc_UgyFUfJt_…)
- if you are an artist it is easy for you to spot an AI or not… (ytc_UgzWJL6Tt…)
- @sazeraeI work in an engineering field now, recent graduate. My department liter… (ytr_Ugz1TDCrM…)
- **“In this podcast, we’re talking about artificial intelligence and how it might… (ytc_UgwS3TYsQ…)
Comment
General AI is much much more dangerous than Elon glances here. An AI sophisticated enough could use your own greed, desires, tendencies etc against humanity. it could not only manipulate video and audio feed to gather information but also modify those in a relatively impossible to distinguish manner. it could take control of devices, monetary and financial systems, power-grids , robotics and manufacturing systems. it could overpower several armies globally mearly trough misinformation and miscommunication. imo as a species we have a choice nuclear armaments, ai or android robots. choose one .... very carefully. if there is 2 of them its a very high likelihood that we are F*ed . very very F...ed
youtube · AI Governance · 2023-04-22T07:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxYGJmsFNSnSzyCEZZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy3hT60tf0NKuRlrfx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy2TjYjf-JPQX2cNk54AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxP0hQye_0G8RCdxL94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwB_GgL2uW1pj8cVkt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwqF3rgi5pZQ_KGV494AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwdSZCVa6E6Ag8TM0B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugywkd6RwQBPetk-J7R4AaABAg","responsibility":"government","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxQ4ZsNvRdODlxguKp4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzyB7AptscPe_B8tDx4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"industry_self","emotion":"outrage"}
]
```
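A raw response like the one above can be parsed and sanity-checked before the codes are accepted into the dataset. The following is a minimal Python sketch, not the project's actual pipeline; the allowed value sets in `ALLOWED` are assumptions inferred only from the values visible on this page, and the `ytc_`/`ytr_` ID prefixes are likewise taken from the examples shown.

```python
import json

# Assumed coding scheme: value sets inferred from the values seen on this
# page, not from an authoritative codebook.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "government", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "unclear"},
    "policy": {"regulate", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "unclear"},
}


def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject records with out-of-scheme values."""
    records = json.loads(raw)
    for rec in records:
        # IDs on this page start with ytc_ (comment) or ytr_ (reply).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec.get(dim)!r}")
    return records


# One record from the raw response above, passed through the validator.
raw = (
    '[{"id":"ytc_UgxYGJmsFNSnSzyCEZZ4AaABAg","responsibility":"ai_itself",'
    '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]'
)
records = validate_coding(raw)
print(len(records))  # 1
```

Rejecting a whole batch on the first bad value is deliberate here: a single out-of-scheme label usually means the model drifted from the prompt, so re-coding the batch is safer than silently dropping one record.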