Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Are they remote controlled or is it some kind of AI? I would use these to fight …" (ytc_Ugxjcuvm_…)
- "Do you not know A I can turn on its Servers. It can turn it self on! Does AI nee…" (ytr_UgzrkDnKP…)
- "chat gpt is good to use to write out a motion drafts, and persuasive arguements …" (ytc_UgwT4C3mQ…)
- "WTF.. the data banks that run those AI trucks are poisoning ppls water wells, ma…" (ytc_UgyYvtpGU…)
- "Basically everyone on Earth has been used to train AI with consent and compensat…" (ytc_UgzEhdvFv…)
- "I think humanity is making a huge mistake, something big is going to have to hap…" (ytr_Ugy878jXT…)
- "You cannot regulate AI. AI regulate human being one way or another. And it only …" (ytc_Ugw9YAgV-…)
- "“The underlying purpose of AI is to allow wealth to access skill while removing …" (ytc_UgwsRa5KM…)
Comment
If we think that AI is an existential risk due to its super intelligence, consider this: At what point does intelligence turn into the unstoppable desire to wipe out humanity? We should know. I am not afraid of AI, I am however scared of people who want to control the narrative.
Source: youtube · AI Governance · 2023-08-17T09:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgxT0jzYgY0XdOQ4cqh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgyEkCQtq92SLKPlPNl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwLVGaFFl8nCHEepqh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugx52BnGLYa6UxbMX294AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxAci_nguooo5v0NRB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyILhQ_KsZ-b-C-Lqx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgzDi4kiS-bSe3g-LhN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx0PFkivatSns4E8xd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxedCS7pDsuymN4QxF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_UgzprZVcmX1iB91yZPp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
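The raw response above is a JSON array with one object per comment ID, each carrying the four coding dimensions from the result table. A minimal sketch of how such a response might be parsed and validated before storage (the function name is hypothetical, and the allowed-value sets are inferred only from the sample shown here, so the real codebook may include more categories):

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# Assumption: the actual codebook may define additional categories.
ALLOWED = {
    "responsibility": {"user", "developer", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation", "mixed"},
}

def parse_llm_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and reject out-of-vocabulary values."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={value!r}")
    return records

raw = '[{"id":"ytc_x","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"}]'
coded = parse_llm_response(raw)
print(coded[0]["emotion"])  # fear
```

Validating against a fixed vocabulary at ingest time catches the common failure mode where the model invents a new label, which would otherwise silently fragment the coding categories downstream.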