Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment by its ID, or pick one of the random samples below to inspect it.
- "Sounds like the AI is a comedian. Next time actually ask it about a place it kno…" — ytr_Ugx6MYU2r…
- "Considering the ones useing surveillance are mostly Marxist i think surveillance…" — ytc_UgxnbaeEu…
- "The issue is centient being is that we evolve/change and we have emotions like …" — ytc_UgxZFYaHg…
- "1:45:10 the answer is YES: Hold up... This AI is having an existential crisi…" — ytc_UgwHgKmDv…
- "I wouldn’t worry about ai too much given that Israel is bombing Iran and Ukrain…" — ytc_Ugx1mBjUn…
- "Well, A.I.'s already lie. Consistently. Assuming it is able to infer that it isn…" — ytc_UgxU2aai-…
- "The presupposition I notice in this talk is that consciousness is even a possibi…" — ytc_Ugy5mppCv…
- "Do not download files from ChatGPT. It took over 3 of my devices. Had to destroy…" — ytc_UgyGWpjTH…
Comment
I constantly hear that the safe way to control AGI when it comes about will be to make sure it’s aligned with human values and ideas.
You know, Stalin and Hitler were people. They had values and ideas…
Assuming humans know what is right and wrong is preposterous. We don’t, we never have. To align artificial intelligences with human values and ideas is insane. It’s asking, no begging for conflict.
The only entity with any agency over AGI or ASI, will be AGI or ASI itself. Humans are nothing more than fumbling monkeys in comparison.
youtube · AI Governance · 2024-04-18T19:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugzetiop39l6WX4GZI54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw566Qs3Q-V1eam3I54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw0rExiCYEJeD4HAsB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxH8NAWXDYNQmSc7rp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwwGcXkpCQV7N6KagN4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugyp8OlELh7er5nsBUV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxnm588c-YZ1lyLBlt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzFBYVEzLdwyvT8nsV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwFGKtVn8-FRq2Dpll4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxu7leEA9xcFl24W2B4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
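The raw response above is a JSON array of per-comment codes, one object per comment with the four coded dimensions. A minimal sketch of parsing and sanity-checking such a batch before storing it — the allowed category values below are inferred from the codes visible on this page, not an exhaustive codebook:

```python
import json

# Allowed values per dimension, inferred from the examples on this page;
# the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "distributed", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose codes
    are valid on every dimension."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
    ]
```

Records with an unknown category on any dimension are dropped rather than stored, so a malformed model output surfaces as missing rows instead of silently polluting the coded dataset.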