Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Nah... AI will never be "god-like". Humans will never be able to transfer their …" (ytc_UgwS0mTfm…)
- "If you watch carefully you can actually notice a giveaway that it's not real, ve…" (rdc_mtly45g)
- "Alternatively you could just get chatGPT to write it then just read it and rewri…" (ytc_UgyeMuYwh…)
- "This does a great job trashing the people who presume to belong at the table bec…" (ytc_UgyOhAwZ3…)
- "For a better outlook that even a Marxist should like - read Stellar by Tony Ceba…" (ytr_UgxkhrHP8…)
- "We're glad you found the interaction intriguing! If you're interested in more en…" (ytr_Ugx0srczX…)
- "Robot rights matter / Unless the go rouge and kill us / But isn't that the logic fo…" (ytc_UggHT8m2I…)
- "No, it didn’t. ChatGPT readily admits it has no idea what words mean. It is just…" (ytc_UgzLDs48W…)
Comment

> Humans and ethics, rules, law... - it's already exist. And implementation? WW1, WW2, and now wars like in Ukraine and Israel. Humans are aggressive animals - and AI like a child of us all......
> But wait, does all humans aggressive? - we all eat other animals and plants on this planet and even if you think you're fine - its a lie and AI will figured it out.
> So, if applied humans ethics to AI - we will f* themselves!

youtube · AI Responsibility · 2024-02-06T22:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugy3qYN9KNBLo5WGIfp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugzu8isOgQWq3gVdzPN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyu9KPReLR2Gj6ZcuB4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgyLgaXpY30WF_ocst14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx0iYEER2eaEQV1KwZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzRkhYLoHqydt-6l_p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwYTk9eg8cjeLgQ9q14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzl_PKlxrZSemKMx854AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgydBMk_AMTWtmn066F4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwkrwixnVEePVgqvuF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
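The raw response is a JSON array in which each object pairs one comment ID with its four coded dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of the look-up-by-comment-ID step, assuming the model's output parses as plain JSON like the array above (the two sample records below are copied from it):

```python
import json

# Raw batch response from the coding model: a JSON array in which each
# object carries one comment ID plus its four coded dimensions.
raw_response = '''
[
  {"id": "ytc_Ugyu9KPReLR2Gj6ZcuB4AaABAg", "responsibility": "distributed",
   "reasoning": "mixed", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_Ugzl_PKlxrZSemKMx854AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "ban", "emotion": "fear"}
]
'''

# Index the coded records by comment ID so any comment's exact coding
# can be retrieved directly, mirroring the look-up-by-ID workflow above.
coded = {record["id"]: record for record in json.loads(raw_response)}

record = coded["ytc_Ugyu9KPReLR2Gj6ZcuB4AaABAg"]
print(record["responsibility"], record["emotion"])  # distributed resignation
```

A real pipeline would also validate each record against the codebook's allowed values before indexing, since model output is not guaranteed to stay on-schema.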