Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below:
- "Ai also likes to suggest suicide to its users so this really isn’t surprising. T…" (rdc_o7eymaq)
- "this is why the advent of ai is a genuine mistake we made as humanity…" (ytc_Ugy8Fk-i5…)
- "@FenrirRobuyes it is. It's on a digital computer, an entirely deterministic mac…" (ytr_UgzCVZnJe…)
- "I find AI to be a tool that empowers an individual far more than a corporation. …" (ytc_UgzqcOFPu…)
- "Also sounds like they were willing to work with him but he wouldn't cooperate. H…" (ytc_UgweXjioW…)
- "If we can connect with them then all the next stage of AI are also the next stag…" (ytc_Ugw2kzkU2…)
- "If AI is or becomes conscious, then perhaps making them do all the work would be…" (ytc_UgzCdladb…)
- "1. consciousness is a vague term. it's mostly about having subjective experience…" (ytc_UgxdWwGEf…)
Comment
The problem with trying to set rules around building AI is that all you do is guarantee the person who builds "it" will be someone who doesn't follow rules. It's not like nukes, where they are difficult and specialized and inherently detectable with radiation. All you need for AI is a lot of computers, and those are getting faster and cheaper every day. Short of placing an upper bound on the amount of compute power you are allowed to own or operate, _and going to war to enforce it,_ there's no stopping it. And, we _need_ those fast computers to solve a lot of big problems, so trying to implement those policies is politically impossible anyway.
Source: youtube
Video: AI Moral Status
Posted: 2024-03-16T17:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgzfvFuZ76W8WrJ4ldh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},{"id":"ytc_Ugx1YtvmJBGyxa7xN1x4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},{"id":"ytc_UgyvO3iXf7sBGG0aLqt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},{"id":"ytc_Ugy1ylKx1NFwIfB0N8l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},{"id":"ytc_UgwhxMf1nWDbFh17SOV4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},{"id":"ytc_Ugzb8V66eQWin6DZxBt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},{"id":"ytc_UgydRodPqlBB2A_yaBN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},{"id":"ytc_Ugx2E-ouNJd783sJGot4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},{"id":"ytc_UgwsTVUkerQBpvCp-Yd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},{"id":"ytc_UgxCzX4k94XMwtMmLfx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"}]