Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- `ytc_UgxCh0qIr…`: "This is true and. I agree 100% with this statement. I am concerned also about it…"
- `ytc_Ugx9hePrl…`: "AI can only compute input, If it gets bad or corrupted input same will be true …"
- `ytr_UgwyXIbtd…`: "It seems like you're referring to a part of the video where the robot named Soph…"
- `ytr_UgzvpnQcq…`: "@goldnugget-e5j not all AI is based on stealing art. There is AI based software…"
- `ytc_Ugx07S6VM…`: "Remember. If A.I. goes rogue , all power must be cut. And everything from ah ce…"
- `ytc_Ugym4BofD…`: "This will come across as...well, you can name it as you like, but I have done th…"
- `ytc_Ugwe2qNwj…`: "Nothing wrong with AI art. It’s better quality than a lot of peoples artwork alr…"
- `ytr_Ugzq3UhrE…`: "That's an interesting perspective! In the video, Sophia emphasizes her continuou…"
Comment
1:37:57 That's what all leading AI companies are explicitly trying to build. No assumptions needed. The available evidence indicates that they are poised to succeed at creating exactly the thing that the available evidence indicates would be extremely dangerous. Why would we give them a free pass to do this just because of some skepticism that they will succeed at their horrible plans? Clearly they should not be allowed to do what they themselves claim is one-to-one with "the bad thing".
It has not always been like this, and this is in fact extremely different. You will not find the majority of any scientific field anytime in history saying that the technology they are building could result in human extinction. You are rewriting the statements of experts in your head before even processing them, interpreting every cautious understatement as brazen overstatement.
Source: youtube · Posted: 2025-11-20T23:2… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugxhev4NGxygLF8oZMF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxDuXnyJgV4hXBoO4B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwJ_Utk815mESSL_xd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugxf2ysfrcjwOnYW4F54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxmcFJw3kKLiERMxy54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxL0H9m8rS1m5QivgV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgyFFyGyzTCs1cqrp2N4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwMyuR03RQrhVBnhxp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxcR4GjDOFwp7z_kMd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_UgzG-M5F4kw2zM21MZh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
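A raw response like the one above can be turned into per-comment codings with a few lines of parsing and validation. The sketch below is illustrative only: the function name `parse_codings` is hypothetical, and the allowed value sets are inferred solely from the values visible in this sample output, so the real coding scheme may include additional categories.

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# Assumption: the actual codebook may define more categories than these.
ALLOWED = {
    "responsibility": {"none", "company", "developer", "government", "user"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "ban", "liability"},
    "emotion": {"indifference", "fear", "outrage", "approval", "mixed"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: coding}, rejecting bad values."""
    rows = json.loads(raw)
    out = {}
    for row in rows:
        coding = {dim: row[dim] for dim in ALLOWED}
        for dim, value in coding.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{row['id']}: unexpected {dim}={value!r}")
        out[row["id"]] = coding
    return out

# Hypothetical usage with a one-row response:
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
codes = parse_codings(raw)
print(codes["ytc_example"]["policy"])  # regulate
```

Validating against an explicit value set catches the common failure mode where the model invents an off-codebook label, so bad rows fail loudly instead of silently polluting downstream counts.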