Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):

- rdc_mt8kn5y: "I was prepping for an interview. I was stuck on a leetcode hard problem the othe…"
- rdc_n7u5mac: "I really think a lot of the people mocking others for finding comfort or connect…"
- rdc_jmfv58o: "Can you highlight some examples of human AI symbiosis? It seems like something w…"
- rdc_l4duiio: "remember when people used to joke feeding an ai the internet would make it want …"
- ytr_UgwjCajKw…: "I agree.... The kid would've never came to chatgpt in the first place if the par…"
- ytc_UgxAyEvne…: "These are rather stupid "Insights." If you tell a AI or anyone. "You will be shu…"
- ytc_UgwS_mqH4…: "Hecklefish is so adorable! Thank you for taking care of this sweetheart! I rem…"
- ytc_UgzDOI8VZ…: "Copilot always reacts to your coding style. The moment you get sloppy it will su…"
Comment
The video is interesting but, as someone who is working on developing AI tools, there is a massive chasm between the AIs of today and an ASI that has the means to kill us. First of all, AIs do not "think". AIs have no idea of what a human or an AI is. All they have is a vector map of tokens/concepts that "human"/"AI" is related to.
When posed with these fictional scenarios you gave in the video, it can be argued that AI engages with agentic misalignment simply because it is fed a ton of human data, with examples of humans exhibiting self-preservation or killing others when aligned with previous human "goals" or "aims". Admittedly, this needs more research, but it's a leap of logic to claim on the back of this that AIs can "reason" and "think". I know that this is never explicitly stated, but you use language that insinuates as much.
The reason it is so important to remember that AIs cannot "think" is because, if all AIs do is adjust weights and probabilities based on human generated data, then there is a solid argument that AIs can never become more "intelligent" than humans. This is because AI will never be exposed to data generated by a superhuman intelligence (SI), so how could it possibly produce any output based on this fictional SI?
Overall, I think this topic of discussion is very worthwhile and sorely needed in a non-doomer manner. AI companies do need proper regulating, but I think the danger that AI will become superhumanly intelligent, thus potentially triggering the exponential ASI boom, is massively overblown. In my opinion, issues such as companies attempting to lay off everyone to replace them with AI, copyright laws/content theft and people giving up critical thinking skills to AI (which remember cannot think) are far more salient than the danger of ASI, at least in the short term.
Source: youtube · Topic: AI Governance · Posted: 2025-08-26T15:2… · ♥ 108
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxB-7V4zW9pint2Llx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzayTktYQLHMCwZLVp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxtkYL7gALyAeBzomF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy1onCVK2MzAw1C2u54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwI9p6oBenlaFOZWxJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
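For anyone post-processing these raw responses, here is a minimal sketch of parsing the model output and looking a coding up by comment ID. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the JSON above; the helper name, the fixed dimension list, and the validation behavior are assumptions, not part of the tool.

```python
import json

# Abbreviated raw LLM response, in the same shape as the array shown above.
RAW_RESPONSE = """[
  {"id": "ytc_UgxB-7V4zW9pint2Llx4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugy1onCVK2MzAw1C2u54AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]"""

# Dimension names inferred from the examples on this page;
# this is not an authoritative codebook.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse the model output and index each coding by its comment ID,
    rejecting rows that are missing any expected dimension."""
    index = {}
    for row in json.loads(raw):
        missing = [d for d in DIMENSIONS if d not in row]
        if missing:
            raise ValueError(f"{row.get('id')}: missing {missing}")
        index[row["id"]] = {d: row[d] for d in DIMENSIONS}
    return index

codings = index_codings(RAW_RESPONSE)
print(codings["ytc_Ugy1onCVK2MzAw1C2u54AaABAg"]["emotion"])  # indifference
```

The validation step matters in practice: LLM output is not guaranteed to be well-formed, so a lookup table should fail loudly on a malformed row rather than silently drop a coding.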