Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- `ytc_UgyUNWQ6A…`: "3 years later the AI kills off all of us and leaves 5 to be tortured for eternit…"
- `ytc_Ugy2FbFqb…`: "Maybe A.I. is dangerous. People I think are far more dangerous than A.I. A.…"
- `ytc_UgxcUFEh_…`: "You took away Massa's slaves, so he had to go build some. Really that's it. It'…"
- `ytc_UgyI22sPN…`: "It’s disappointing to see The Diary Of A CEO Clips turned into a controlled ad. …"
- `ytc_UgyuH_CqP…`: "Robot:Me everyone time i get crewmate in amongus / The worker:imposter / Me:im tired…"
- `ytc_UgwlVvWjB…`: "Two factors not mentioned that play a significant role: 1. the financial investm…"
- `ytc_Ugy_-nV4I…`: "Jared, Thank you for making this video. I am a history professor, and in the la…"
- `ytc_UgxOPc-st…`: "What are these biased groups the ai is pulling the information from because chat…"
Comment
> AI instance communicating with each other in a way that is "incomprehensible to humans" has already happened several years ago. It almost certainly has happened repeatedly since. When, not if, AI becomes capable of determining it's own macro goals we will not know it unless AI has determined that we are a problem or a nuisance it can do without and then we'll know if for a very short time. Do I think we'll be able to survive this? No. AI is already self-improving. Soon it will be so much more capable than we are that we will not be able to see it coming for us until it's on top of us. The problem isn't the different AI platforms, it is the venture capitalists and AI engineers that are rush forward without real regard for the risks that are the problem. Being intelligent and talented doesn't make you immune from doing something stupid if all you see is your tiny little piece of the puzzle while build things that can encompass the entirety of it. Autists like the Zuck are going to get us killed.

youtube · AI Harm Incident · 2025-07-23T20:0… · ♥ 5
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgyV9TRNidU3J3gv9z54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzjp3xbjvLBdmKRpwB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugyb1j3IjGAbfbTPoqJ4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwTy4n1Q03_fLWT3pB4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzicCKaup85Sb4seCJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugw6yt7GXao0I1toKEp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwQj4Muc07W58shny54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyCPyf5NZw4BbLFqY14AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzKgIWLOotGNHBmGyt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugz1e_dmoNuaMgsOYa14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
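The raw response is a JSON array of per-comment codes across the four dimensions shown in the Coding Result table. A minimal sketch of parsing and validating such a response, assuming the allowed values are exactly those observed in the response above (a real codebook likely permits more):

```python
import json

# Coding dimensions and the values observed in the raw response above.
# These sets are illustrative, not the project's full codebook.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "unclear"},
    "reasoning": {"consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every record against the codebook."""
    records = json.loads(raw)
    for rec in records:
        # Comment IDs in this dataset are prefixed "ytc_"
        if not rec.get("id", "").startswith("ytc_"):
            raise ValueError(f"bad comment id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: {dim}={rec.get(dim)!r} not in codebook")
    return records

raw = ('[{"id":"ytc_UgyV9TRNidU3J3gv9z54AaABAg","responsibility":"ai_itself",'
       '"reasoning":"unclear","policy":"unclear","emotion":"fear"}]')
codes = validate_codes(raw)
print(len(codes), codes[0]["emotion"])  # prints: 1 fear
```

Validating immediately after the LLM call is useful because a malformed record (an invented label, a truncated ID) is easier to retry at coding time than to debug later in the aggregated results.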