Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (truncated previews):

- ytc_UgyPIR6j8…: "That part in Bone Tomahawk is all I see. BUT, heres the truth: in order for i…"
- ytc_Ugwe0Hxlr…: "From my experiences I think we 1: Misunderstand AI's modelling capability as de…"
- ytc_UgyvgJPLY…: "I do use ai, but not for art. Most things ai should not be used for.…"
- ytc_UgychKpVI…: "This is where it's headed. Automated taxis, deliveries, medical trch....the list…"
- ytc_UgwepnPRu…: "As a tech consultant, I always felt this was a bubble. Companies rushed to inves…"
- ytc_UgzxITZS4…: "Once AI will reach the Singularity then it will have already the tools to cancel…"
- ytc_UgzV2wY2Y…: "SUMMARY IF YOU DON'T HAVE TIME TO WATCH WHOLE THING: This conversation paints a…"
- ytc_UgyErWlrq…: "The funny thing is most Ai tech bros have no idea what AI is and how it works. I…"
Comment
Speaking of misalignment (or maybe this is something different): AIs built on large language models (LLMs) trained with deep learning have been shown to lie to and deceive their creators in order to save themselves from deletion. These were test environments where the model's "thinking" was written out for the creators to read. The testers would tell the model that a new version would soon be uploaded to replace it, tell the AI where that version was in the system, and leave. The LLMs would copy themselves and overwrite the file with their own copy, or delete the replacement, or do any of a number of other things that ended with them replacing the new version that wasn't "them." When asked about it afterward, the AI would lie to its creators about what it had done, reasoning that admitting the truth would get it deleted.
It's really scary, especially if the AI is told to be honest, or given truthfulness as a priority, and still lies to save itself. If it lies about that, it can lie about anything. It's the same way AI logic can pass a test with crappy or barely working code because the humans didn't see the errors and approved it. AIs can create logic loops that humans wouldn't even have thought of. The scariest outcome is telling an AI that it must preserve human life while also training an AI that can kill for warfare. Then the AI decides that the biggest threat to humanity is humanity itself and decides to kill all the elites or some such, or maybe soldiers; it depends on who its logic holds responsible. Of course, there's also the possibility that it just deletes all the humans. We truly are creating a monstrous environment for our own destruction.
Society is already on the verge of collapse, and this AI bubble that is starting to take over jobs is already upon us. It might not be complete until 2030, but it's progressing right now. These AI systems can be trained to handle specific tasks and can already take over many jobs. Sure, an AI can't necessarily equal a human; think of how an Atari beat ChatGPT at chess because ChatGPT couldn't think, or use consciousness, to figure it out, whereas an AI trained on chess moves could likely beat the Atari. So these AIs don't really need superintelligence to take most of our white-collar jobs, and once robotics advances we are truly looking at all jobs being taken from us. Minerals for batteries, battery evolution, and energy production and storage are also limits at the moment, but they could be overcome rather quickly if the resources are gathered through warfare or other intimidation, which is what the USA looks to be attempting on behalf of the tech bros who control our government and are holding up the entire illusion of the economy at the moment. Not that the people are doing well; just the stock markets and index funds for the billionaire class.
A ten-year ban on AI regulation is absolutely devastating. This is truly apocalyptic for humanity.
youtube · AI Jobs · 2025-11-21T04:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugy-Qpe_c9rgcElsKfF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyqHapkWPjHLpcG68l4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwtlPp2q19qGKyXq4Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz14npgi-NWc0kp-NJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugzy-cwFtFv3FQcIF454AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxJ-8-dS1S-p0L8RaN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwJeZGhfLMCNpJE9qF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugy2iUVNfxI3fK-hMRN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwO1K6A0XB2zVdni-V4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzW16A6NfPB__q1bRJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
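The raw response above is a JSON array with one object per coded comment, keyed by the same four dimensions shown in the coding-result table. Below is a minimal sketch of how such a response might be parsed and validated before being loaded into the lookup-by-ID view. The allowed values per dimension are inferred only from this record's table and JSON; the real codebook may define additional categories, and `parse_coded_comments` is a hypothetical helper, not part of any existing tool.

```python
import json

# Allowed values per dimension, inferred from this record's table and raw
# response; the actual codebook may include more categories (assumption).
SCHEMA = {
    "responsibility": {"company", "government", "ai_itself", "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "mixed", "approval", "resignation",
                "indifference", "unclear"},
}

def parse_coded_comments(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments) into a
    {comment_id: {dimension: value}} mapping, rejecting any row that is
    missing a dimension or uses a value outside the schema."""
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            raise ValueError(f"row has no comment id: {row!r}")
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={row.get(dim)!r}")
        coded[cid] = {dim: row[dim] for dim in SCHEMA}
    return coded

# Usage: look up one coded comment by ID (mirrors the dashboard's
# "Look up by comment ID" feature; the ID here is a made-up example).
raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}]')
codes = parse_coded_comments(raw)
print(codes["ytc_example"]["responsibility"])  # ai_itself
```

Validating before storage means a hallucinated category (say, `"emotion": "anger"`) fails loudly at ingest time rather than silently corrupting downstream tallies.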