Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "You can tell the BBC haven’t quite managed to master the subtleties of good AI f…" (ytc_UgwfAYmQX…)
- "Program code trawling through Big Data with filters imposed for bias etc does no…" (ytc_UgwI7Zqk4…)
- "Cool story. The sun will take the planet back to the stone age in a timeframe t…" (ytc_UgyoUxhhL…)
- "A software program can only reproduce what we already created. They can cut up &…" (ytc_Ugxfb43QR…)
- "How would I be able to fake the creative process using AI? Will there be a point…" (ytc_Ugw8Ap-GK…)
- "There can't be two prominent evil ai systems, because by the time one is produce…" (ytr_UgyhKLjnX…)
- "It's unethical to even consider a future family if you're aware of degrading con…" (rdc_emnt2ni)
- "7:25 This is utter bullshit. The tech industry is bereft of innovation and despe…" (ytc_UgxoYlzvD…)
Comment
Before I stopped my YouTube channel and shows, I kept saying stop developing AI, that was around 2016... 8 years later it seems they did opened Pandora's box and will doom us all... Those comments to the 1%, I told you. I know it would be an amazing world if AI became super intelligent, get them to do everything while we go on endless vacations and be free to do anything, but the risk of it destroying us all seems this kind soul hits the nail on the head and I feel the same way.
10:00 onwards about we are already in a simulation. I've had experiences where I thought and it became to be, I already told my friends what was going to happen and it became to be, everything I thought and how it became to be. That just one experience was a soccer game, but how could it be? How could we not already be in a simulation?
This is hard to even explain my thoughts or write it out, but I will try:
One thing that might stop AGI or AI super intelligence might be love, respect, peace,(morals , religion, god), but why do we need to even have super AI or whatever, but look into our human physiology or our brains. It seems like our brains are basically like AI or cyborgs already and possibly already living in a simulation, like 10:00 Do we as humans explain our own brains or the universe?( Lex I just seen a video which was new to me( (1 year ago, you talked with a astronomer and you said you liked him very much and I liked the conversation as well)) But to sum up my crazy thinking, do we actually know everything in our brains? Do we know everything in our universe? Are we already in a simulation? And if we are in a simulation already, why would we want or need AI or robots/cyborgs if we already the most advanced cyborg/human beings already? If we are already in a simulation, maybe the best thing we should do, have fun like you said, and maybe the end of simulation is to pass away. I think I made a conundrum that may take us eternity to figure it out. All my past experiences of my own life is telling me in my heart deep down that we should be kind, have love, have respect, have peace, and maybe to win the game or end my simulation or go to heaven. I could go through endless hours thinking about this conversation you guys are having and me watching you guys on YouTube and me going over everything, isn't just that a thought exercise or simulation anyways? A conundrum infers it seems, like our brains and AI are already one... Strange but gotta love it.
Thank you Lex and speaker. Love Respect Peace.
Source: youtube · 2024-07-27T16:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugw_txO8Wfge0LCyZ914AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzvr4pBrI4CpXNENPZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyBaHdEoEzHLhMjBRZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy-QG4yrFRjpfMaXIZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugzyd2RBFPd-Zdd40HR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy2qqEO021EydYvH4J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxGNlKwNyrFXMY7gYZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzeYKsWAYwemgdhrax4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxiF7yZzIRx3VF6klZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy0JRcY3jzbyJRxH_V4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
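Since the raw LLM response is a JSON array in which each coded comment carries its comment ID, looking a comment up by ID reduces to parsing the array and indexing on the `id` field. A minimal sketch in Python (the two sample records are copied from the response shown above; `index_by_id` is a hypothetical helper name, not part of any tool documented here):

```python
import json

# Raw LLM response: a JSON array of coded comments. The field names
# (id, responsibility, reasoning, policy, emotion) match the dimensions
# in the coding-result table above. Two records copied from the response
# shown above, for illustration.
raw_response = '''[
  {"id":"ytc_Ugw_txO8Wfge0LCyZ914AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzvr4pBrI4CpXNENPZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]'''

def index_by_id(response_text: str) -> dict:
    """Parse a raw LLM response and index the coded records by comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw_response)
rec = codes["ytc_Ugzvr4pBrI4CpXNENPZ4AaABAg"]
print(rec["policy"], rec["emotion"])  # prints: ban fear
```

In a real pipeline the same index would be built once per response and reused for every lookup, which keeps the "look up by comment ID" operation O(1) after a single parse.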