Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
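A lookup like this only needs the stored raw responses indexed by comment ID. Below is a minimal sketch of such an index, assuming each raw response is saved as a JSON array of objects with an "id" field, like the one shown under "Raw LLM Response" at the bottom of this page; the directory name and function name are illustrative assumptions, not this tool's actual code.

```python
import json
from pathlib import Path

def build_index(raw_dir: Path) -> dict[str, dict]:
    """Map each comment ID to its coded entry.

    Assumes every *.json file in raw_dir holds one raw LLM response:
    a JSON array of objects that each carry an "id" field, as in the
    example at the bottom of this page.
    """
    index: dict[str, dict] = {}
    for path in sorted(raw_dir.glob("*.json")):
        for entry in json.loads(path.read_text()):
            index[entry["id"]] = entry
    return index

# Usage: fetch the coding for one comment by its ID.
index = build_index(Path("raw_responses"))  # hypothetical directory
print(index.get("ytc_Ugw4HRTNK7LG8Z_guIZ4AaABAg"))
```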
Random samples

- "Instead of trades, we need people building robots. Take your tech skills and ups…" (ytc_Ugws2nD4j…)
- "And the crazy thing is we are subsidizing AI boom with the insane electricity pr…" (ytc_Ugx3-KkpU…)
- "It’s going to come for every field. There will be rich CEOs and a bunch of AI bo…" (ytc_UgydRh_0w…)
- "Current ai art is basically a baby lmao / Remember how supercomputers were the siz…" (ytc_UgwMkO0La…)
- "Agreed, you are right AI creator should be careful, recently I started exploring…" (ytc_UgwWCvy_z…)
- "It isn't taking or copying anything. The trained AI models are too small to stor…" (ytr_Ugzrxkai9…)
- "I work with AI every day in tech. Its not great.... I spend a lot of time correc…" (ytc_UgzdvGrIL…)
- "No, no, NO!!! One need only be of optimum intelligence to succeed & defeat an op…" (ytc_UgyKvMmqy…)
Comment
My respect for this old man has increased manyfold with this interview. He really knows his stuff; IMO he has a better understanding of humans and AI than most out there (even the experts, not just the general population). His take on consciousness/emotions especially is exactly what I think about it. I liked his example of replacing a single neuron and then more, etc.; it's simple and conveys the point.
I usually explain it like this: where do you think consciousness/emotions/free will etc. stop or emerge?
Does a human have those things? You would probably say yes.
A baby human?
A dog?
A fish?
An insect, like an ant?
Or something even smaller or more short-lived, like a mayfly?
A plant with complex chemical signaling, movement towards the sunlight, "smart" regulation of water usage, and symbiotic sharing?
A crystal growing in a complex but structured pattern?
DNA, and the way it regulates/repairs/copies itself, basically being self-aware of damage and of indirect needs like activated/inactivated genes?
Where do people draw the threshold? Isn't it all the same, just scaled up in complexity and therefore in degrees of freedom in the reaction (output) to the environment (input) and in the way the input can be processed (thinking)? In the end, "life" is just inanimate mass that happened to achieve a pattern that self-replicates, and which therefore happens to be able to adapt and preserve itself over time. It's still all causal. Just because you can't know your next emotion/decision etc. doesn't mean it isn't causal; it just means you don't have all the causal data influencing the outcome available to you. I like to say we are SUBJECTIVELY PSEUDO-CONSCIOUS, have pseudo-free-will, etc. There is no difference between us "forcefully" encoding emotions (reactions to certain scenarios) into AI and NATURE programming them into us via our DNA. We have emotions because they were a causal, logical consequence of this self-replicating pattern adapting to the natural laws affecting the mass in our environment. Anything we ever did, do now, and ever will do was, is, and will be causal.
Another thought experiment I like to do, to show people that they have ALREADY accepted this concept intuitively, finding no flaw in it as long as it doesn't concern them or make them feel less relevant/special, is this:
Have you ever watched a time-travel movie? One where they go back in time? Let's say YOU go back in time. What's the first rule in those movies? DON'T change the past. Any small change you make can CAUSALLY influence history and have HUGE butterfly-effect ripples. E.g. you buy yourself a coffee; now the guy behind you is waiting two minutes longer; now he gets to the street crossing two minutes later, gets into an accident and dies. Every human he would have been an ancestor of will never exist, anything he would have worked on will be different, etc.
BUT: if YOU aren't allowed to make any changes to history, YOU assume that EVERYONE ELSE would do exactly what they had done before IF NOT GIVEN NEW CAUSAL INPUT. In other words, they DON'T have free will; they make the CAUSAL decisions they would always have made, because they simply act according to their CURRENT BEST KNOWLEDGE IN THEIR OWN BEST INTEREST. They are simply INCAPABLE of doing anything other than what they did before (you went back in time) as long as you DON'T change anything.
So any decision-making process we experience, any free will, etc. exists merely because we perceive only a fraction of the causality affecting us, which renders us unable to predict EXACTLY what happens next. If we had all the data points affecting us (down to the very smallest particle, which in theory means the entire universe), we could make exact simulations by processing that data. And suppose we knew that not for the entire universe but for the entire Earth? The entire state, city, house, room, individual, individual's brain? We could still make proportionally good approximations of what is going to happen next (with each causal step in the chain becoming more uncertain). And that's what AI will be SO much better at than us. We have a hard time remembering 10 things at once; play Memory and you will see. That's like 10 causal factors influencing you that you can be aware of. Imagine you knew a billion? Your decision-making and educated guesses/predictions would be SO much better. E.g. if you wanted to cross a street and were aware of how long the light stays green, the distance between both sides, every single car on the street with its speed and braking capability, all the people walking across with you and what they carry, how fast they will walk, their mass, etc., you would be much less likely to get into an accident and would make a much better decision about when to cross. It would also feel much less random and more definite when to cross, reducing the perception of free will, because the solution becomes more obvious and narrowed down (less of a spectrum, and a smaller interval as an approximation).
youtube · AI Governance · 2025-06-21T21:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_Ugwe6p494MSZUiKNeFp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyXcGe1ucGGXsRVGct4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxcVdoVb4SA9XtXZdV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugw4HRTNK7LG8Z_guIZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxQJeg5xFxp-OJEI5p4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyQTRHtHfaSrhO1z8V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgyoKPVrHozCtQDY-xt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzokjAV7-XipTOGuV54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"outrage"},
{"id":"ytc_UgzsnsuMY5cJHqOXw8N4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyI2T7eXrZ-2rJlHvN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]