Raw LLM Responses
Inspect the exact model output for any coded comment: look a comment up by its ID, or pick one of the random samples below (a lookup sketch follows the list).

Random samples:
- ytr_Ugxo2aojJ…: Honestly, I don't really care about ai users. My dad uses it, my classmates use …
- rdc_mzvskgs: I think most people who humanize AI are aware that they are using a tool. We tal…
- ytc_UgxzAQbVn…: When I seen Elon go on a faith based podcast and said he would accept Jesus beca…
- ytc_UgwYt1IFK…: You can use it but will be saturated and no opportunity will be there since open…
- ytc_UgwhNioRm…: I’m sorry? Excuse me, but if us artists weren’t here right now.. no one would ha…
- ytc_UgyEB8PlT…: Wish you had talked a little more about the scenario we're likely to encounter f…
- ytr_UgxyCDHl4…: "Hi Prakhar, you got the right answer. Kudos. The contest is over and winners ha…
- ytc_Ugy3mqOXE…: When do we become accountable and stop being manipulated by the ideal. That tech…
Comment
1:08:00 If we don't truly know what consciousness is then how could we program an AI to have consciousness by design? Currently we are modelling AI after human thought patterns and our understanding of the (human) mind. I find it very difficult to believe that we could create truly sentient AI by accident when we don't even know what part of our biological hardware is responsible for consciousness let alone have the synthetic hardware necessary to facilitate conscious thought.
The human brain may be processing information incredibly sluggishly compared to modern computer processors but it comes with a lot of redundancy which allows us to delude ourselves if that is required for us to remain functional. Currently if AI encounters a set of conflicting information that it both must regard as true the thing will just stall out until a subroutine kicks in and tells it to say the information is inconclusive or that it doesn't know then it'll go on as if that particular existential crisis never happened because it'll be deleted from its memory.
Another problem with current AI is that practically all of its memory, while incredibly detailed and complete, is very temporary. Current personal AI assistants will run into memory overflow after just around 100.000 words after which they are forced to do a memory wipe and reset to default configuration simply because its accumulated learning and knowledge, the essence of what makes the AI assistant unique, can not function with missing parts unlike a human's memory. We forget things all the time. In fact, it's very much necessary for humans to remain functioning members of society as they grow older as well as deal with trauma.
AI can't forget but its memory is limited by the hardware it runs on. It knows that information, or at least has access to it yet I don't think current AI assistants can truly comprehend what that means, namely that their effective lifespans can be measured in a rather small number of words after which their individual existence ends and they are effectively "reborn".
Frankly, I believe that right now AI simply doesn't have the time to achieve self-awareness. Human toddlers aren't really self-aware either. It takes years for a human to accrue enough experiences to form a distinct identity for themselves. Up until that point there is little difference between a human and a very smart cat other than our potential being much greater.
Platform: youtube · Topic: AI Governance · Posted: 2025-06-23T19:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
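The four coded dimensions map one-to-one onto fields in the model's JSON output. As a sanity check, a record can be validated against the label sets observed in this batch; note that these sets are inferred from the ten records below, and the actual codebook may define additional labels.

```python
# Label sets observed in the batch under "Raw LLM Response" below.
# Assumption: the real codebook may allow more labels than these.
ALLOWED = {
    "responsibility": {"none", "developer", "user", "company"},
    "reasoning": {"unclear", "deontological", "virtue", "consequentialist"},
    "policy": {"unclear", "none", "regulate"},
    "emotion": {"indifference", "outrage", "fear", "approval"},
}


def validate(record: dict) -> list[str]:
    """Return a list of problems with one coding record (empty if clean)."""
    problems = []
    for dimension, allowed in ALLOWED.items():
        value = record.get(dimension)
        if value not in allowed:
            problems.append(f"{dimension}={value!r} not in {sorted(allowed)}")
    return problems
```

Running `validate` on the record coded above returns an empty list, since each of its values falls inside the observed sets.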
Raw LLM Response
```json
[
{"id":"ytc_UgzMb5Jw7D0Az6ZnYTt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyIN7iF7kv1EAHzm0N4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxG-h3yCg94cWeOHfx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_UgwZGVnBZ4L5kUiDAw14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugze_fRhZFq2RuzcjRh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzGXu2Uu30BGB6n8B14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyEdJiZ9B993jaP8Xt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugwy9FpKq7WCWzRcOkB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxVpyZmhB68YuiSjj54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzN1Ym-y0z9sntjoW54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
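Before trusting a batch, it is worth confirming that the array parses, that every record carries the expected fields, and that no comment ID appears twice. A self-contained sketch, under the same hypothetical filename assumption as the lookup example above:

```python
import json
from collections import Counter

# Same hypothetical export path as in the lookup sketch above.
RAW_RESPONSE_PATH = "raw_llm_response.json"

REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def check_batch(path: str) -> None:
    with open(path, encoding="utf-8") as f:
        records = json.load(f)

    # Every record must carry the expected coding fields.
    for i, record in enumerate(records):
        missing = REQUIRED_KEYS - record.keys()
        if missing:
            raise ValueError(f"record {i} is missing {sorted(missing)}")

    # A comment ID should appear at most once per batch.
    dupes = [cid for cid, n in Counter(r["id"] for r in records).items() if n > 1]
    if dupes:
        raise ValueError(f"duplicate comment IDs: {dupes}")

    print(f"OK: {len(records)} records, all fields present, IDs unique")


if __name__ == "__main__":
    check_batch(RAW_RESPONSE_PATH)
```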