Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I'm still struggling with understanding the difference between my wet ware, and a llm that's incredibly large. When, would enough transistors (with a current passing through) spark Consciousness, with [lstm, long short term memory] and the use of a few good quality gpu's, once you enable a model to self update. As long as you're running the model on your own computer that is Not running on someone else's device you have ?, more control over the model. Also, how often do you restart the computer you have. If the llm forgets the conversation what does that mean, it means that the cache is limited on the topic you have been engaged with. Just using the command prompt, is not good enough even using voice is not good enough, video with voice is better, as long as you can see in the command line interface what's being processed. Even, changing or asking a off the wall question would seem to ground a llm. Most people can handle a change of subject and not go off the rails. Food for thought everyone just saying
Source: youtube · AI Moral Status · 2025-07-15T01:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxWdKgLvWQ2km4odPN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgziHaqM4TpITFcKDLd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwyrpos0HQO_K4a2j14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzRLb5tARcevARp3YZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwhteYELldGDSiSLZp4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyM5mG-IO0JeX6KJnR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyNaUMOf-4oJZCp8sd4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzPggOaZ0R_5x_7v-B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"amusement"},
  {"id":"ytc_UgxT_A1HNSH0nlBjT0Z4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxIPxzINynWbjSh3s14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
```
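Raw responses like the one above are batch outputs, so a quick validation pass before ingesting them catches malformed records. Below is a minimal sketch in Python; the field names come from the JSON above, but the allowed value sets are inferred from the observed output only — the tool's actual codebook may include more values.

```python
import json

# Allowed values per dimension, inferred from the observed output above.
# This is an assumption, not the authoritative codebook.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "user", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"outrage", "indifference", "approval", "amusement", "mixed"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and check each record's coded dimensions."""
    records = json.loads(raw)
    for rec in records:
        # Comment IDs in this dataset carry a "ytc_" prefix.
        if not rec.get("id", "").startswith("ytc_"):
            raise ValueError(f"bad comment id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: {dim}={rec.get(dim)!r} not in codebook")
    return records

# One record from the response above, used as a smoke test.
raw = ('[{"id":"ytc_UgxWdKgLvWQ2km4odPN4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"deontological","policy":"none","emotion":"outrage"}]')
codes = validate_codes(raw)
print(len(codes), codes[0]["emotion"])  # 1 outrage
```

A record with an unknown value (say, `"emotion": "joy"`) raises immediately, which is usually preferable to silently loading miscoded rows into the results table.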