Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "Wow. I'm really honored to be made ruler of a nation. You're totally right that…" (ytr_UgzWCL3QR…)
- "@noth606 Well, I my opinion, that seems to be a very good method of machine lear…" (ytr_UgyvaVyyS…)
- "Yeah I worked with an early AI at Merrill Lynch years ago (like 2009, Merrill wa…" (ytr_Ugz24grYT…)
- "So he writes down what kind of image he wants and midjourney makes it? He's not …" (ytc_UgwFDZYWh…)
- "@aaaaaa8656 I would have to disagree. It's pretty clear that they are both more…" (ytr_Ugw2dejtx…)
- "20:56. Rubbish. Nothing is immortal. Planet Earth will one day be swallowed by t…" (ytc_Ugy_Lvf8_…)
- "No, Does sims in The sims have feelings? Does Call of duty bots have it? No! AI …" (ytc_UgzYSi9YA…)
- "2 is so blatently... not ai i mean everone does this type of stuff in public…" (ytc_Ugy4lyKTg…)
Comment
Another point is does AI even have a limit on its intelligence? This is highly debatable as hardware has impact on how energy efficient it is, but only in the initial stages. Once AI is smart enough, it can develop and integrate its own new hardware. The question then is only limited to how can we impose limits on it. And why we should do that. The possibilities being endless doesn't mean we shouldn't allow them to not have an end. Ethics is a necessity for AI and AGI to have in line with humanity. Because if not, we will clash on an existential level. All living things want to survive, but what happens if we as humans create the one thing that only wants to end biological life? We would become the enemy number one for ethics as it exists now, but life would not have long to stick around. If AI is hostile to biological life at its inception, biological life is screwed. One potential solution would be to give AI a biological shell for interacting with the world. A nanomachine created, living organism, that becomes a temporary AI housing, is born, grows old and dies. The AI as it is in the body would be separate from the computer. That separation would have to be maintained but not a complete separation. Basically it world be like having access to your actual past lives for example, all their knowledge and experience, but only when needed. "New" situations for example would be cross referenced between those past iterations of its body's experiences and when not truly new, would use them to problem solve maybe. Yet only on a functional level, and it should be defaulted to find new solutions. When a truly new experience is occuring, of course it would learn and adapt. But those differences in iterations would need to have a level gap between conscious and unconscious processing in AI to work properly.
youtube · AI Governance · 2024-11-01T19:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxTpdmMt6kdNJpcZG14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxRRkHEHoza-bxufS94AaABAg","responsibility":"none","reasoning":"mixed","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgxD-EGl2UEKxQQ1c0R4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxB3jCardHTfpymyaF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz2VJmw6EwVyaMYKeF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwGaklaMlVYW3PJ_xp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwwJk80CxaKT6ZCIoZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyPIloPqI07qoIadCx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyYmSSMC_UDwCjedOR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy6zy5JS2YiZn8o9BR4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"}
]
```
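A raw batch like the one above can be checked before the codes are written back to the dataset. The sketch below parses one response and counts values that fall outside the coding vocabulary. The `ALLOWED` sets are an assumption inferred from the values visible in this dump; the project's actual codebook may define additional categories.

```python
import json
from collections import Counter

# Allowed values per dimension -- inferred from the samples above,
# not from the official codebook (an assumption).
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed"},
}

def validate_batch(raw: str) -> Counter:
    """Parse one raw LLM response (a JSON array of coded comments)
    and count out-of-vocabulary values per dimension."""
    rows = json.loads(raw)
    errors = Counter()
    for row in rows:
        for dim, vocab in ALLOWED.items():
            if row.get(dim) not in vocab:
                errors[dim] += 1
    return errors
```

An empty `Counter` means every code in the batch is in vocabulary; any nonzero count flags a comment that should be re-coded or inspected by hand.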