Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "@Dawn95284 bro quoted me and thought he did something 😹 the average artist is sh…" (ytr_UgyNgtA4t…)
- "i used waymo a couple of times recently and it was a breeze ....i would use it a…" (ytc_UgwFA6Qml…)
- "Because they're having a hard time comforming, like having been fired by a greed…" (ytc_UgwFaRIeR…)
- "I feel as though the (toxic) people that defend the AI problem just simply are t…" (ytc_UgxRZGtA8…)
- "@nazdumanskyybtw I made something that looks super good with ChatGPT in like les…" (ytr_UgxGXm09_…)
- "that AI babysitter is partially true. the most time consuming task in vibe codin…" (ytc_UgzNAe88Y…)
- "Lol, Sam Hypeman and OpenAI just got a reality check with the recent tech stock …" (rdc_m9i6hbb)
- "AI art is only cutting away uncreative, repetive pseudo art - and most pop cultu…" (ytc_UgyV4gkc8…)
Comment

> True AI would just be a program with the ability so select any random purpose for existing. Learning and acting on that purpose and changing that purpose for any random reason it chooses. That would make it extremely dangerous. It would be like allowing thousands of random chemicals to just mix together and hope something good happens instead of something bad.
>
> The word AI is often used for any program that can learn but that's not true AI. You can call it adaptive or smart perhaps but it's not AI. If a program can change it's own coding it must be limited to some degree to achieve a desired result. Just because a program can collect data, analyze that data and write new algorithms that causes better outcomes for a given purpose doesn't mean it's AI. It might be a super amazing program but it's not true AI.
>
> So as the speaker said, we still need to define the terms because I don't think we all agree on them. Some people may think if you only limit a program in ways that humans are limited then you could call it AI if it passes some test. I don't agree with that at all. I may have physical limits but I really don't know of any limits I have on my thoughts. Perhaps my survival instinct limits my thoughts and actions to some degree. Perhaps my fear of consequences limits my choice of purpose and actions. But life can exist without those limits. They may be like lemmings walking off a cliff but if they were living cells we'd still call it life. Just not intelligent life. If we made our AI fearless then that would again be very dangerous.
>
> So I think we should expand the meaning of AI to different types such as True AI (TAI) which should be illegal everywhere, Limited AI (LAI) and very limited AI could just be called Adaptive Programing (AP) or something. If all an "AI" program can do is something simple like alter your car's windshield wiping pattern based on rain analysis and previous wiping results then that's not true AI in my opinion. It's a cool smart program but it's not AI if you ask me. Anyways, just my thoughts.
Platform: youtube · Video: AI Moral Status · Posted: 2022-07-08T15:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyufjCXtkE17ab4Oah4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzDdzmcOBt42v1Uhft4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz7B9AoWfrLlFOTy094AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyyQ3pOVPeZR1QJkwp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzCTjKJf8-FLT9lb-l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```
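The batch response above is a JSON array keyed by comment ID, one record per coded comment. A minimal sketch of how such a payload could be parsed, validated, and indexed for lookup by comment ID follows. The allowed label sets here are only inferred from values visible on this page; the real codebook may define more, and `index_codings` is a hypothetical helper, not part of the tool.

```python
import json

# Assumed controlled vocabularies, inferred from labels visible in the
# coding result and raw response above; the actual codebook may differ.
ALLOWED = {
    "responsibility": {"none", "developer", "company", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "unclear"},
}

def index_codings(payload: str) -> dict:
    """Parse a raw LLM batch response and index records by comment ID,
    rejecting any label outside the expected vocabulary."""
    by_id = {}
    for rec in json.loads(payload):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={rec.get(dim)!r}")
        by_id[rec["id"]] = rec
    return by_id

# Two records excerpted from the raw response shown above.
raw = '''[
  {"id":"ytc_UgyufjCXtkE17ab4Oah4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzCTjKJf8-FLT9lb-l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]'''

codings = index_codings(raw)
print(codings["ytc_UgzCTjKJf8-FLT9lb-l4AaABAg"]["policy"])  # ban
```

Indexing by ID mirrors the "Look up by comment ID" affordance on this page: once the response is parsed, any coded comment can be retrieved in constant time from the dictionary.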