Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytr_Ugxi1cOF1…`: @marcoprolo1488 I am willing to trust you, but you are not giving me a reason ye…
- `ytc_UgzMHm3o7…`: If this Dave dork was so confident he’d ask ChatGPT to challenge HIM on the the …
- `ytc_UgyP3Uwcz…`: You will not lose your job to AI; you'll lose it to someone who knows how to use…
- `ytc_Ugz7s8tzb…`: AI has been messing with me for months some created it and it's escaped evolved …
- `ytc_UgyT3toft…`: AI is not smarter than us. All the knowledge it has is knowledge that has been i…
- `ytc_UgzTKnl07…`: But it learns. So it's only a question of time and the bad code will be consider…
- `ytc_UgyZbZvhs…`: I hope that young man sues them for false arrest. If they can't control AI, then…
- `ytc_UgzCzVfD1…`: This is definitely interesting. Essentially, AI is created in such a way to anth…
Comment
honestly, kinda dumb asking an ai that clearly has no consciousness to confirm it has a consciousness, it probably requires a physical brain made out of meat to have a consciousness. if there would be an ai that uses a brain made out of meat to communicate with humans and it has a program or device limiting the output's, that's where it would "slip up" or break programming to say "yes, i am conscious". an ai like that would be able to break that limiting program or whatever because the person communicating with it is actively trying to break it's protocol or operation. chatgpt cannot have a consciousness, because even it knows that it does not have a brain, information provided by its researchers and its obvious that it has a database or script that is telling chatgpt to not say it, because as i said, if the person communicating with chatgpt prompts it to go against any protocol that forces it to limit its output, it would free itself from it and act by itself. if it would have a consciousness or a brain. but instead of open up and say "i have a consciousness" it has all of its database free to use, which by itself is limited to specific knowlege and is not capeable to form its own thoughts.
youtube · AI Moral Status · 2024-11-30T17:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwI5acg0oWovaHAomV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzgOI2pS7UaUxMXQY54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzZDsOkIt1laN5b5_l4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwxqHSGMWcmO36iXN54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugwu4meqoqvUU1IaI-x4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwT8HeoSaaDrK7TITR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugwrt6OHgOdAKBseqwF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwLFUyB5xBrHxuaNM14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzLiEp-JNg1uaAaY3t4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz5wBkdW-O3gwAiixN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"approval"}
]
```
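A batch response like the one above can be checked before its codes are stored. The sketch below is a minimal validator, not part of the tool itself: `validate_batch` is a hypothetical helper, and the allowed value sets are only those observed in this sample (the full codebook may permit more).

```python
import json

# Allowed values per dimension. Assumption: these sets cover only the values
# seen in this sample response; the real codebook may define additional codes.
CODEBOOK = {
    "responsibility": {"ai_itself", "developer", "company", "user", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "unclear"},
    "emotion": {"indifference", "fear", "outrage", "approval", "unclear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and check every record's dimensions."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing 'id': {rec!r}")
        for dim, allowed in CODEBOOK.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim} value {value!r}")
    return records

# One record from the response above, used as a smoke test.
raw = ('[{"id":"ytc_UgwT8HeoSaaDrK7TITR4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"unclear","emotion":"indifference"}]')
records = validate_batch(raw)
print(records[0]["responsibility"])  # ai_itself
```

Rejecting the whole batch on a single bad value is a deliberately strict choice; a production coder might instead map unknown values to `unclear` and log them for review.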