Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Having knowledge doesn't automatically make a person smart. Newton, Tesla, and Einstein mostly had the proper knowledge of science and physics and were able to create, inovate, and calculate. That's why they could be considered to be smart or geniuses. Nowadays, we have quite a few famous people who do have a lot of knowledge, but no smarts.
Because the knowledge that they accept is highly questionable. So, if you use bad data, then your results and conclusions will always be incorrect.
But heah, if it supports your idiotic narrative, well then, I guess it's ok, right?
I see us sliding down a slippery slope, heading for a soupy concoction of stupidity.
We live in a society that shuns the smart people and praises the dumb ones.
Since AI has no feelings or a conscience, where does its motovations come from to do anything beyond its programming?
Platform: youtube
Topic: AI Governance
Posted: 2025-09-10T13:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | virtue |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzQgX1fy86QB4Rmbyp4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwbtsOP-2bk96utyFV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwQn1s1_dAtld4AcRV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyHle1ukcaqzMOkMqh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgztToBUc5VVNHIgNh14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugywy59jwYrGAARp-2V4AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwNanLtri5pVUDEBCZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_UgzzLBu5TcIXQ9cm5EV4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzpdFFUkIMXZwuQduh4AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyJQFF4QyvVcsMl4OB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
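The raw response above is a JSON array with one object per coded comment, each carrying the four coding dimensions plus an `id`. A minimal sketch of parsing and validating such a batch — the allowed category values are inferred from the codes visible on this page, not from a published codebook, so treat the `ALLOWED` sets as assumptions:

```python
import json

# Category values observed in the responses on this page (assumption,
# not an official schema; extend as the real codebook dictates).
ALLOWED = {
    "responsibility": {"government", "developer", "company", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "ban", "liability", "industry_self", "unclear"},
    "emotion": {"outrage", "approval", "fear", "indifference", "resignation", "mixed"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}, dropping invalid rows."""
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        codes = {dim: row.get(dim) for dim in ALLOWED}
        # Keep a row only if it has an id and every dimension is a known value.
        if cid and all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[cid] = codes
    return coded

# Hypothetical single-row batch for illustration.
raw = ('[{"id":"ytc_example","responsibility":"unclear",'
       '"reasoning":"virtue","policy":"unclear","emotion":"mixed"}]')
result = parse_batch(raw)
```

Dropping (rather than repairing) rows with out-of-vocabulary values keeps the coded dataset clean; rejected IDs can then be re-sent to the model in a retry batch.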