Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I enjoy your interviews, usually. :)
One thing I would like to point out, is Bitcoin. If super intelligence takes over the world, how does Bitcoin remain apart from it? It wouldn't and thereby be worthless, no?
Second, AI is growing exponentially, I see that. I'm curious how we get to super intelligence when these systems are based on human knowledge and text?
Many times AI gets it wrong, even on the latest models, for basic things in my career. I correct it, and another account asks the same question and gets the same nonsense answer. At some point you still have to get past these mistakes. These models are built on imperfect information at their base.
I admit, I don't know everything, but this feels like a basic problem that should have already been fixed. I correct ChatGPT all the time, daily.
Platform: youtube · Topic: AI Governance · Posted: 2025-10-28T20:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugxd28-nsbZS8iTkv2R4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxGCz4z9Ubs7ngm_t94AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgybJfqi6ntWWPpIjip4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzKW2DrgCuGKtzx-up4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwGrmfyXf-CY_fwwFV4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugx_m41Vxg7Wl6n0SmZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxKE5q6vqoiFW7r2AF4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz3pF2ltKMmXLYrgGJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxXsLVuB2GJ4NZ7eqp4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyS197qz3bxxjU4cDZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
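A raw response like the one above can be parsed and indexed by comment ID before the per-comment table is rendered. The sketch below is a minimal illustration, not the page's actual implementation: it assumes the model returns a JSON array of objects carrying exactly the four coding dimensions plus an `id`, and the `parse_codings` helper name is hypothetical. The two embedded rows are copied verbatim from the response above.

```python
import json

# Two rows copied verbatim from the raw LLM response above (illustrative sample).
raw_response = """[
  {"id":"ytc_Ugxd28-nsbZS8iTkv2R4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzKW2DrgCuGKtzx-up4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]"""

# Every coded row must carry the comment ID plus the four dimensions
# shown in the Coding Result table.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(text):
    """Parse a raw LLM response and index the codings by comment ID.

    Raises ValueError if the payload is not a JSON array of objects
    with the expected fields, so malformed model output fails loudly
    instead of silently producing an empty table.
    """
    rows = json.loads(text)
    if not isinstance(rows, list):
        raise ValueError("expected a JSON array of codings")
    by_id = {}
    for row in rows:
        missing = REQUIRED_FIELDS - row.keys()
        if missing:
            raise ValueError(f"row {row.get('id')!r} is missing {sorted(missing)}")
        by_id[row["id"]] = row
    return by_id

codings = parse_codings(raw_response)
print(codings["ytc_UgzKW2DrgCuGKtzx-up4AaABAg"]["policy"])  # → ban
```

Indexing by ID is what makes a comment-level lookup cheap: once the dictionary is built, fetching the coding for any displayed comment is a single key access rather than a scan of the array.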