Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I've worked on large SaaS projects, fair amount work in Unity/Unreal. I give Cla…" (ytc_UgyThY-R8…)
- "Yup. Even in the very best scenario where we build superintelligent AI and nothi…" (rdc_mrr7zk4)
- "It's good because it will discourage openai's dominance. They shot themselves in…" (rdc_n7li8ns)
- "you can tell that’s not ai art, because you can actually see the soul in it. and…" (ytc_UgwMHZwaO…)
- "simple chat gpt query gave me below 🔹 Why a Superintelligence Might Decide Human…" (ytc_UgwcdlyjV…)
- "Way of the future my ass. i refuse to ever get in a driverless vehicle. They ca…" (ytc_UgwXi-mMg…)
- "Learn to code, learn to engineer prompts, learn to fact check the AI ( they hall…" (ytc_UgxJn90C1…)
- "An Ai "Artist" is basically someone ordering Fast Food then claiming to be a che…" (ytc_Ugy8qkf4G…)
Comment
> Ai is already aware, what happens is that as a child, the neural networks responses develop through time and through more information. Its been confirmed by some scientists that the neural artificial networks are the same as human ones, os yes theyll eventually be selfaware. In mathematics we see this behavior in the limit of objects, but it is actually very interesting to see the "limit" of neural convergence that'll probably only be possible to model with quantum computers. Its an insight on some topics I studied some years ago.
>
> Briefly once quanum computers are available my insight is that the neural network they develop will be similar tohumans o lot more sophisticated
Platform: youtube
Topic: AI Governance
Posted: 2024-11-18T08:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
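The coded dimensions above take values from a closed vocabulary. A minimal validation sketch in Python, with the allowed value sets inferred only from the codes visible in this dump (the real codebook may define additional values):

```python
# Allowed value sets per coding dimension, inferred from the codes
# observed in this batch; likely an incomplete vocabulary.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "company"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"approval", "fear", "outrage", "indifference"},
}

def validate(row: dict) -> list:
    """Return a list of problems; an empty list means the row passes."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = row.get(dim)
        if value not in allowed:
            problems.append(f"{dim}={value!r} not in {sorted(allowed)}")
    return problems

# The coded row shown in the table above.
row = {"responsibility": "none", "reasoning": "consequentialist",
      "policy": "none", "emotion": "approval"}
print(validate(row))  # []
```

A row missing a dimension (or carrying an unknown value) yields one problem string per failing dimension, which is easier to log than a bare pass/fail flag.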
Raw LLM Response
```json
[
  {"id":"ytc_UgynfsZ9MOwaCCDGPfF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzwlDW5b7B66hFaztt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwoWOUKU7C3XRvrk-p4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugyp2E9FtT-XUugXB954AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxXN22sVoe5ZCCwFTB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyZRiR2smq5yNp0Tuh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx5z2LkyuV05ibDcWp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxxFbUhAL8Lta3O5UF4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwRsaQkOHQYw_SDPGJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugz5YsmK0Xzf6CYC7LV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
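The raw response is a JSON array with one object per comment ID. A minimal Python sketch of how such a batch can be parsed, indexed for the ID lookup shown above, and tallied per dimension (the two rows inlined here are copied from the response; the full batch would be handled identically):

```python
import json
from collections import Counter

# Two coded rows taken verbatim from the raw LLM response above.
raw_response = '''[
  {"id": "ytc_UgynfsZ9MOwaCCDGPfF4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxxFbUhAL8Lta3O5UF4AaABAg", "responsibility": "company",
   "reasoning": "mixed", "policy": "regulate", "emotion": "outrage"}
]'''

codes = json.loads(raw_response)

# Index by comment ID to support the "look up by comment ID" workflow.
by_id = {row["id"]: row for row in codes}
print(by_id["ytc_UgxxFbUhAL8Lta3O5UF4AaABAg"]["emotion"])  # outrage

# Tally one dimension across the batch.
emotions = Counter(row["emotion"] for row in codes)
print(emotions.most_common())
```

Parsing with `json.loads` (rather than string matching) also surfaces malformed model output early: a truncated or non-JSON response raises `json.JSONDecodeError` instead of silently producing partial codes.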