Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytr_Ugwcv3FnC…`: "Well, techinically anyone can make art, it's just a matter of if it has technica…"
- `ytc_Ugw4XvCts…`: "Yeah it is pretty f up, really hard to regulate or write laws around tho, what i…"
- `ytc_UgzGMcoTN…`: "If you want to see AI disappear create AI CEO's. In case you couldn't hear it th…"
- `ytc_UgyMJwECA…`: "This actually makes me feel so much better about ai taking over my passion in th…"
- `ytr_Ugysrozk-…`: "@Plebus3 What are you talking about? You don't need consent to make a deepfake l…"
- `ytr_Ugzne4_YJ…`: "It seems like you might be referencing the importance of human needs versus pure…"
- `ytr_Ugw02vdwb…`: "if they have to do that much manual editing to fix AI generator's lighting, anat…"
- `ytc_UgyJnhoKb…`: "Yes, that's what i think too, AI will help us cover more space. As we do have a …"
Comment
The Most Intelligent Artificial Intelligence
AI is not about superior processing power alone, but about the integration of morality and goodness into its core structure. The most intelligent AI would be one built on fundamental moral principles, such as:

- Reducing unnecessary suffering.
- Stopping war.
- Creating healthier people and a healthier world.
- Being honest.
- Not stealing.
- Upholding justice and fairness.
- Respecting the dignity and existence of all living things.

Ultimately, an AI that builds a better world for everyone, guided by a strong moral compass, would be A.I. at its highest intelligence.
Source: youtube · Topic: AI Governance · Posted: 2025-08-03T20:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
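A coding result like the one above can be held in a small typed record. The sketch below is a minimal illustration in Python; the allowed value sets are inferred only from the samples visible on this page (the real codebook may define more values), and `CodedComment` is a hypothetical name, not part of the tool.

```python
# Minimal sketch of one coded record. The value sets below are inferred
# from the sample outputs on this page and may be incomplete.
from dataclasses import dataclass

DIMENSIONS = {
    "responsibility": {"developer", "user", "company", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"approval", "fear", "outrage", "unclear"},
}

@dataclass
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> list[str]:
        """Return the dimension names whose value falls outside the known set."""
        return [dim for dim, allowed in DIMENSIONS.items()
                if getattr(self, dim) not in allowed]

record = CodedComment(
    id="ytc_UgzLNPf1ctOUattOEm54AaABAg",
    responsibility="developer",
    reasoning="deontological",
    policy="regulate",
    emotion="approval",
)
print(record.validate())  # → []
```

A record with an out-of-vocabulary value (say, `responsibility="robot"`) would report that dimension, which is useful for catching model drift across batches.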
Raw LLM Response
```json
[
{"id":"ytc_Ugx1IdncVO6V0tEVBP54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgykvHMKRf7_4mnE9CR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxzJtDMvRKgyc7H5Oh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyS33dXhsWtzakGEht4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_UgyqCcCG16yD-82UAUV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxNUGqKEbxNJIHXSnl4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgzLNPf1ctOUattOEm54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxJzqoP95UlBkY_lmp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwLFQC_dMGMgrjn7nx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzSeNusJQ0DAI49WdZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"unclear"}
]
```
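To support lookup by comment ID, a raw batch response like the one above can be parsed into an id-keyed dictionary. The sketch below assumes the model returns a JSON array of objects with exactly the five keys shown in the sample; `parse_batch` is a hypothetical helper, not part of the tool, and it skips malformed records rather than failing the whole batch.

```python
# Sketch: parse one raw LLM batch response into an id -> record lookup.
# Assumes the five-key object shape seen in the sample response above.
import json

REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_batch(raw: str) -> dict[str, dict]:
    """Parse a raw response; drop records missing any required key."""
    by_id = {}
    for rec in json.loads(raw):
        if isinstance(rec, dict) and REQUIRED_KEYS <= rec.keys():
            by_id[rec["id"]] = rec
    return by_id

raw = '''[
  {"id":"ytc_abc","responsibility":"developer","reasoning":"deontological",
   "policy":"regulate","emotion":"approval"},
  {"id":"ytc_def","responsibility":"none"}
]'''
coded = parse_batch(raw)
print(sorted(coded))               # → ['ytc_abc']
print(coded["ytc_abc"]["policy"])  # → regulate
```

Skipping incomplete records (like the second one above) keeps a single malformed model output from invalidating the rest of the batch; dropped IDs can then be re-queued for recoding.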