Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "When they say God-like AI that's when God will punish planet earth, Read Revela…" (ytc_UgzVCn7kB…)
- "In the 1980’s the buzzword that was going to replace all of us was Robotics by t…" (ytc_Ugx8HqC9M…)
- "Its not a real ai, it is a computer designed to respond with preprogrammed answe…" (ytc_UgwoOnjX9…)
- "AI should not be used in the direction of getting reports, instead it should bab…" (ytc_Ugz-_F_sT…)
- "humans need to be the masters not the reverse - do not WORSHIP technology - it i…" (ytr_Ugxlenlpn…)
- "Thank you for your observation! Sophia's movements may remind you of the Chucky …" (ytr_UgwppvNz9…)
- "If people dont have money because there is no more job. Who do they think will…" (ytc_UgyzMS4xI…)
- "BRAVO ÉLYSE, INCREDIBLE RESEARCH WORK, I LISTEN TO YOU FROM QUEBEC, CA…" (ytc_UgzBppIgC…)
Comment
Humans must agree to limit the architecture of Agentic AI systems to those with READ-ONLY GOALS. Goal should never be modifiable by AI. The AI systems should operate in a loop that checks every "thought", every plan and every interaction against its Read-only goals, set by humans.
AI Goal setting (and control) by humans will be the single most critical aspect of AI Engineering, as failure in this area can lead to extinction.
youtube · AI Governance · 2025-08-13T05:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |

Coded at: 2026-04-27T06:26:44.938723
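For reference, the four dimensions and their labels can be captured in a small schema. The following is a minimal Python sketch: the label sets are inferred from the codings visible on this page and the real codebook may define additional values, and `Coding` is an illustrative name rather than part of the actual pipeline.

```python
from dataclasses import dataclass

# Label sets inferred from the sample codings shown on this page;
# the actual codebook may define additional values.
RESPONSIBILITY = {"none", "developer", "company", "distributed", "ai_itself"}
REASONING = {"unclear", "mixed", "deontological", "consequentialist", "virtue"}
POLICY = {"none", "regulate", "liability"}
EMOTION = {"indifference", "approval", "mixed", "outrage", "resignation", "fear"}

@dataclass
class Coding:
    """One coded comment: an ID plus four categorical dimensions."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        # Reject labels outside the known sets instead of storing them silently.
        checks = [
            ("responsibility", self.responsibility, RESPONSIBILITY),
            ("reasoning", self.reasoning, REASONING),
            ("policy", self.policy, POLICY),
            ("emotion", self.emotion, EMOTION),
        ]
        for name, value, allowed in checks:
            if value not in allowed:
                raise ValueError(f"unknown {name} label: {value!r}")
```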
Raw LLM Response
```json
[
{"id":"ytc_UgzHbyHr8BQmKOJI_8t4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyfv3_yck0fEbd-vIl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugzb1_gtmPpOHb6sXWd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxlaLVtkoMqseLSwN94AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugwx6T6j_PG_4hmoZ2x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugxmen0r82zywpa0aT94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzhNbZtrih6h9sxn1Z4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwOx40P27mm7BJIWAt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxvatfpCv0Y9hZ4x1t4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxFudW6sfQhYS5ANwx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
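To show how such a raw response could be consumed, here is a minimal Python sketch that parses the batch and serves the "look up by comment ID" view above. `lookup_coding` and `raw_response` are hypothetical names, not part of the actual tool.

```python
import json

def lookup_coding(raw: str, comment_id: str) -> dict:
    """Parse a raw LLM batch response and return the coding for one comment."""
    codings = json.loads(raw)  # the model returns a JSON array of coding objects
    by_id = {c["id"]: c for c in codings}
    if comment_id not in by_id:
        raise KeyError(f"no coding found for {comment_id}")
    return by_id[comment_id]

# Example: fetch the coding rendered in the table above.
# lookup_coding(raw_response, "ytc_Ugyfv3_yck0fEbd-vIl4AaABAg")
# -> {"id": "...", "responsibility": "developer", "reasoning": "deontological",
#     "policy": "regulate", "emotion": "approval"}
```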