Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples (click to inspect):

- What happen to open source philosophy? When you don't know what actually running… (ytc_UgyCe0LoR…)
- Tech Bros: "I bet if we say AI will replace most coders, we can cut headcount an… (ytc_UgzzcQju1…)
- AI art generation is yet to turn a profit. its only upheld by investor injection… (ytr_UgzdPvuEc…)
- You should all watch gossip goblins videos on humanities' destiny involving AI. … (ytc_UgynqZ-ni…)
- 53:40 The solution is, you can't have cooperation without the option to not coop… (ytc_UgzmTc702…)
- Everyone should be using ai to make their job streamlined and more productive, n… (ytc_Ugwe6__AG…)
- We need to switch to an economic system that's purpose is to care for human life… (ytc_UgwaODFC3…)
- in digital graphics, im working on a snake game that has a lot of stuff using ge… (ytc_UgyPz_dtm…)
Comment

> Decision Engines need to be incorporated into ALL artificial intelligence / machine learning. It is an audit trail to uncovering why an inference was made over another with root sources for logic. Its the big fail of GPT / Bing right now. It can create fake answers - ghosting. With a decision engine it would have a chain of logic as to what led up to the decision. Even GPT knows it needs one. Just ask.

youtube · AI Governance · 2023-08-06T22:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyCpTxsAB4pHDqvYUh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyMZwNMe5E5lybxC9V4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwQMSUp6D3OpMO9yfh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgzAeGg0QeFT4Ujc0Ep4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxVYRVA_34r6oiNHyB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugzl9k3eE2xswnaYePJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwnPmRYF5vSRXFnBMZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwDUV_Xyv9YKtOBFQ54AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwtVlRmfkMWfaTMTW14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzXThTvEfSBzbpAvnN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
```
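A batch response like the one above has to be parsed and checked before the codes are stored, since an LLM can return malformed JSON or invent category labels. Below is a minimal validation sketch. The allowed value sets are an assumption inferred only from the labels visible in this dump (the real codebook may define more categories), and `validate_batch` is a hypothetical helper, not part of any library.

```python
import json

# Allowed values per coding dimension, inferred from the output shown
# above -- an assumption; the actual codebook may include more labels.
SCHEMA = {
    "responsibility": {"developer", "company", "government", "user", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "virtue", "contractualist"},
    "policy": {"none", "regulate", "ban", "liability"},
    "emotion": {"approval", "fear", "outrage", "mixed", "indifference"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response; keep only records with a comment ID
    and an allowed label for every coding dimension."""
    records = json.loads(raw)  # raises ValueError on malformed JSON
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid

# Example: one well-formed record passes, one with an unknown label is dropped.
raw = ('[{"id":"ytc_example1","responsibility":"developer","reasoning":"deontological",'
       '"policy":"regulate","emotion":"approval"},'
       '{"id":"ytc_example2","responsibility":"alien","reasoning":"virtue",'
       '"policy":"none","emotion":"fear"}]')
print(len(validate_batch(raw)))  # 1
```

Rejected records can then be queued for re-prompting or manual review rather than silently written to the results table.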