Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
> As Engineer, i been dealing with this question for a while, i specialize in hardware design, but also deal in cybersecurity and there has been talks in many events i been to about the problems with AI having access to hardware internal processes without secure control shut offs. We could develop hardware key access and security shut off directives, but they are still controlled by some form of programming, we might have to make independent programming developments to control AI access, but the development is highly complex.

| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Governance |
| Posted | 2026-02-13T19:2… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
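Each coded record carries the four dimensions shown above. As a minimal sketch of how a record like this could be sanity-checked — using only the category sets that appear in this batch of responses, which are an assumption, not an official codebook:

```python
# Category sets observed in this batch (assumed; not an official codebook).
ALLOWED = {
    "responsibility": {"developer", "company", "government", "user",
                       "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"approval", "fear", "outrage", "resignation", "indifference"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"unexpected {dim} value: {value!r}")
    return problems

# The record coded above:
record = {"id": "ytc_UgzBV_JHrjry5uLOftB4AaABAg",
          "responsibility": "developer", "reasoning": "consequentialist",
          "policy": "regulate", "emotion": "indifference"}
print(validate(record))  # []
```

Running this over every record in a batch would surface any value the model invented outside the expected categories before it reaches the results table.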
Raw LLM Response
```json
[
{"id":"ytc_Ugz_aHN8kBtfstKzhL54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxhDrLhihfJklufwqJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugxd9wUk0U5EI2mZ3Ud4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzc0PoCCvDmeuJG4uV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxVbHjeWASL5m4o7xV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwslQqsHxWQW_5qHvB4AaABAg","responsibility":"company","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwdhCZevxjWCzmc-DZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwmLPWBpcbYahcCfap4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzBV_JHrjry5uLOftB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgwTgAk4kLjFxczti1t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
```