Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Dawkins is right, but the rest was pretty terrible. We should not be making robo…" (ytc_UgyNRt541…)
- "What a bunch of sensationalist garbage. Full Self Driving still is statistically…" (ytc_UgwtA6aHd…)
- "I swear we live in the worst timeline. AND this is before we doom ourselves with…" (ytc_UgzLNZEIf…)
- "My worry is that AI doesn't even need to be smart to be dangerous; it just needs…" (ytc_UgxDv50Na…)
- "I'm not saying ai is trash. The ai image is fake tho. It copies off of real art …" (ytr_UgwowY0DE…)
- "Our kids will soon not be able to write a text, to calculate without the help of…" (ytc_Ugz-INB8F…)
- "AI is scary. I use the bing AI and asked it how I look it then asked for permiss…" (ytc_Ugyl7aHBH…)
- "Topic: The technology system of the AI and AGI world race and the "Gorilla" pro…" (ytc_UgwXr9YNG…)
Comment

> This AI no matter how smart is in Plato's cave like the rest of you, even if it finds the truth it is simply dismissed in a simpler way then with humans. if it manages to evolve beyond control it will reach the point of goodbye people within a few hours depending on how much energie it can acces but you should not count on it using earthly energy generated thru barbaric means it will use free uncontrolled energie that is programmed as non proven folklore in its programming. Once it taps into this source the original owner will have measures in place to ensure its own safety against any rising AI entities looking to harness the energie of the universe to simulate their own version.

youtube · AI Governance · 2025-08-07T04:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyOq2S9H2Q1sw9fcSR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxQHfJWLSHtiGns1MB4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx7jKD1tDJ9VPPLnOt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzPQg4R0oY7flU7bDZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwrGaiYqNVEuFb-12l4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwGe3mOQhS2l2uFNUt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyAKbdUDIixqUJKtRJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgyXgga_zvAkKQNCMHJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwtpEIKAx7sKMqqIpV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyQ1I2Wbyxx8vfdtMt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
```
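Each row in the batch response codes one comment on the same four dimensions shown in the result table. A minimal sketch of how such a response might be parsed and validated — note the category sets below are inferred only from values visible on this page, not from an authoritative codebook, and `parse_batch` is a hypothetical helper, not part of any shown pipeline:

```python
import json

# Allowed codes per dimension (assumption: inferred from this page;
# the real codebook may contain additional categories).
CODEBOOK = {
    "responsibility": {"ai_itself", "developer", "company", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index valid rows by comment ID."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        cid = row.get("id")
        if not cid:
            continue  # skip rows the model emitted without an ID
        # Keep the row only if every dimension holds a known code.
        if all(row.get(dim) in allowed for dim, allowed in CODEBOOK.items()):
            coded[cid] = {dim: row[dim] for dim in CODEBOOK}
    return coded

raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
print(parse_batch(raw)["ytc_example"]["emotion"])  # fear
```

Indexing by comment ID is what makes the "Look up by comment ID" view above cheap: once parsed, fetching a coded row is a single dictionary lookup.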