Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "At that point what's stopping fanatics hundred years into the future to recreate…" (`rdc_lub8e3u`)
- "data minig locations huge noise huge electricity pits, driverless cars etc.... A…" (`ytc_UgxRujLch…`)
- "I think that before A.I. destroys us, it will become the bridge between what is …" (`ytc_Ugzqb5q-H…`)
- "the most ironic part is the fact that these \"artists\" can't even own the \"art\" t…" (`ytc_Ugw3kGLgf…`)
- "What he's describing this global AI sounds like skynet to me and anyone who has …" (`ytc_Ugw_bjtVI…`)
- "Show Me How... I'm going to slightly adjust what I say to people when they ask a…" (`ytc_UgwfHF7sG…`)
- "Imagine, you're a real woman and you see a robot who looks better than you and i…" (`ytc_Ugxaj_FJk…`)
- "12:17 those tar pits really be hittin A.I. with that “A labyrinth of sounds and …" (`ytc_Ugw-5krt9…`)
Comment
Bernie, I think we're too late for this, given that we are in a race over who will develop the smarter AI model and AI agent. The stakes of geopolitical supremacy far exceed the repercussions associated with domestic policy; as a result, we have missed the opportunity to incorporate safeguards into these AI models and agents. The genie is out of the bottle, so to speak, and it's not going back in. Even if the US were to stop and incorporate safeguards and ethical constraints, including socioeconomic policies, we are unfortunately not in a position, from a geopolitical standpoint, to do so. It's like watching a train wreck.
youtube · AI Jobs · 2025-11-27T18:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwEPkBdGBgS7kE__xl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugyq7yYmz6JlYwmWo2t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxvyHFbxZcP9P-FDIV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxN7pLPJ14Q4Naedux4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyXYilsPGlHZBHpEYZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxHGYq0cKwp69JFt094AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyrK9aJ5A3s5AIj5C14AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx-sfH8qmV-bKWwnPt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgzkHI1dcSiV7Hm_WAl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgzqlwJnf3hy3ClACR14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
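A response like the one above is machine-readable, so a small validation step can catch rows where the model drifted outside the codebook. The sketch below is a minimal, hypothetical example, not the project's actual pipeline: the `ALLOWED` sets are inferred only from the values visible in this sample, and the real codebook may define more.

```python
import json

# Allowed values per dimension, inferred from the codes visible in this
# sample response; the project's actual codebook may define more values.
ALLOWED = {
    "responsibility": {"government", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "none"},
    "emotion": {"outrage", "fear", "resignation", "indifference", "mixed", "approval"},
}

def parse_coding(raw: str) -> list[dict]:
    """Parse a raw LLM coding response (a JSON array of row objects),
    keeping only rows whose values fall inside the allowed sets."""
    rows = json.loads(raw)
    return [
        r for r in rows
        if all(r.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

good = '[{"id":"ytc_a","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]'
bad = '[{"id":"ytc_b","responsibility":"everyone","reasoning":"unclear","policy":"none","emotion":"fear"}]'
print(len(parse_coding(good)), len(parse_coding(bad)))  # 1 0
```

Dropping invalid rows (rather than repairing them) keeps the check simple; a real pipeline might instead log them for re-coding.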