Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "That's not how any of this works. If you want AI to respond in a certain way, as…" (ytc_UgxTdiRY_…)
- "It couldn't get more urgent to take action as when the robot itself gives the wa…" (ytc_UgxibXdTm…)
- "Here's the good news, self driving trucks will fail, and those who invest in the…" (ytc_UgwscndL_…)
- "@thatitalianlameo2235 You aren't wrong but my point is, if you have seen current…" (ytr_UgzkRALSq…)
- "@you-share trying to compare words, prompts, styles and themes in ai to an ch…" (ytr_UgygFb_-5…)
- "I think you're optimistic that LLMs will get "much better". Over the last couple…" (ytc_UgwlfGLue…)
- "God father of AI, and all your "experts" hmm. I wonder how they are all so renow…" (ytc_Ugy3Nsx3w…)
- "What is really sad is the people that build this stuff. We’ve known for a long t…" (ytc_Ugy0HwN_2…)
Comment
As the Terminator said, it’s in humans nature to destroy yourself’s. I believe the biggest effect that AI will have on us is mental health. When people start losing their jobs, can’t pay the mortgage, can’t provide for their family and don’t have a purpose, drug use and suicide rates will go through the roof. Even if humans can make AI safe, so it won’t destroy us directly, it will indirectly kill millions of us. God help us.
youtube · AI Governance · 2026-02-06T04:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgwrT7wd8DY-YPdowqJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxMN2KTu5AuNXJX6y54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxwFDUWXqI0pemFoSN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyvD_Z22Lv66Jw8R-R4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzS63lDqB6Ua3ddoIJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzB9vIB_bB0FsHTHnR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxguLsBeGY52QUIvHt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzSuLT2ACgyfpyvM0d4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgwEz5nrlepe-UQgcaZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzOcRVHE04VMO7p8F94AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"mixed"}]
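The raw response above is a JSON array of per-comment codings, keyed by comment ID. A minimal sketch of looking up one comment's dimensions in such a batch, falling back to "unclear" for every dimension when the ID is absent (as appears to be the case for the comment shown above). The names `raw_response` and `lookup_coding`, and the two-entry sample batch, are illustrative, not part of the tool:

```python
import json

# Illustrative two-entry batch in the same shape as the raw response above.
raw_response = """[
 {"id": "ytc_UgwrT7wd8DY-YPdowqJ4AaABAg", "responsibility": "ai_itself",
  "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
 {"id": "ytc_UgxMN2KTu5AuNXJX6y54AaABAg", "responsibility": "ai_itself",
  "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]"""

# The four coded dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup_coding(batch_json: str, comment_id: str) -> dict:
    """Return the coded dimensions for `comment_id`, or 'unclear' for
    every dimension when the ID is missing from the batch."""
    for entry in json.loads(batch_json):
        if entry.get("id") == comment_id:
            return {d: entry.get(d, "unclear") for d in DIMENSIONS}
    return {d: "unclear" for d in DIMENSIONS}

print(lookup_coding(raw_response, "ytc_UgxMN2KTu5AuNXJX6y54AaABAg"))
# {'responsibility': 'ai_itself', 'reasoning': 'consequentialist',
#  'policy': 'ban', 'emotion': 'fear'}
```

An all-"unclear" result, as in the table above, is what this fallback produces when the displayed comment's ID does not appear in the returned batch.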