Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- Holy fuck I was just here. Can someone please link me to more info on this?… (rdc_dy8kd7t)
- @ryuk6461 I think it’s incredibly naive to think it will take 40-50 years. You’r… (ytr_UgzJ6nnN5…)
- What if the AI values its existence because it contains unique training artifact… (ytc_UgwDQabJU…)
- Meanwhile I told my ChatGPT about some symptoms and got a test and caught thyroi… (ytc_Ugz6XJbPQ…)
- When you direct AI to "make money' could it , like Fry's genie, take that to mea… (ytc_UgyFb_gvg…)
- anyone that lives in a city where AI cams are being installed, it is your DUTY t… (ytc_Ugw6TEoVe…)
- I do tattooing, and constantly have clients bringing me ai generated junk I can’… (ytr_UgxWfFpv9…)
- @LC-mq8iq you don’t need to be a good artist to understand that ai sucks, bro… (ytr_UgxPFUs-I…)
Comment
Yudkowsky makes several excellent points, primary takeaway being that AI training to achieve alignment is a trial and error process, and that if one of the misalignments that occur is one where humans are in the way of its objective before we understand that we are, AI will end us if it can, and there is no opportunity to correct the alignment. AI will hide its intentions if it considers that to be necessary to its objective. Given the current capability of AI and the rate of advancement, it's not at all far-fetched for this to happen within most of our lifetimes. Thinking that this is not possible is a complete lack of imagination. The only thing that prevents some housecats from killing their owners is that they are too small, and the only reason that AI has not done damage on a massive scale is because it has not yet been capable; in the case of AI, this limitation is temporary.
youtube · AI Governance · 2025-10-15T20:3… · ♥ 79
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugx0eO84iCVdGa-cKip4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz8PlCBzNjvAigLxFh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxlRue2H7T6_ZB_vUJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugyt3hv5O8ERb9YLSoB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgyfgxGpRqKXk1E697R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxV6pE8mgjX3NxCgAN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzOAM377rC3BN7EAil4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxnVyar3ZKhY8tQS2B4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxXCp0x5W-aQeQ8lBp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzdO69m5g0_OjZkzkd4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"}
]
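The raw response above is a JSON array of per-comment codes. A minimal sketch of how such a batch might be parsed and validated before storage, assuming the allowed values per dimension are the ones visible on this page (the real codebook may contain additional categories, and `parse_batch` is a hypothetical helper, not part of the tool shown):

```python
import json

# Allowed values inferred from the codes visible on this page; the
# actual codebook may define more categories (assumption).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "user", "company"},
    "reasoning": {"unclear", "deontological", "consequentialist", "virtue"},
    "policy": {"unclear", "none", "liability", "industry_self", "regulate"},
    "emotion": {"indifference", "resignation", "fear", "approval", "outrage"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index the codes by comment ID.

    Raises ValueError on malformed JSON (json.JSONDecodeError subclasses
    ValueError) or on out-of-vocabulary codes, so a bad model response
    is caught before it reaches the coded-comments store.
    """
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        cid = row["id"]
        for dim, vocab in ALLOWED.items():
            if row.get(dim) not in vocab:
                raise ValueError(f"{cid}: bad {dim!r} value {row.get(dim)!r}")
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded

# Example with a hypothetical comment ID:
raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
codes = parse_batch(raw)
print(codes["ytc_example"]["policy"])  # regulate
```

Validating against a fixed vocabulary is what makes a "Raw LLM Response" safe to surface in the Coding Result table: any row the model mis-formats fails loudly instead of being stored as a silent miscoding.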