Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- well chatgpt-5 couldnt even diff 2 text files today for me correctly so i think … (ytc_UgwGfG1aa…)
- AI would make you extinct by removing your access to resources that it wants mor… (ytc_Ugyvvu40h…)
- Enjoyed this, Shelby! I visited San Fran last month and rode in a Waymo for the … (ytc_Ugw0kf6IO…)
- AI generated art could never give me the fulfillment making my own art with my s… (ytc_UgwEQDJiG…)
- This country is a joke. China had advanced ai before us and you dont see their c… (ytc_Ugy3wVuSG…)
- Do some damn research mr cool boy with the cool boy beard AI should scare the s… (ytc_UgylRTsUn…)
- While these models can seem eerily clever and creative, they are essentially jus… (ytc_Ugy4g9JhM…)
- OK, whatever the negative I don’t give a damn I can’t afford therapy . I had an … (ytc_UgytcrKEQ…)
Comment
Perhaps our thinking on this is way too anthropomorphic: an Ai that is capable of eliminating our species, is also capable of self sustainability without any concern for the plight of humanity. It would be similar to third contact with a interstellar-capable intelligence (biological, digital, or otherwise). Mere humans will be so out-classed by such an Ai that we probably wouldn't know that the Ai was in control. Wouldn't this be the smart thing for the Ai to do? At most, our efforts would be as effective as us getting a minor case of a flu. Assuming we even realize that there is an Ai in control, any resistance we could provide would be trivial and serve only to create "anti-bodies" to be re-used against similar future efforts. Well, now that I've told it what to do, I guess we'll never know.
youtube · AI Governance · 2025-10-24T14:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzuloiXX9NyhPcCerp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxoKATJs_-p_pyisyd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxkMTZpL3o1OVxgbYB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyN8lUbmNWdk2dffs14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwBhtnqoukhPTl8FSd4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzzTOouq1je9BWmqSB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugx_0e9quQvUALEUqVt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwf5wLWAQ5s-arN28B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgygEzIeTg02bEQwoYt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwRD3gum62tfJxg5lh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"}
]
```
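The raw response above is a JSON array of per-comment codings, one object per comment ID, with one value for each of the four dimensions (responsibility, reasoning, policy, emotion). A minimal Python sketch of how such a batch could be parsed, validated, and looked up by comment ID is below. The allowed value sets in `SCHEMA` are only the labels visible in this dump, not the full codebook, and `index_by_id` is a hypothetical helper, not part of any real pipeline:

```python
import json

# Two entries copied from the batch output above (truncated sample).
raw_response = """
[
  {"id": "ytc_Ugwf5wLWAQ5s-arN28B4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgzuloiXX9NyhPcCerp4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
"""

# Allowed values per dimension, inferred from the labels seen in this dump;
# the actual codebook may define additional categories.
SCHEMA = {
    "responsibility": {"ai_itself", "developer", "company", "user", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"ban", "regulate", "liability", "industry_self", "unclear"},
    "emotion": {"fear", "mixed", "approval", "outrage", "resignation", "indifference"},
}


def index_by_id(raw: str) -> dict:
    """Parse the model's JSON array and index validated codings by comment ID."""
    out = {}
    for row in json.loads(raw):
        # Reject any value outside the (assumed) codebook before accepting the row.
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
        out[row["id"]] = {dim: row[dim] for dim in SCHEMA}
    return out


codings = index_by_id(raw_response)
print(codings["ytc_Ugwf5wLWAQ5s-arN28B4AaABAg"]["emotion"])  # resignation
```

Validating before indexing matters because the model is free-generating JSON: a single mislabeled value (e.g. an emotion outside the codebook) should fail loudly rather than silently enter the coded dataset.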