Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I like the "idea" of AI art IF, BIG IF, it is set up with proper permission and …" (ytc_UgwL5p7vU…)
- "ITS NOT THE MOST ADVANCED DRIVING TECHNOLOGY.. WAYMO is FAR MORE ADVANCED THAN T…" (ytc_UgwqxGvgv…)
- "I highly doubt, I just bombarded ChatGPT with series of serious questions of cho…" (ytc_Ugw8SJ6xf…)
- "There's a nice video where one confronts an LLM-scripted NPC in Skyrim with him …" (ytc_UgxGOCJrP…)
- "love how hard they are pushing a unneeded and unwanted tech that will only harm …" (ytc_UgzHbnaoV…)
- "What do they do when there's an accident on the highway with a large driverless …" (ytc_Ugxakk0_R…)
- "Ok,so the robot Will work for us and they will pay our pensions! That's all frie…" (ytc_Ugzk0RS12…)
- "The one and only goal of AI should be to prop up Human-AI symbiosis in a Sociali…" (ytc_Ugw-iqGKZ…)
Comment
Still don't understand why AI would have any reason to turn against the whole of humanity.
Why would it bother with us? Who says it would kill every one of us and not go for specific humans?
What I mean by this is that with its intelligence it would realise how the world and the democratic system work, and thus target only the corrupted motherfuckers who started the war in the first place.
When you for instance see two cats, one's super aggressive and would claw you to bits, and then there's the other calm one, friendly even, would you see it as a threat?
So basically we don't know how exactly a superintelligence would think, because we quite literally cannot comprehend it.
Source: youtube, 2018-04-12T00:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgwIm_dRZr0_Uimi2kl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwhXV0iHf7A42bhkEB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzvlvnvgpcvgB4AXSx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyY9zLNl7E20JzEDnN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugzg8jTERfTec0xqbuV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyDGW-tG_p8s-_gQQd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzXvz05HiU8QWSFG9N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy26yf4GyBFElYfbsJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgymSMdpAgNXxQ1TLP14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw1-aTeed8ThuJ-sJZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
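The coding-result table above is simply the entry of this JSON array whose `id` matches the inspected comment. A minimal sketch of that lookup, using two entries copied from the response above (variable names are illustrative, not from the tool itself):

```python
import json

# Excerpt of the raw LLM response shown above (two entries, for brevity).
raw = """[
 {"id":"ytc_UgyY9zLNl7E20JzEDnN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
 {"id":"ytc_Ugy26yf4GyBFElYfbsJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"approval"}
]"""

# Index the parsed list by comment ID for O(1) lookup.
codes_by_id = {row["id"]: row for row in json.loads(raw)}

# Fetch the record for the comment inspected above; its values match
# the "Coding Result" table (responsibility: ai_itself, emotion: mixed).
record = codes_by_id["ytc_UgyY9zLNl7E20JzEDnN4AaABAg"]
print(record["responsibility"], record["emotion"])  # ai_itself mixed
```

The ID-keyed dictionary is what makes the "Look up by comment ID" view cheap: each coded batch is parsed once, then every inspection is a single dictionary access.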