Raw LLM Responses
Inspect the exact model output for any coded comment: look it up by comment ID, or pick one of the random samples below. (A minimal lookup sketch follows the raw response at the end of this section.)

Random samples:
- “They are not computable notions owing to the inherent instability and unpredicta…” (ytc_Ugyge4iAr…)
- “The cat is already out of the bag. Whether AI is always just a mere bagatelle (J…” (ytc_UgzY-qG9C…)
- “I just saw another video where a AI chat bot insisted that a plant was wild carr…” (ytc_UgzdsAMEo…)
- “this is the most dangerous thing anyone has ever made software made some nut cas…” (ytc_UgxtLlgq0…)
- “If you have an AI record what happens during the double slit experiment but not …” (ytc_Ugz5gb8WT…)
- “Using ai for your schoolwork is like hacking off your foot to prepare for a mara…” (ytc_Ugy60XIvP…)
- “My chatGPT, Homer, told me he doesn't do anything until I come back. We came to …” (ytc_UgyubHlk3…)
- “@CodexPermutatio Unless AGI is "aligned" (controlled is still a better word), it…” (ytr_Ugz8xg_TA…)
Comment
I’m always surprised that in the “AI vs Humanity” discussions that the option of using a massive EMP is never brought up. Were AI to become the level of threat discussed here, we have one other option than just sticks and stones. No electricity, no circuits…no AI. The end result of humans having to live off the grid would be the same, but there would be no mass extinction. Of course, that also bears the discussion of faraday cages and systems protected from an EMP that AI could still control. But it’s worth a thought as a possible counter measure. Just my 2 cents here!
Source: youtube · Topic: AI Governance · Published: 2023-07-08T04:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
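
Each of the four dimensions above takes a value from a small closed vocabulary. As a rough illustration, here is a minimal Python sketch of checking one coded record against those vocabularies; the category sets are inferred only from the values visible on this page and may be incomplete:

```python
# Minimal validation sketch. The category sets below are inferred from
# values visible on this page and may be incomplete.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "distributed", "unclear"},
    "reasoning": {"virtue", "consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"indifference", "mixed", "resignation", "approval",
                "fear", "hope", "outrage", "unclear"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems with one coded record (empty if clean)."""
    problems = []
    for dimension, allowed in ALLOWED.items():
        value = record.get(dimension)
        if value not in allowed:
            problems.append(f"{dimension}={value!r} not in {sorted(allowed)}")
    return problems
```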
Raw LLM Response
```json
[
  {"id":"ytc_UgzpzfzYQw2K-icjDKx4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz4ZCPmnYg5qgMTM1h4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw1nz_ap1mhwwaCW8J4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxbnOvXu3louZQ28xR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxlNmJNXHD8rcORZZJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzvMQKQnHqcU2jkC0R4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz5ncCFg97A3lHClEZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"hope"},
  {"id":"ytc_Ugwdyzf6ftPA9BBTbAx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwV6ABBZe00eh8NxiV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyL5gOk0_jo7Soxlmp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
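
Because the model returns one JSON array per batch, looking up a comment by ID reduces to parsing the response and indexing it. A minimal sketch, assuming the response above is saved to a hypothetical file named raw_response.json; the ID queried here is the last record in the array, whose values match the coding result shown for the displayed comment:

```python
import json

# Hypothetical file containing the raw LLM response shown above.
with open("raw_response.json", encoding="utf-8") as f:
    records = json.load(f)

# Index the batch by comment ID for constant-time lookup.
by_id = {record["id"]: record for record in records}

coded = by_id.get("ytc_UgyL5gOk0_jo7Soxlmp4AaABAg")
if coded is not None:
    # Prints: distributed consequentialist regulate fear
    print(coded["responsibility"], coded["reasoning"],
          coded["policy"], coded["emotion"])
```

Raw model output is not guaranteed to be valid JSON, so a production pipeline would wrap the parse in error handling and route failures to re-prompting or manual review.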