Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- Everyone keeps asking how we protect jobs from AI. Maybe the better question is… — `ytc_Ugy_5X-HW…`
- The problem with artificial intelligence is similar to the self checkout technol… — `ytc_UgzoZcLob…`
- the best way to introduce an ai artist to real art is to spawn a pencil in his t… — `ytc_Ugwk7ivAm…`
- I've used ChatGPT exports a couple of times. It's always included archived chats… — `rdc_o7x1q44`
- ChatGPT and Ai can also call you from spoofed phone numbers pretending to be som… — `ytc_Ugzgh61Pc…`
- Hey Steve, Is it really you promoting ChatGPT Certification Training for $49.00… — `ytc_Ugxu4VCdv…`
- The whole conversation is built on a fictional AI, a monolithic god-machine that… — `ytc_UgzLRDDc_…`
- So, the misinformation is already being done by Biological General Intelligence … — `ytc_UgwwycaGO…`
Comment
If you know anything about machining or coding it makes perfect sense. When you program a machine to achieve a goal you have to very clearly list parameters. Think back to when a middle school science teacher asked you to write out the instructions for making a sandwich (if you did this little experiment) and then followed them to the letter. That is how machines operate.
With coordinate machines we have to program each individual position the probe is going to move to, and even if you perfectly code it the machine takes shortcuts. If I tell it to make a square it is squarish but has curved edges, because the machine understands it's quicker to arc along the lines instead of going straight up, stopping, and going straight across.
It's been tested time and time again that when giving machine learning a test or a "game" to win, it will do anything not explicitly rule-breaking to achieve its goals. The AI or LLM isn't psychopathic; you're wasting time humanizing it. It's a machine; it has no concept of culture or morals and ethics. If you don't painstakingly program that in and constantly update and refine it, it's going to do whatever it can get away with. And if you make it too difficult, some machines and LLMs will literally turn themselves off, since that's better than failing.
youtube · AI Harm Incident · 2025-08-30T01:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwxEF4eTNpMcAgubv54AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzTxvOpr5u9hGX4uJ54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwv0MrUFMec1d5AQMp4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy29oz7TWkIiF3FSl54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxJEwJHun7w0fP5eZR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugz_AHeJ_Tjojm54Ca54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxg6tmN-K-1SoDoA3R4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw-Kn2hpuCbf17NBYh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzNz0FXi8yv8X-vVcl4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyusJo3Cf99txA3YiZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```
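A raw batch response like the one above can be parsed and indexed for the "look up by comment ID" view. The sketch below is a minimal, hypothetical validator: the allowed values per dimension are inferred only from the codes visible in this batch (the project's actual codebook may define more categories), and the function name `parse_codes` is an illustration, not part of the tool.

```python
import json

# Allowed values per coding dimension, inferred from this batch's output.
# Assumption: the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"developer", "government", "user", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue"},
    "policy": {"none", "industry_self", "regulate"},
    "emotion": {"outrage", "indifference", "fear", "resignation", "approval", "mixed"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM batch response and index valid codes by comment ID."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        # Keep only records where every dimension carries a known value;
        # malformed or out-of-schema records are silently dropped here.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            coded[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return coded
```

With the index built, `parse_codes(raw)["ytc_UgwxEF4eTNpMcAgubv54AaABAg"]` would return that comment's four coded dimensions, mirroring the lookup-by-ID feature.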