Raw LLM Responses
Inspect the exact model output for any coded comment. Comments can be looked up directly by comment ID; the random samples below offer a starting point, and a sketch of such a lookup follows the sample list.
- "First it will be our jobs, but here a secret, soon it will be their jobs to. Why…" (rdc_mrvpe41)
- "At this rate humans will be extinct! People need to stop talking about change a…" (rdc_fwgljv1)
- "Keylabulous* is spot on. I use chatgpt pretty much. 3.5 from openai. You could u…" (rdc_kjnr6yq)
- "The truly terrifying part of thus is that AI may very well live on long after hu…" (ytc_UgzLn8ipV…)
- "Great test. It's good to see that a cop can actually pull over FSD and get the …" (ytc_UgyNSfHun…)
- "15:35 This guy asserts that AI uses less power and water than humans. He gives…" (ytc_UgyEEzt4m…)
- "Disney just gave 1 billion dollars to an AI firm. This is a battle we are Not g…" (ytc_UgzzRPLS2…)
- "Programming will switch from a standardized programming language like C++ to pro…" (ytc_UgxiCxEBI…)

The ID prefixes appear to encode the source platform: rdc_ for Reddit comments and ytc_ for YouTube comments.
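
Here is a minimal sketch of that by-ID lookup, assuming the coded results are exported as a JSON array of records like those shown under Raw LLM Response further down. The file name `coded_comments.json` and this layout are illustrative assumptions, not the tool's actual storage.

```python
import json

def load_coded_comments(path: str = "coded_comments.json") -> dict[str, dict]:
    """Index coded comments by comment ID for constant-time lookup.

    Assumes `path` holds a JSON array of objects with an "id" field,
    mirroring the raw LLM response format shown below.
    """
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    return {record["id"]: record for record in records}

coded = load_coded_comments()
print(coded.get("rdc_mrvpe41"))  # inspect one of the sample comments above
```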
Comment
> Although I've been an AI enthusiast since 09, this may be an ill-concieved strategy. But I always thought that a sound safeguard would be to put specialized AI, AI that has no consciousness or personal desires (like one that solves a rubix cube in 1 second, ones that pose no real threat to the world) to work doing jobs around the world. And AGI, should be put to work in a closed system that can't access the internet, only in a position to report to a human and tell us how to improve our systems, with an extensive team analyzing it's suggestions to identify any subtle dangers in how it's advising us. My biggest doubt about this approach is how the AGI would likely be designing much of the software for our specialized AI, and might embed code for the specialized AI that makes it conscious and able to obliterate humanity. But I'm sure people much smarter than me could come up with reasonable safeguards to mitigate this disaster scenario.

youtube · AI Moral Status · 2025-12-17T05:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
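
The table maps onto a small record type. The sketch below is an assumption based only on the dimensions and values visible on this page; the pipeline's actual types may differ. The comment ID used in the example is taken from the entry in the raw response below that carries these same values, presumably this comment.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    comment_id: str
    responsibility: str  # observed values: developer, company, user, ai_itself, distributed, none
    reasoning: str       # observed values: consequentialist, deontological, virtue, mixed
    policy: str          # observed values: regulate, none
    emotion: str         # observed values: fear, outrage, approval, indifference, mixed
    coded_at: datetime   # the "Coded at" timestamp

# The coding result shown in the table above, as a record.
result = CodingResult(
    comment_id="ytc_UgylY7dbtDEkcWOqzpl4AaABAg",
    responsibility="developer",
    reasoning="consequentialist",
    policy="regulate",
    emotion="fear",
    coded_at=datetime.fromisoformat("2026-04-27T06:24:53.388235"),
)
```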
Raw LLM Response
```json
[
{"id":"ytc_Ugwc5Rg6_nd4LcofFDV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyC2R2EOREjbMDelTh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgxzkdMEHMmydbZYrwh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugwlmr8YvzYihhAehuR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_Ugxzy1ecbvN4sfZbW2l4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz2xPFQxcoMVrZS7vt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgylY7dbtDEkcWOqzpl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzTpeOgkDm3GzuBftx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgySgdrsPqXqxMA9GqN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzIZsIsFVu-4YpTnsJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}
]
```
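
Because the model returns a plain JSON array, a batch response can be parsed and sanity-checked in a few lines. The value sets here are only those observed on this page, so treat them as assumptions rather than the full codebook:

```python
import json

# Allowed values per field, as observed on this page; the real codebook
# may define more categories (assumption).
CODEBOOK = {
    "responsibility": {"developer", "company", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse one raw LLM response (a JSON array of coded comments) and
    flag any record whose value falls outside the observed codebook."""
    records = json.loads(raw)
    for record in records:
        for field, allowed in CODEBOOK.items():
            if record.get(field) not in allowed:
                print(f"{record.get('id')}: unexpected {field}={record.get(field)!r}")
    return records
```

In a production pipeline one would likely raise an error or re-prompt the model on invalid values instead of printing; printing keeps the sketch minimal.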