Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- Exactly. In house onshore devs with AI are orders of magnitude more valuable tha… (rdc_mojrhz5)
- People should use fake AI images with an altered voice on their social media acc… (ytc_UgzxJtuMG…)
- The fact that the prompters tries to punish her is clear indication that they're… (ytc_UgwgBydda…)
- tumblrtoddlers out here talking about "suing" when the person in question is an … (ytc_UgyJ9Bb71…)
- We are already doomed when numbers like 99% in 5 years are being used by “expert… (ytc_UgxuYoZJ0…)
- And you’re criteria/methods are highly flawed and biased. Your AI test model is… (ytc_UgxOr4uLT…)
- Well.... it looks like my right foot is going to be amputated in about a month… (ytc_UgyUZIVWQ…)
- Im a delivery driver. I make 150 k a year and no way my job can be automated by … (ytc_UggAr353L…)
Comment
Didn't Isaac Asimov solve most of these problems in 1950 with the Three Laws of Robotics?
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
youtube · AI Governance · 2024-02-01T07:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxkMpXwzOgJu0sdf2p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxSaXpuXYcDvcypYpV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxoH9ulG_duymgDFP54AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwLZJn06iPrZWB0T9p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw57Ow55EplBC_kkX14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxegNf6KdPZOMhwbjR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz7uQJH78vv70ptwUJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyhjjCTQ6ibb3ckYMh4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgzPFRRUaMj6BMol8G14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyzzTBGOiOKa-3QnsJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
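Because the model returns one JSON array per batch, looking up a single comment's coding by ID reduces to parsing the array and indexing it. A minimal sketch of that lookup, using two rows from the response above (the variable names are illustrative, not part of the tool):

```python
import json

# A raw LLM response is a JSON array of coded comments; two rows shown here.
raw_response = '''[
  {"id": "ytc_UgxSaXpuXYcDvcypYpV4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgxkMpXwzOgJu0sdf2p4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]'''

# Index the batch by comment ID so any coding can be fetched directly.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_UgxSaXpuXYcDvcypYpV4AaABAg"]
print(coding["policy"])   # regulate
print(coding["emotion"])  # approval
```

The same dictionary can back the "Look up by comment ID" search: a missing ID surfaces as a `KeyError` (or `codings.get(comment_id)` returning `None`) rather than a silent blank result.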