Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "This future sounds extremely stupid, 99% unemployment all jobs automated who is …" (ytc_Ugz3FWfj7…)
- "@curiouskelpie2822 I get the reference. I was just expressing my stance on AI in…" (ytr_Ugx6IMuCj…)
- "We already tried that back in the 1600s. It went really well, but then the guy w…" (rdc_d7kr08e)
- "AI is still in its alpha phase. Stupidity needing assistance from a 👶. Such a gr…" (ytc_UgwPrlDfd…)
- "Ai can't only simulate the end result of an intent and do Its best to copy and r…" (ytc_UgzdXfeQF…)
- "@Youwishucouldit's not just data, the whole foundation broken to begin with. Th…" (ytr_Ugwt4_x_F…)
- "firstly: Being one with the Father doesn’t equal divinity, Jesus says the discip…" (ytr_UgzwpNnFl…)
- "Looking at the development of AI in the past 10 years, I don't believe in losing…" (ytc_UgznPkK1t…)
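The random-sample panel is a spot-check mechanism: draw a few coded comments at random and eyeball the model's labels. Below is a minimal sketch of that sampling step, assuming the coded comments live in a JSON file as a list of records with `id` and `text` fields; the file name and record shape are assumptions for illustration, not the tool's actual storage.

```python
import json
import random

# Load coded comments; the path and record shape are assumed
# (a list of {"id": ..., "text": ...} objects), not the tool's real layout.
with open("coded_comments.json") as f:
    coded = json.load(f)

# Draw up to eight random records for manual inspection,
# mirroring the "Random samples" panel above.
for record in random.sample(coded, k=min(8, len(coded))):
    print(f'{record["id"]}: {record["text"][:80]}…')
```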
Comment
Wow! Elon Musk has seen the 1984-2019 Terminator movies! What a genius! Thank God he is warning us of the potential threat to humanity that AI presents.
Back in 1942, before the term was even coined, the science fiction writer Isaac Asimov wrote The Three Laws of Robotics: A moral code to keep our machines in check. And the three laws of robotics are: a robot may not injure a human being, or through inaction allow a human being to come to harm.
Platform: youtube
Topic: AI Governance
Timestamp: 2023-04-22T13:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
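Each coding assigns a value to four fixed dimensions. A minimal validation sketch follows; note the allowed-value sets contain only the labels visible on this page, and the full codebook may define more categories.

```python
# Allowed values per dimension. These sets hold only the labels
# observed in this page's output; the real codebook may be larger.
CODEBOOK = {
    "responsibility": {"developer", "company", "none", "ai_itself", "distributed"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "unclear"},
    "emotion": {"mixed", "outrage", "indifference", "resignation", "fear", "approval"},
}

def invalid_dimensions(coding: dict) -> list[str]:
    """Return the dimensions whose value falls outside the codebook."""
    return [dim for dim, allowed in CODEBOOK.items()
            if coding.get(dim) not in allowed]
```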
Raw LLM Response
```json
[
{"id":"ytc_UgzqXrro1BuHgZclS5V4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugyvc9qyXvpo4uuQoSd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugxm90Cz9ic2BuQANbp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy1bTssf0RY0H1PJEd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxcsfOIvdygXEZXlxd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyJrlVBFipR9PQPldV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugx6J6rNXfpU8X0dJpB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwPAxazlT4uef736iF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwPhI6BLMLvjUyXZOF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyH_6XkcCtqaALxZgt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
```
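The raw response is a JSON array with one object per comment, which is what the "Look up by comment ID" control relies on: parse the array and index it by `id`. A minimal lookup sketch, assuming the response text is stored exactly as shown above:

```python
import json

def lookup_coding(raw_response: str, comment_id: str) -> dict | None:
    """Parse a raw LLM batch response and return one comment's coding."""
    try:
        records = json.loads(raw_response)
    except json.JSONDecodeError:
        # Raw model output is not guaranteed to be valid JSON.
        return None
    return next((rec for rec in records if rec.get("id") == comment_id), None)

# Example against the array above:
# lookup_coding(raw, "ytc_UgzqXrro1BuHgZclS5V4AaABAg")
# -> {"id": "...", "responsibility": "company", "reasoning": "deontological", ...}
```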