Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples
- `rdc_dftqb96`: It would be cheaper for Google to develop their own Uber app, since they would b…
- `ytc_Ugy20Zi4Z…`: If someone were to destroy an AI data center without hurting a human I would reg…
- `ytr_UgxJgcLcl…`: @heavilymedicatedforursafety Ai doesn't really care, it's stealing a bunch of ar…
- `ytc_UgyW3lVCn…`: "The industry is realizing that AI lacks thr one thing essential for software en…
- `ytc_UgxWCxRdw…`: U still need developers and coders to make sure the ai uses good quality product…
- `ytc_Ugw71YoZ3…`: “ai art makes art accessible for more people” is entirely predicated on the idea…
- `rdc_jmvslm9`: True. Free will is a useful fiction, a central narrative that we constantly tel…
- `ytc_Ugzf3fcrL…`: A.I will not replace their work, my uncle in law is a pathologist developing suc…
Comment

> Elon Musk has been quoted as saying "AI's are MORE of a THREAT to humans than a nuclear bomb." This is not going to end well. AI should be destroyed now!

| Field | Value |
|---|---|
| Source | youtube |
| Topic | AI Moral Status |
| Posted | 2023-01-17T15:2… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugyc_tL-wMz2f4tnld94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy5-TyLUNPTD4m3tNV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugz3FvWppJ9Bt52meK94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzA-RTlSyailueRZ8l4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugw-z2uf7T4ONYZtVmF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzlpP_-7G8Z7EBUM1J4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzcw7AZf9JMDeg32yp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyyQ6uaReDCZPFoz8p4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxDu8KCrl9N5BJeRLd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwABPkyAGWmoGnkPsV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
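A raw LLM response like the one above is a JSON array of per-comment coding records, so looking up a coding by comment ID is a simple parse-and-index step. The sketch below is a minimal illustration, not the tool's actual implementation: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response shown above, while the `raw_response` string is a shortened stand-in for a full response.

```python
import json

# Shortened stand-in for one raw LLM response; real responses contain
# one record per comment in the batch, with the same field names.
raw_response = """
[
  {"id": "ytc_Ugyc_tL-wMz2f4tnld94AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxDu8KCrl9N5BJeRLd4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"}
]
"""

def index_by_comment_id(raw: str) -> dict:
    """Parse one raw LLM response and index its coding records by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codings = index_by_comment_id(raw_response)
coding = codings["ytc_UgxDu8KCrl9N5BJeRLd4AaABAg"]
print(coding["policy"], coding["emotion"])  # ban outrage
```

In practice a parser like this would also want to handle malformed model output (e.g. wrap `json.loads` in a `try`/`except json.JSONDecodeError`), since the raw responses are inspected here precisely because they are not guaranteed to be clean JSON.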