Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a coded comment directly by its comment ID.
Random samples — click to inspect
- `ytc_UgxsgSGUE…`: "An AI basically learns in a similar fashion to humans. Through learned and gathe…"
- `ytc_Ugz6OwFEA…`: "I've had dumb ass conversations like this at a party. I was waiting for ChatGPT …"
- `ytc_UgzkYVS6P…`: "But I got a question? If anybody just making their image like anime using AI jus…"
- `ytc_UgwfBCFkM…`: "Knows not much about AI to position Google (the creator of transformers) a regul…"
- `rdc_n4eaxw7`: "I try to stick to two follow-up prompts MAX. At which point I draw the conclusio…"
- `ytc_UgxKJ_Q1k…`: "Ai wont take creative jobs. Cause you can tell what ai makes is so fake…"
- `ytc_UgxN59OXQ…`: "I just hope that there continues to be competition to keep the tech moving forwa…"
- `ytc_UgyrqB9Db…`: "I don't get why we're even using facial recognition technology when it has so ma…"
Comment

> The fatal flaw in Tesla's autopilot is that a computer is just a software algorithm, so it's totally incapable of empathy and it can't feel pain.. Any miscalculation on it's part has no negative consequences whatsoever.. Most humans drive with care and consideration because we don't want to get hurt, we don't want to harm others...and lastly we don't want to spend years rotting in Jail for vehicular manslaughter...

Source: youtube · Topic: AI Harm Incident · Posted: 2022-09-03T22:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
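The four coded dimensions take categorical values. As a minimal sketch, a record like the one above can be checked against the value sets observed in the responses on this page (a hypothetical helper; the full codebook may allow values not shown here):

```python
# Dimension values observed in this page's coded responses only;
# the actual codebook may include additional categories.
ALLOWED = {
    "responsibility": {"user", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "liability", "industry_self", "regulate", "ban"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference", "mixed"},
}

def invalid_dimensions(record: dict) -> list:
    """Return the dimension names whose coded value is not recognized."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

coded = {"responsibility": "ai_itself", "reasoning": "deontological",
         "policy": "regulate", "emotion": "fear"}
print(invalid_dimensions(coded))  # an empty list means all dimensions passed
```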
Raw LLM Response
```json
[
{"id":"ytc_Ugw7-HYtNcHnzRvs3R94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzo2oNBN_ODlk4no9x4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwQEKxn-LOhiRGAUSV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgwAfeNphLL4V8Nluql4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugwc0uu-jsgVHswe7Wx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwbsSTAmGuk6L50rpZ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugy4SgqPfyE_aaq6KHh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzYA_EBC1DYHTFejOF4AaABAg","responsibility":"distributed","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugycy61gUrhGP2XIY3l4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgzJlwQK-98RxCDFN9d4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
```
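Because the model returns one JSON array per batch, the "look up by comment ID" step can be sketched by parsing the raw response and indexing the records by their `id` field (a minimal sketch; the record shown is taken from the response above):

```python
import json

# One record copied from the raw LLM response above, standing in for
# the full batch array returned by the model.
raw_response = """[
{"id":"ytc_UgzJlwQK-98RxCDFN9d4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]"""

# Index every coded record by its comment ID for O(1) lookup.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

record = codes_by_id["ytc_UgzJlwQK-98RxCDFN9d4AaABAg"]
print(record["policy"])  # regulate
```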