Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
AI as its sold to us cannot work in a world where humans are required to work fo…
ytc_UgzRXWssP…
Ok it seems that my Chatgpt is way smarter than yours created at OpenAi.But it's…
ytc_UgxW0UlYH…
@andrewgrantcomedy
I intended to refer to the companies developing AI, not the…
ytr_UgynrwkZi…
AI will reduce the doctors needs by almost less then 50% to the current strength…
ytc_UgwBf_2wi…
He’s saying how will ai know it’s smarter than us when we trained it
It’s a goo…
ytc_UgwnVUsD5…
I do hvac. Not because I want a future for myself but because I want job securit…
ytc_UgxkWeuBg…
How convenient to have someone to sue, as if the chat bot or its creators are re…
ytc_UgyDtJwKS…
But chats are not used as training data right away. Otherwise an organized group…
ytc_UgzIJmFkC…
Comment
Any driver's assist should in every way elevate the safety margins of driving. Tesla loves to compare its features to the "average driver", but I don't feel it should be allowed on the roads, beta tested on unwilling participants, until it's better than ANY driver. If you want to call it autopilot, even moreso Full Self Driving, it shouldn't be allowed to operate on public roads until it's better in every situation than any human possibly could be. Until then, it's not a safety feature at all, it's not guaranteed to elevate safety margins, as it removes a driver which may more safely operate the vehicle from the helm. It's simply a convenient feature to illegally allow texting and other distractions.
youtube
AI Harm Incident
2022-09-03T15:1…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgzzWedFph1jh4Uq1cp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwKz-C8Z3WmXfdWFOJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugwxc7zxL7oFAV0hby94AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyJRGqpyfY6h4GEIxt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzgjJVEUeG5Av-SqcB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzrMD2TI58f5ZcRYhB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxtbgeLbcIfMLmcn6l4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxOUWofE_I5FaBvcn54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwmfA7dV-KyXtmnDB94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyGH1zMCDN2pGTiid94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
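A minimal sketch of how a raw response like the one above could be parsed and indexed by comment ID, so a lookup such as the one on this page becomes a dictionary access. The allowed values per dimension are inferred from the responses shown here; the actual codebook may include more, and `validate_batch` is a hypothetical helper, not part of any particular tool.

```python
import json

# Allowed values per coding dimension, inferred from the responses
# above (assumption -- the full codebook may define additional values).
SCHEMA = {
    "responsibility": {"none", "company", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"indifference", "outrage", "resignation", "approval"},
}

def validate_batch(raw: str) -> dict:
    """Parse a raw LLM response and index valid codings by comment ID."""
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            continue  # skip malformed rows with no comment ID
        bad = [dim for dim, allowed in SCHEMA.items()
               if row.get(dim) not in allowed]
        if bad:
            raise ValueError(f"{cid}: invalid value(s) for {bad}")
        coded[cid] = {dim: row[dim] for dim in SCHEMA}
    return coded

raw = ('[{"id":"ytc_X","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate",'
       '"emotion":"outrage"}]')
print(validate_batch(raw)["ytc_X"]["policy"])  # -> regulate
```

Rejecting out-of-schema values at parse time catches the common failure mode where the model invents a label that was never in the prompt, instead of silently storing it.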