Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or pick one of the random samples below to inspect how it was coded; a minimal lookup sketch follows the sample list.
- “People act like one ai prompt kills 40 Indonesian children on the spot” or some… (ytc_UgyaEtOAQ…)
- most robo-AI devs have been pursued and carried out frantically and without a th… (ytc_UgwfYlEE2…)
- Very flashy video and all but if you think that in the USA any party (be democra… (ytc_UgxnZguwo…)
- One thing is really wrong. Efficiency may be off the charts with AI. But even in… (ytc_UgyrgBMLQ…)
- Interesting, women need more emotional empathy for infants. AI means "Averag… (ytc_UgwEOqJGX…)
- 7:39 this is essentially fantasy at this point still. We have the llms and we h… (ytc_UgwKhTYWt…)
- Yep, every time I have to do a long bug hunt for something the AI introduced, I … (ytr_UgwxIW_DI…)
- AI is currently targeting two of the things I care about the most in life - musi… (ytc_UgxtPJtsi…)
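Looking a comment up by ID amounts to a dictionary lookup once the coded results are stored keyed by comment ID. The sketch below is a minimal illustration under that assumption; the file name `coded_comments.json` and the field layout are hypothetical, not the project's actual storage format.

```python
import json

# Hypothetical storage layout (an assumption, not the project's real format):
# a JSON object mapping each comment ID to its four coded dimensions, e.g.
# {"ytc_Ugx...": {"responsibility": "company", "reasoning": "consequentialist",
#                 "policy": "regulate", "emotion": "indifference"}, ...}
def load_coded_comments(path="coded_comments.json"):
    with open(path, encoding="utf-8") as f:
        return json.load(f)

def lookup(comment_id, coded):
    """Return the coded dimensions for one comment ID, or None if it was never coded."""
    return coded.get(comment_id)

if __name__ == "__main__":
    coded = load_coded_comments()
    record = lookup("ytc_UgxXVkeLc73exKwsnlB4AaABAg", coded)
    if record is None:
        print("No coding found for that comment ID.")
    else:
        for dimension, value in record.items():
            print(f"{dimension}: {value}")
```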
Comment

> If Elon Musk was a bean counter, then I'd agree with you, but he's not, he's an engineer...
> And as an engineer, he's done some amazing things...
> Consider the fact that there are over 6 million human-caused car accidents every year, 36,000 of them are fatal.
> Then, I would say that Tesla's record of road safety is amazing, compared to human driver's.
> Add to that, that Tesla clearly states that autonomous driving is still in beta, therefore experimental, and humans are responsible for driving, not the AI..
> So long as humans drive cars, there will be an enormous number of accidents, some of them fatal..
> You got to keep things in perspective; Doing this will tell you that Tesla is saving lives...

youtube · AI Harm Incident · 2022-09-25T10:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
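Each coded record carries the same four dimensions shown in the table (responsibility, reasoning, policy, emotion), so malformed model output can be caught with a small validation pass before it is displayed. The check below is a sketch; the allowed label sets are assumptions inferred only from values visible on this page and are almost certainly incomplete.

```python
# Minimal sanity check for one coded record. The allowed label sets below are
# assumptions inferred from values visible on this page, not the full codebook.
ALLOWED = {
    "responsibility": {"company", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"approval", "outrage", "resignation", "indifference", "fear", "mixed"},
}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems with one coded record (empty if it looks fine)."""
    problems = []
    if "id" not in record:
        problems.append("missing comment id")
    for dimension, allowed in ALLOWED.items():
        value = record.get(dimension)
        if value is None:
            problems.append(f"missing dimension: {dimension}")
        elif value not in allowed:
            problems.append(f"unexpected {dimension} label: {value!r}")
    return problems
```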
Raw LLM Response
```json
[
{"id":"ytc_UgxXVkeLc73exKwsnlB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_Ugyu0FZNCfrbF-KwDKd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzF0u64lGT56NL13LN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw5bVbyOPWJ9RZ6jLB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxZnoHi1HspvpD2gZh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyPndd7uyLJPKqe2ER4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwMWvk4rXhKTaOH7gp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxMHxJ1BUzHabVI-k54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgwqH2p6mCR22Qxf7794AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxJBGD_pPEExd3J8rN4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"ban","emotion":"fear"}
]
```
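The raw response above is a single JSON array covering a whole batch of comments, so it has to be parsed and keyed back by comment ID before any per-comment view can be shown. A minimal parsing sketch follows, assuming the model returns a bare JSON array exactly as shown (no surrounding prose or markdown fences); the function name is illustrative.

```python
import json

def parse_batch_response(raw_text: str) -> dict[str, dict]:
    """Parse one raw LLM batch response (a JSON array of coded records)
    into a mapping from comment ID to its coded dimensions."""
    records = json.loads(raw_text)
    coded = {}
    for record in records:
        comment_id = record.pop("id")
        coded[comment_id] = record
    return coded

# Example usage, with the array shown above stored in `raw`:
# coded = parse_batch_response(raw)
# coded["ytc_UgxXVkeLc73exKwsnlB4AaABAg"]
# -> {"responsibility": "company", "reasoning": "consequentialist",
#     "policy": "regulate", "emotion": "indifference"}
```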