Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect)
there is a reason why it says "Paid Actor" on tv ads. We need this for AI desper…
ytc_UgyhPUna6…
Who ever present in front of the court next time relating to deep fakes, they ne…
ytc_UgzU80hv0…
A ywar ago i can confidently tell if a context is AI generated now most of image…
ytc_UgyEF1vDZ…
"tell me a lie thats more subtle"
ChatGPT: "everyone likes you"
my ChatGPT: your…
ytc_UgwxUx8dy…
AI is.a tool with limited abilities at this point in time. No a doctor cannot r…
ytc_UgwbuRcsq…
>freezing construction approvals for data centers requiring more than 20 mega…
rdc_oi13cmu
I always think about this when I see people writing unbecoming, or underestimati…
ytr_UgwWJEloq…
We need Christians letting the people know the most trustworthy Bible to read. T…
ytc_UgwZLFJ9Q…
Comment
I think one of the reasons that people are hesitant to trust self driving cars is that most people believe they are good enough drivers to avoid getting into a fatal crash. If that is true, then your risk of crashing could theoretically be lower than the risk of a crash inducing software malfunction. One interesting statistic I would like to see is how many of those yearly fatalities were the fault of the person that died. For example, if I am T-boned by a bus that ran a red light, I may die from that crash but it was not my fault. In an autonomous car, I would be susceptible to that same accident. However, if I were driving along and I took my eyes off the road and ran into a ditch and died, then the self driving would prevent that (barring any software bugs). In the case of fatalities, are most of those victims the victim of their own mistake? If so, then self driving would likely prevent those deaths. If most are not at fault for their fatal crash, it may not make much difference.
youtube
2023-07-28T21:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxB5rxAiOTb56L2EDd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw3MjWKI1QW2f0hCWx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgykL4Q_a42Hfzsb-Gp4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw5YTpGh1ca74En4gB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyx_23ILDQcjn2Pqdh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugx-SfffaK5C9AgrtGl4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwCRS5qQpTknuQJ50x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz7nUD0wWXDLv1Tu594AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwEl87TFSWSuNuVBRx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxFPGPqkLHrC3uPYlt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
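The lookup-by-ID view can be backed directly by this array: each element carries the comment ID plus the four coded dimensions. A minimal Python sketch of that lookup, using two IDs copied from the sample response above (the variable names are illustrative, not from the tool itself):

```python
import json

# A subset of the raw LLM response shown above: a JSON array of
# per-comment codes, one object per coded comment.
raw_response = '''[
  {"id": "ytc_UgxB5rxAiOTb56L2EDd4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugw3MjWKI1QW2f0hCWx4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]'''

# Index the array by comment ID for O(1) lookup.
codes = {row["id"]: row for row in json.loads(raw_response)}

# Look up the coded dimensions for one comment by its ID.
print(codes["ytc_UgxB5rxAiOTb56L2EDd4AaABAg"]["emotion"])  # fear
print(codes["ytc_Ugw3MjWKI1QW2f0hCWx4AaABAg"]["policy"])   # liability
```

Because the model returns one array covering a whole batch, indexing by `id` is what lets the interface join each code back to its source comment.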