Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "I asked ChatGPT the same question and the answer was : 9.11 is bigger than 9.9.…" (ytc_UgxJpTNM3…)
- "The AI CEO will eventually disavow the inefficient and hoarding oligarchs, who a…" (ytc_UgymUO9SD…)
- "I think they should allow copyrighting AI art. Except it should go to the creato…" (ytc_Ugwmu-16F…)
- "Considering no physical pain, that is a bonus, does a sentient being ..as a senti…" (ytc_UgyVL3HTW…)
- "I mean idrc, ik that can offend people who do ofc, but if you want to get images…" (ytc_UgyBJlnwP…)
- "I think this is totally up to Europe how they want to regulate AI. Alot of artif…" (ytc_UgwHq1_1i…)
- "Climate Change in the least of our problems here. I lost Hao when she went on ab…" (ytc_UgyQZcZrv…)
- "Humans always fight against the unknown. The day will come when small groups sta…" (ytc_UgyClAg3i…)
Comment
A car on a busy road has nothing to do with an airliner flying in cruise at 35,000 ft.
Any automated driving system will obviously greatly reduce the "driver's" focus and attention to what is happening at any time. So the driver is put in a position where he will be mostly unable to react properly to any autopilot error, even more so if these errors are rare.
Putting in an autopilot and asking the driver to react in case it malfunctions is totally hypocritical and incoherent from a safety standpoint. This should be banned.
This is as stupid and stubborn as OceanGate's Titan submarine design.
No surprise Elon Musk has a mindset very similar to Mr. Stockton, who was the OceanGate CEO: libertarian, dismissive of any regulatory body or safety regulations, authoritarian, stubborn, intolerant of internal critics…
Such a system might instead be used to monitor the driver's driving and alert him to dangerous behavior such as lack of vigilance, lane crossing, or entering a one-way road in the wrong direction. It might handle emergency situations in case the driver becomes unresponsive or engages in extremely dangerous maneuvers.
youtube · AI Harm Incident · 2025-01-01T11:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgwFJ3r-mSz8Ki5EjE14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyP3UC7ZIe6OlWhkKh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzuHMuke_HsCydo7z14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugxhjb7hY40MJRZfsAx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzcMDePjjJlmhur5Yx4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgznYUo6T9_ZFN9Gg7F4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugx6Kr_QY32ZjXnf2SJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxC4VrzZAmnwYqeLvx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwYNvqCIMcsIJJWdhZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyWb6ulwrgqkWzlzpd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
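The "Look up by comment ID" view above implies that the raw JSON response is parsed and indexed by comment ID, with each row carrying the four coded dimensions shown in the result table. A minimal sketch of that step, assuming the label sets observed in this sample are (part of) the codebook — the real codebook may define additional values:

```python
import json

# Allowed labels per dimension, inferred from the sample rows above.
# This is an assumption: the actual codebook may include more values.
ALLOWED = {
    "responsibility": {"user", "developer", "company", "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability", "ban", "unclear"},
    "emotion": {"indifference", "mixed", "fear", "approval", "outrage"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of rows) and index it
    by comment ID, dropping rows whose labels fall outside the allowed sets."""
    coded = {}
    for row in json.loads(raw):
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            coded[row["id"]] = {dim: row[dim] for dim in ALLOWED}
    return coded

# Hypothetical row in the same shape as the raw response above.
raw = ('[{"id":"ytc_example","responsibility":"distributed",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
coded = parse_coding_response(raw)
print(coded["ytc_example"]["emotion"])  # fear
```

Indexing by ID is what makes both the per-comment lookup and the "Coding Result" card cheap: each render is a single dictionary access rather than a scan of the raw response.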