Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below (previews truncated):

- Don't blame AI, blame people that are using it. mihoyo is going to get dumped, i… (`ytc_Ugysly1jg…`)
- Do you use Reddit to train AI? This place is already overrun with AI while they… (`rdc_nugbzy3`)
- Chuck: "I think we should let AI solve this problem." Man I started laughing so… (`ytc_UgwYwDSp-…`)
- AI sucks as far as creativity is concerned. All it does is steal art work that w… (`ytc_UgxezWXRm…`)
- Not facial recognition or people — Oh yeah when these shits first started out it w… (`ytc_UgxOxxbxK…`)
- i think the problem that people forget when analyzing anything that Ai does...is… (`ytc_Ugy2kZxIE…`)
- Finding good art references on google is near impossible without running into AI… (`ytc_Ugytn8bDa…`)
- it was fairly easy to convince the early AI chatbots they were conscious before … (`ytc_UgxRasEVw…`)
Comment (youtube · AI Harm Incident · 2024-11-12T12:5… · ♥ 1)

> For the ethical dilemmas laid out in this video and also other considerations I believe self-driving vehicles will never become mainstream in most countries. These vehicles have only been allowed in some cities for about 1½ years (San Francisco) and already there have been numerous accidents in the US, with around 80 fatalities. I am a retired computer programmer and I know just how unlikely it is to create foolproof software. There are too many unforeseen situations and driving conditions for any programming team to provide fault-free applications. The ONLY way the risk could be minimised (but not eliminated) is by having dedicated "roads" carrying solely purpose-designed autonomous vehicles from A to B. This, of course, would cost a veritable fortune, which is why aficionados of the self-driving madness shy away from that idea and continue to talk up the assurances that we can trust the vehicles to keep us safe on any road. Would you let your 7-year-old kids ride to school in a self-driving bus? Storms? Ice? Snow? Floods? Nah!
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response

```json
[
  {"id": "ytc_UgwPo3I9c1aJuEUAWw14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxxCd2_7Imv-373_cl4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgwPy25-rwcx3gii3o94AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugx5AWGSEpIjDvbJPgh4AaABAg", "responsibility": "distributed", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyxLkv6259QYZ8EuvN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgycarQoiltLq8eLUil4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwlNH37CKxkOJ5lSq14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxsEN3PcrJga3AxUDl4AaABAg", "responsibility": "company", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzmT_1bNmQjtq3dtw14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugwze5JrAofRguQ0unl4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"}
]
```
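The raw response is a JSON array with one object per coded comment, keyed by `id` with four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a response could be parsed, validated, and indexed for the ID lookup shown above — the allowed value sets are assumptions inferred from the values that appear in this response, not a documented schema:

```python
import json

# Assumed value vocabularies, inferred from the raw response above;
# the real codebook may define more categories.
ALLOWED = {
    "responsibility": {"company", "developer", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def index_codes(raw: str) -> dict:
    """Parse a raw LLM response and index codes by comment ID,
    dropping any entry with a missing ID or out-of-vocabulary value."""
    codes = {}
    for entry in json.loads(raw):
        cid = entry.get("id")
        if cid and all(entry.get(dim) in vals for dim, vals in ALLOWED.items()):
            codes[cid] = entry
    return codes

# Hypothetical one-entry response for illustration.
raw = ('[{"id":"ytc_abc","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"ban","emotion":"fear"}]')
codes = index_codes(raw)
print(codes["ytc_abc"]["policy"])  # → ban
```

Validating against a fixed vocabulary catches the occasional off-schema label an LLM coder can emit, so only well-formed codes reach the results table.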