Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "Surely it's a robot created to mistreat retirees when they demonstr…" (translated from Spanish) [ytr_UgzQxNHV_…]
- "I mean we do need some sort of copyright protection for ai art. For example if a…" [ytc_UgwCe4uPN…]
- "Gemeni 2.5 Pro still beats Grok 4 at coding tasks. I'm using it right now 🤔…" [ytc_UgxbIhymU…]
- "If, as the guest says he believes, we are living in a simulation, then why is it…" [ytc_Ugx0rhjxb…]
- "It's a quite biased topic. Without severe surveillance, it's hard to maintain so…" [ytc_Ugxgz3SWf…]
- "the amount of infrastructure that would have to be connected, and automated thro…" [ytc_UgxHoINKA…]
- "I strongly suggest everyone read the book "If anyone builds it everyone dies". I…" [rdc_o79baoy]
- "Most of people will listen, agree and then proceed to use ChatGTP, Midjourney, A…" [ytc_UgwwApnw7…]
Comment
Chatgpt is actually correct here. You wouldn’t want your doctors to practice cutting people open with butter knives being held by chop sticks.
It can be unethical because it has the potential to teach you how to write code that can be more easily exploitable, or allows you to practice writing code with serious security concerns. Saying “write shitty Python code” falls under that umbrella because poorly written code often has many vulnerabilities. Computer science and writing code is an art. One wouldn’t practice how to perform surgery incorrectly due to many health concerns. The same applies to writing systems or simple scripts.
In the US, Computer Scientists take ethic courses that pertain to writing ethical code or making ethical decisions in the computing world. When you write code for someone, that someone trusts you know what you’re doing for security concerns amongst others. Learning how to write clean code has many benefits but practicing how to write bad code is just bad lol.
| Field | Value |
|---|---|
| Source | reddit |
| Topic | AI Responsibility |
| Posted (Unix timestamp) | 1690507095.0 |
| Likes | ♥ 4 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_jtzccwi","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"rdc_jtqttjm","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"rdc_jtrtwz6","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_jttfgbf","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"},
  {"id":"rdc_jtqy2q6","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
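The raw response is a JSON array with one object per coded comment, keyed by `id`. A minimal sketch of how such a batch can be parsed and a single comment's codes looked up by ID (the field names mirror the JSON above; the function name and variable names are illustrative, not part of the tool):

```python
import json

# A truncated copy of the raw LLM batch response shown above.
raw_response = '''[
  {"id":"rdc_jtzccwi","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"rdc_jtqttjm","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"}
]'''

def code_for(comment_id, raw):
    """Return the coding dict for comment_id, or None if the ID is absent."""
    for row in json.loads(raw):
        if row["id"] == comment_id:
            return row
    return None

codes = code_for("rdc_jtqttjm", raw_response)
print(codes["responsibility"], codes["policy"])  # ai_itself liability
```

The lookup for `rdc_jtqttjm` returns exactly the values shown in the "Coding Result" table above (responsibility `ai_itself`, policy `liability`).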