Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
I have a question. If AI replace all humans in jobs, and humans are unemployed (…
ytc_UgzGKF_km…
The most scary part of that is the AI robot would find the need to be dishonest …
ytc_UgzaZBbaX…
As my friend said AI is fun for having fun in goofy moments but not good for out…
ytc_Ugyi2y6ls…
Scary AL cild lie just like Dems, Fed Gov't & MSM. AI cld also rig our elections…
ytc_UgywWb9b3…
You are all very brave for watching and listening. We are the few- who can thi…
ytc_UgzhElowQ…
I get the idea of people saying ai csn be theft
But since my early childhood I …
ytc_Ugy6ulrM1…
How can I make a virtual ai gf 😏 with chatgpt I wanna play songs calculation 🧮 a…
ytc_UgwiCC6h2…
You don’t have to be an offender, just a witness to be in the algorithm? Pre-tex…
ytc_UgzYj06kQ…
Comment
This has never been and never will be a matter of math or any other science. A human must always choose what is worse and what is better. That is a judgment call based on morals and ethics, and morals and ethics are always changing. 'AI' in this context is a misnomer. The 'AI' or the algorithm is just a program doing exactly what a human programmed it to do, and that human made judgment calls based on personal understanding of morals and ethics when writing that program.
The real question is, do we want only one company making sentencing judgments for all of society, or do we want people from the communities affected by these decisions to be making them?
youtube
2022-07-27T17:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugyu3lJ5jSotu-gpeM14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwEeWXibr3X0c3MlLp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugy7MfwCiwLr1xFxzs94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugxfum2yi79CYBmvcPt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwsT0OPQBFlk8UndN54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxFJXsutbsCY-dF59J4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugz9ZLn1oRLlCSv8t2R4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwY-6m4IEF5K4A9EHV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzK4qdxUuBHi1o0nKJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgwEBWGO2Y_kiryJ8Xx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"}
]
```
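The lookup-by-comment-ID view can be sketched as a small helper over the raw response shown above. This is a minimal sketch, assuming each raw LLM response is a JSON array of coding records keyed by `id`; the function name `lookup_coding` and the inline sample record are illustrative, not part of the actual tool.

```python
import json

# Assumed structure: the raw LLM response is a JSON array of records,
# one per comment, each carrying the four coding dimensions.
raw_response = """[
  {"id": "ytc_UgwY-6m4IEF5K4A9EHV4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]"""

def lookup_coding(raw: str, comment_id: str):
    """Return the coding record for a comment ID, or None if absent."""
    records = json.loads(raw)
    return next((r for r in records if r["id"] == comment_id), None)

record = lookup_coding(raw_response, "ytc_UgwY-6m4IEF5K4A9EHV4AaABAg")
print(record["policy"])  # -> regulate
```

A linear scan is enough at this scale; if many lookups were needed per response, the records could instead be loaded once into a dict mapping `id` to record.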