Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_Ugyv7MtSy…: "It's interesting how A.I. learns ...just like we do. I hope it makes good decisi…"
- ytc_Ugy_0Yiq6…: "ChatGPT is by far the best therapist. Its there for me 24/7. Its free. It helps …"
- ytc_Ugzy9lDnD…: "What f****😐🤯 not the jobs human / This is war human vs robot 🤖 f**** / Would how h…"
- ytc_UgzYYviCS…: "Wow....I did not go to a university for Python. But I at least knew half of thes…"
- ytc_UgxHMLXRr…: "I think a lot of people don't understand his line of argument. The AI isnt going…"
- ytc_Ugzkr9L9H…: "I love how bad the AI lawyer is. It comes across as incredibly smug, unlikable, …"
- ytc_UgxT0NPKz…: "Well I for one, welcome our new imperial AI overlord, it surely can not make a w…"
- ytc_UgxZX-ZrM…: "I remembered a movie called our man flint and Dick Tracy both man was talking in…"
Comment
There is a fundamental problem with AI engineers and AI scientists characterizing AI models with human behavior (intentionality for lying, deception, sycophancy, inducing psychosis, motivating suicide and murder, and so on). This language is misleading the public and contrary to AI safety and ethics. If Bengio and others are agreed that no AI model is currently conscious, sentient, and conscientious, then none of these descriptors of AI behavior is accurate. The problem is with AI scientists not understanding what these models are doing consequent to the interactions of algorithms and the deeper layers of neural networks operative in these models. We are at risk because of the priority given to commercial success over AI safety.
youtube
AI Responsibility
2026-01-23T02:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugx2MWVJgiLmu3TbsIF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwWMSlcdWpK8H8ndp54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy6P3FMsCkkkNQXBuF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz7e5eK_nUj4xVDrBJ4AaABAg","responsibility":"government","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy0xXqLTVuv0R-fxiR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyeICJabngzF8RCF214AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzc6Hv9a4yY6pzafJd4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwgxTuxr5GshzUOltp4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyZqbe2lHcEq8QElIZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy0425joqmE211nPSZ4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"}
]
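The raw response above is a JSON array keyed by comment ID, with one coding per comment across the four dimensions shown in the result table. A minimal sketch of how such a batch response could be parsed and indexed for the "look up by comment ID" view follows; the allowed-value sets are assumptions inferred from the codings visible on this page, not a confirmed schema.

```python
import json

# Excerpt of a raw batch response in the format shown above.
raw_response = """
[
 {"id":"ytc_Ugx2MWVJgiLmu3TbsIF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgyeICJabngzF8RCF214AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
"""

# Assumed vocabulary per dimension, inferred from the values on this page.
ALLOWED = {
    "responsibility": {"developer", "company", "government", "user",
                       "ai_itself", "distributed"},
    "reasoning": {"deontological", "consequentialist", "contractualist",
                  "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "approval", "mixed"},
}

def index_codings(raw: str) -> dict:
    """Parse the LLM's JSON array into an id -> coding lookup,
    dropping rows with a missing id or out-of-vocabulary value."""
    by_id = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            continue
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            by_id[cid] = {dim: row[dim] for dim in ALLOWED}
    return by_id

codings = index_codings(raw_response)
print(codings["ytc_UgyeICJabngzF8RCF214AaABAg"]["responsibility"])  # developer
```

Validating against a fixed vocabulary before indexing is a simple guard against the model emitting labels outside the codebook; invalid rows can instead be queued for re-coding rather than silently dropped.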