Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "AI will replace you too Charlie, a robot will stand where you sit and just argue…" (ytc_Ugzp3Xk5w…)
- "Ahhhh, nice time to follow my dream job to be a software engineer, right where a…" (ytc_Ugzjhfl5v…)
- "Governments can’t even be trusted to balance a budget. No way can they get in fr…" (ytc_UgzanOQPB…)
- "You need to understand that if you're good. You can use Ai to make something eve…" (ytr_UgwrIeMCb…)
- "A.J. I watched your live feed and I think I have figured something out that coul…" (ytc_UgzAj3r_b…)
- "I wonder how AI takess account of the fact that at least half the world are stup…" (ytr_UgwP97-PJ…)
- "@ExiaLupus You mean the genuine art that gets cropped and resized to a 500 pixel…" (ytr_UgxPuOLHp…)
- "Makes me think that LLMs might be taking from fiction to generate cures or medic…" (ytc_UgzOH7JkY…)
Comment
if humans respond that way, of course the AI will lean the same way. I'm extremely nervous about AI being used in any kind of casual/not monitoring as if an infant in NICU seems disastrous and careless. Why not have EXTREME SUPERVISION at all times? I'd still be skeptical, as the AI could alter itself to override it's supervisor. Also , we need the jobs they want to change to AI. I feel we are in a rush to mess
youtube · AI Bias · 2022-12-22T00:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgwpfSK74g7QQYwS__B4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugxsi6HN5fm_btMahNV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy_bWzivBdstxmphEJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzttizfCbDCoOEtuCZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgwOUMcWnRQtDnxEnhZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwxzgdAmeeQNzGjath4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwb8y93KqzRbFEe1R54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz5kgmsdBIJRAb84PF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzOnOehOWZgixJ7K5x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxTHhY72GoaWvaq8QN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"unclear"}
]
```
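A raw response like the one above can be parsed and sanity-checked before the codings are stored. This is a minimal sketch: the four dimensions and their allowed values are inferred from the records shown on this page (the full codebook may define more categories), and `validate_codings` is a hypothetical helper, not part of any tool shown here.

```python
import json

# Allowed values per dimension, as observed in this dataset (assumption:
# the real codebook may include additional categories).
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "ban", "unclear"},
    "emotion": {"fear", "outrage", "mixed", "indifference", "unclear"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every record against the schema.

    Raises ValueError on a malformed ID or an out-of-vocabulary label.
    """
    records = json.loads(raw)
    for rec in records:
        # IDs on this page start with "ytc_" (comment) or "ytr_" (reply).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected id format: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim}={rec.get(dim)!r}")
    return records

if __name__ == "__main__":
    raw = ('[{"id":"ytc_example","responsibility":"company",'
           '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
    print(len(validate_codings(raw)))  # → 1
```

Validating at ingest time means a model response that invents a new label (or drops a field) fails loudly instead of silently polluting the coded table.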