Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Btw if you want to damage AI, you can feed it fake feedback. If you're using deepL or any other translator, you can pick alternatives. Always click the WORST possible alternative, then "like" the translation. If you see a good translation, give it a thumbs down for incorrect translation. It's cool to have a tool you can use when you just want to translate some news article to get the hang of it, or you want to translate what your foreigner friend said to you, but if we let it become too good, real people will lose their jobs. In chatGPT it couldn't become easier. It asks for your feedback ALL the time. They want you to do the job for them, verifying their AI for free? Fine, train their AI to be the most useless piece of crap imaginable. You can also correct it, feeding it fake information. Nobody has the time to log in and deliberately waste time doing that, but when you're already using an AI translator or chatGPT, make sure to have that in mind. You have way more power than you think. You know the weird captcha questions? "Select all images with traffic lights"? They use it to train autonomous vehicles. Not only they waste your time, but they get rich asking you to do their job. AI may learn faster than us, but it has one flaw: it has to learn from someone. We are "someones" and we can mess this crap up really good. Pass it on, tell everyone who hates AI, together we can make sure these billionaire companies lose as much money as possible. They save money on professionals, firing people every single day, they deserve to go bankrupt.
youtube AI Governance 2026-04-07T17:4…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_Ugw6NRFr44WEKesJfq54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxG-tlIEiDdnXBEHOx4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgzcBuDfvADFRUAZDyh4AaABAg","responsibility":"company","reasoning":"virtue","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwcW084eUVFsIDyIsV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxUj3FWykCa35pFDY54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx4oQ7D07gt1CXaPwx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy_KQcndkDCVNucoUZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyJm1VdpDyQsro1-UR4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz-fs01-LsYjutdEJ94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugwj2tDLgXFOWOj7b-14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}
]
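The raw response codes a batch of comments, each on four dimensions (responsibility, reasoning, policy, emotion), while the page above displays the coding for a single comment. A minimal sketch of how that per-comment lookup can work, assuming the response is valid JSON and that the displayed comment corresponds to the entry whose values match the table (the `ytc_UgxUj3FWykCa35pFDY54AaABAg` mapping is an assumption, not stated in the source):

```python
import json

# Two entries from the raw LLM response above, shown for illustration;
# the full response is an array of ten such objects.
raw = '''[
  {"id": "ytc_UgxUj3FWykCa35pFDY54AaABAg",
   "responsibility": "user", "reasoning": "consequentialist",
   "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw6NRFr44WEKesJfq54AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "fear"}
]'''

# Index the batch by comment id so each comment's coding can be
# fetched in O(1) when rendering its page.
codings = {entry["id"]: entry for entry in json.loads(raw)}

# Assumed id of the comment displayed on this page: its values match
# the Dimension/Value table (user / consequentialist / none / approval).
coded = codings["ytc_UgxUj3FWykCa35pFDY54AaABAg"]
print(coded["responsibility"], coded["emotion"])  # user approval
```

Keying the batch by the stable `ytc_…` comment id, rather than by position, keeps the lookup correct even if the model returns entries in a different order than the comments were sent.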