Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I agree with something: reality is something we need to preserve. I think we are very far from creating an alternative reality (e.g. virtual reality). Those who work in the real world, in services that involve contact and communication with human beings (a simple hairdresser, say), are the ones who will be kept.

Yes, I know this AI thing will improve, but as of today, and I am a heavy user of Gemini, we are not that close. I can certainly save time writing an email the right way, and AI is certainly a nice helper for that task, but I do not get right answers (in the case of Gemini), not even about Google products. Does AI read manuals? I have serious doubts about it, and I am really skeptical that we can reach the point where we can trust the answers of an AI. So not every input-output system is going to be handled entirely by AI so easily. That is my vision and also my idea of what AI should be: a helper.

You know, something apparently so simple for an LLM system, like teaching languages, is still far from being effective. We need the teaching part (the patience of a human being), but we also need the tech to work. It is like the beginning of the 20th century: they had planes, humans could fly, but they didn't have thousands of planes crossing the sky. If I want even a handy helper, I need enough power, connectivity, and electricity to interact with the AI in real time, and even that is not so simple to achieve.

It would be a mistake to stop learning for ourselves. If we abandoned books and trusted only in AI, we would be a society that is very easy to control.
youtube AI Jobs 2025-11-07T09:1…
Coding Result
Dimension: Value
Responsibility: unclear
Reasoning: mixed
Policy: unclear
Emotion: resignation
Coded at: 2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgxyKu-8R1oUIUTyWD14AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwppvDMRT4cBc_63U54AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugy-cckfFWQkwyB3nw14AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwXXH6APWUj1DkKEe54AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwURmToERbObg61I-J4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxwWL0OYkeYHot0UaZ4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwB1fJvKaLDehOQYaF4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwqCfsHUx8A4GlHEYN4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyHujA6QRvNXP-yNot4AaABAg", "responsibility": "distributed", "reasoning": "contractualist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgwbJuvwL8fT-0pHeBt4AaABAg", "responsibility": "unclear", "reasoning": "mixed", "policy": "unclear", "emotion": "resignation"}
]
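A raw batch response like the one above has to be parsed and checked before the codes are trusted. The sketch below shows one way to do that in Python: parse the JSON array and validate each record against the value sets that actually appear in this section. Note that the `ALLOWED` sets are inferred from the ten records shown here; the real codebook may define additional categories, and the `parse_coding` function name is hypothetical.

```python
import json

# Allowed values per coding dimension, inferred from the responses shown
# above. This is an assumption -- the full codebook may list more categories.
ALLOWED = {
    "responsibility": {"unclear", "company", "government", "ai_itself",
                       "developer", "distributed"},
    "reasoning": {"consequentialist", "deontological", "contractualist",
                  "mixed", "unclear"},
    "policy": {"unclear", "regulate", "ban", "liability"},
    "emotion": {"fear", "outrage", "indifference", "mixed", "approval",
                "resignation"},
}

def parse_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and validate every coded comment."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: unexpected {dim} value {rec.get(dim)!r}"
                )
    return records

# Example with a single record (hypothetical id):
raw = ('[{"id":"ytc_example","responsibility":"unclear",'
       '"reasoning":"mixed","policy":"unclear","emotion":"resignation"}]')
coded = parse_coding(raw)
print(coded[0]["emotion"])  # resignation
```

Failing loudly on an out-of-vocabulary value is deliberate: it surfaces the malformed or hallucinated codes that LLM coders occasionally emit, rather than silently storing them.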