Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I guess the core issue here is people failure in understanding why AI is being built and yes as Palki says it, it's about the tech companies and world leaders trying to soften the blow. I guess lessons here are in history, If we go back in time and look at how the personal computer started out. Steve Jobs envisioned the computer to be the bicycle for the mind and went out and launched the Apple 2 in classrooms in schools and although AI could do much more, that is exactly where AI should have started also. In Education. AI should master educating humans at becoming better than what they are (ie., from being the bicycle of the mind to bring the rocket ship of the mind). Research and implementation in making humans learn better should be AIs core focus and everything else should be a consequence of that. That way, humans would evolve from being mere clerks to being clerk operators or clerk managers where in this situation they work with AI for their clerical work. This is what happened at the start of the information age. Jobs evolved and made human beings better. People around the world should now be taught how to work with AI or the AI they have today to do their tasks and be educated by and through AI to become more than what they are and that is easily done by just conversing with AI and the role and responsibilities of tech giants and world leaders is a big one, to make sure AI doesn't turn on us in such a way but rather work with us. Hypothetically, call it iRobot if you want to but establish something like the 3 laws in iRobot or a given core criteria not to harm or be a problem for the human race. Tech giants are on the brink of creating a new organism (Artificial and Imperfect and evolving) but an organism nonetheless and they as a community with the rest of the world have to be responsible, the same we are about our children that they grow to be good citizens of planet earth.
Source: YouTube · AI Jobs · 2025-05-31T03:5… · ♥ 1
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_UgxOgHfrby3nz5yEiWh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugw_MXVbGJA1cAc5jqR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgyTMn0KF7SldxUPCvZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugz0krBh4aWmRIPJnsB4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgzR6HG42lGl3Bzx4WZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
 {"id":"ytc_Ugy22OlrBfWy-xJkPwt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgzgihqWKvf96mC0qD94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugx0lsUvuBhkkWOibnB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
 {"id":"ytc_UgyMsIc44p2E9jPiiNB4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
 {"id":"ytc_UgyPevhLCmLY4WnkrbN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"})
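Note that the raw response is not valid JSON: the array opens with `[` but closes with `)`. That defect is one plausible reason every coded dimension above fell back to "unclear". Below is a minimal Python sketch of a parser that tolerates this specific malformation; the function name, the repair strategy, and the abridged one-object example string are illustrative assumptions, not the pipeline's actual code.

```python
import json


def parse_coding_response(raw: str) -> list[dict]:
    """Parse the model's JSON array of per-comment codes.

    A strict json.loads fails on output that closes the array with
    ')' instead of ']', so we repair that one known defect before
    retrying. Any other malformation is re-raised unchanged.
    """
    raw = raw.strip()
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # Known failure mode: array terminated with ')' rather than ']'.
        if raw.startswith("[") and raw.endswith(")"):
            return json.loads(raw[:-1] + "]")
        raise


# Abridged example exhibiting the same defect as the raw response above.
raw = (
    '[{"id":"ytc_UgxOgHfrby3nz5yEiWh4AaABAg","responsibility":"company",'
    '"reasoning":"consequentialist","policy":"none","emotion":"indifference"})'
)
codes = parse_coding_response(raw)
```

A pipeline without such a fallback would typically discard the whole batch on the `JSONDecodeError` and record every dimension as "unclear", which matches the coding result shown above.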