Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Jobs that are generally not legal, ethical, or feasible for AI to perform entirely on its own involve high-stakes decision-making, direct physical care, or tasks requiring specialized legal or human-licensed authority. While AI can assist in many fields, final responsibility must remain with a human in areas requiring empathy, accountability, and legal liability. Based on legal frameworks and industry standards, the following areas are not suitable for full AI automation:

- Practicing Law (Unauthorized Practice): AI can draft, analyze, and research, but it cannot provide legal advice, represent clients in court, or act as a licensed attorney.
- High-Stakes Medical Decisions: AI cannot replace doctors in diagnosing, treating, or performing surgery on patients, as these require, at minimum, human oversight for safety and accountability.
- Psychotherapy and Counseling: AI chatbots providing therapeutic advice are considered to be operating without a license and have been flagged as dangerous.
- Human Resources/Hiring: Using AI to make final hiring or firing decisions is restricted in many jurisdictions due to bias risks and the legal requirement for human oversight.
- Law Enforcement and First Responders: Roles requiring split-second ethical and physical judgment, such as firefighters, police, and paramedics, cannot be replaced by algorithms.
- Company Registered Agents: AI cannot legally act as a registered agent for a company, as this role requires a human or legal entity with specific liability.
- Classroom Education (Primary Instruction): In some areas, legislation prohibits AI from acting as the primary instructor in higher education, requiring human teachers to remain in charge.

Why These Jobs Are Restricted:

- Accountability & Liability: When an AI makes a mistake, there is no legal person to hold accountable.
- Empathy and Human Connection: Fields like healthcare, childcare, and social work require emotional intelligence that AI lacks.
- Nuanced Decision-Making: Unpredictable situations require human judgment rather than algorithmic output.
youtube AI Jobs 2026-02-16T21:3… ♥ 1
Coding Result
Dimension       Value
Responsibility  user
Reasoning       deontological
Policy          regulate
Emotion         approval
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytr_UgxojY08vRZkXd-SExN4AaABAg.ATAVzzxO0cbATCWtx-GW0k","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytr_UgxPMPVUihr625JqSMR4AaABAg.AT9uCpzpDXOATCWgF6pEod","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytr_UgyRnUiQuAQ3VXJnOLt4AaABAg.AT8UJ9xKdTAAT8Y23gv92P","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytr_UgyRnUiQuAQ3VXJnOLt4AaABAg.AT8UJ9xKdTAATItMw_C47P","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytr_Ugxywt8GP-rIZkoPkGl4AaABAg.AT8LQPMlwUrATG1NqkoAwh","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytr_UgwSnuTQk-wdfrAMSM54AaABAg.AT8IeX86C7hAT8aEOd4dIT","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_UgwSnuTQk-wdfrAMSM54AaABAg.AT8IeX86C7hAT8k277MSiF","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytr_Ugx4RSX2RoxEn95ggxF4AaABAg.AT8H_KMldmBAT9jOgS1m3F","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"outrage"},
  {"id":"ytr_UgzlqvdVleahYL6Mmgt4AaABAg.AT8HKUbUAqbAT9jeF3oC6G","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"outrage"},
  {"id":"ytr_UgwLGGyde0nl1ue37jh4AaABAg.AT8FBDsDrngAT96ph944yR","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]
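A raw response like the one above has to be parsed and checked before its codes are stored. The sketch below is a minimal Python validator, assuming the codebook's allowed labels are exactly the values that appear in these responses (the real codebook may define more); the function name `validate_codes` and the `ytr_` id prefix check are illustrative, not part of the actual pipeline.

```python
import json

# Assumed codebook: label sets inferred from the responses shown here.
ALLOWED = {
    "responsibility": {"user", "company", "government", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"approval", "outrage", "fear", "resignation", "indifference"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Drop records whose id doesn't match the comment-id scheme.
        if not rec.get("id", "").startswith("ytr_"):
            continue
        # Keep a record only if every dimension holds an allowed label.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytr_example","responsibility":"user",'
       '"reasoning":"deontological","policy":"regulate","emotion":"approval"}]')
print(len(validate_codes(raw)))  # prints 1
```

Filtering rather than raising keeps a single malformed record from discarding the whole batch; rejected records can be re-queued for recoding.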