Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
False Dichotomy [ fawls dahy-kot-uh-mee ] noun: a logical fallacy in which a spectrum of possible options is misrepresented as an either-or choice between two mutually exclusive things.

Aurora, and other new-wave, AI-centric corporate hustlers, are using the oldest trick in the book: the False Dichotomy ruse. Simply put, Aurora is essentially saying either we have AI self-automated semis OR there is no future for the semi-haulage industry - so basically an out 'n out LIE. The reality and truth is that long-haul haulage, as an industry, has multiple problems AND multiple solutions:

Problems:
* Aging driver pool: long-haul semi driving suffers from many very experienced drivers getting old, with few replacements who have enough experience and the requisite driving skills to replace the aging driver pool in the U.S.A. - and this problem is visible globally.
* Not an attractive career for youth: it is not seen as attractive to young people in the 20-30 age bracket, so it is harder for long-haul trucking firms to replace older drivers who retire.

AI-Controlled Semis Are a Partial Solution (at Best)
* AI-based, self-automated semis should be highly regulated - NOT given as a blanket replacement for human drivers (which they aren't). At best, laws need to be implemented (mostly safety-based) to ensure that AI-controlled semis are ONLY ever used as a supplement to fill human driver shortfalls.

(One) Long Term: Hybrid Supplementary Fleet Makeup
As a way forward, ONLY allow semis like those developed and produced by Aurora a supplementary role in the U.S. trucking fleet. They should ONLY be allowed in areas (i.e. across multiple counties within a State, and interconnecting States) where recruitment of experienced drivers is at a proven (i.e. with govt statistics) chronically low level, AND where all efforts to recruit suitable, experienced drivers are failing to bolster existing driver numbers.
youtube AI Jobs 2025-06-06T23:3…
Coding Result
Responsibility: company
Reasoning: deontological
Policy: unclear
Emotion: outrage
Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzTx6QwOKZxDTFb_mN4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgyneOEKZcJFP8bXkul4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy-ajnBNvgqboTWBHh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxO5VgDFGne6s9miy14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyCwtTf49FdyjMhoTl4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgyoQqxe3oCQPHiV1Jh4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxB5BCVIsKxfL5PATZ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzR_eT8BgW1qLwTD2d4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzDACOG-hFNodFKQ1h4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxGaWUchCsQ63M-bt54AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"}
]
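The coded dimensions shown above can be recovered from the raw batch response by parsing the JSON and looking up the comment's id. A minimal sketch (the `code_for` helper and the assumption that rows are keyed by the `id` field are illustrative, not part of the tool itself; the sample uses the first two rows of the response above):

```python
import json

# Two rows copied from the raw LLM response above.
raw = '''[
  {"id": "ytc_UgzTx6QwOKZxDTFb_mN4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgyneOEKZcJFP8bXkul4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]'''

# The four coded dimensions from the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def code_for(comment_id: str, raw_json: str) -> dict:
    """Return the coded dimensions for one comment id from a raw batch response."""
    rows = json.loads(raw_json)
    by_id = {row["id"]: row for row in rows}
    return {dim: by_id[comment_id][dim] for dim in DIMENSIONS}

print(code_for("ytc_UgzTx6QwOKZxDTFb_mN4AaABAg", raw))
# → {'responsibility': 'company', 'reasoning': 'deontological', 'policy': 'unclear', 'emotion': 'outrage'}
```

This reproduces the Responsibility/Reasoning/Policy/Emotion row for the comment coded above, and fails loudly (KeyError) if the model's response omits an id or a dimension.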