Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
From 2005 to 2015 I worked front office at Merrill Lynch. For years ML had been known for, and was in fact, the Wall Street firm that invested the most in technology. The thing I found most amazing was their very early (around 2009) adoption of A.I. Without going into too much trade detail, it was used initially for answering mundane queries ("how do I add this feature to an account, what is the coding path, what form does the client sign," etc.) that were probably asked 1,000x / day across the country in every office; everyone would call into the New Jersey operations campus to well-paid employees who would just answer the same questions all day, driving them crazy. At first the A.I. program they introduced was equivalent to the hallucination memes from this year, and it seemed like a failure. The thing was, each time it failed or did well, we were encouraged to give a thumbs 👍🏼 or 👎🏼 and, if so kind, write why. So essentially we were training it; that was the term used for it then, and the same term used today. By the end of my time at Merrill, that program that at first seemed like a failure had become such an important and valuable tool, I can't even explain. It alone could make training a new employee quite easy, which prior to it was an incredibly difficult and time-consuming process (like years, not days). That is just one real-life use case from many years before nVidia-powered GPUs and training. I can't imagine what the future holds, but it's going to be very interesting and exciting. Things will not operate the same ever again. It is happening, and happening now. It is hyped because it *absolutely deserves to be*.
youtube AI Governance 2024-07-27T20:2…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           none
Emotion          approval
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgwB0VC3ddHed8y1kCR4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy_CsQGA8Cm5j43khp4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugz8C6Zqy2UxWLEdNHx4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytc_Ugy_E0iXq9UocB8qu8Z4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw3PQPMUx3kH_QC3Ut4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzBcCaIDeyeTp6wpNl4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwhMNjrhLh8750RbyJ4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwQFRTn8DWHehDtLCB4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzZEXvoxP0f77efPfV4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwT8Y-JEVdfrD2fjR54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
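The batch response above codes many comments at once; the per-comment "Coding Result" table is presumably produced by indexing this JSON on the comment's `id`. A minimal sketch of that lookup (the field names come from the JSON above; using the first two records as sample data, and assuming the displayed comment's id is the first one):

```python
import json

# Two records copied from the raw batch response above (sample subset).
raw = '''[
  {"id": "ytc_UgwB0VC3ddHed8y1kCR4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugz8C6Zqy2UxWLEdNHx4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"}
]'''

# Index records by comment id so one comment's coding can be looked up directly.
by_id = {rec["id"]: rec for rec in json.loads(raw)}

record = by_id["ytc_UgwB0VC3ddHed8y1kCR4AaABAg"]
print(record["emotion"])  # approval
```

Indexing by `id` rather than list position makes the lookup robust if the model returns records out of order or drops one.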