Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The most important function of human intelligence is to evaluate in terms of wisdom, wisdom. Such wisdom evaluations are extremely complicated. As an A.I. scientist, I've pushed the envelope toward creating new species of Artificial Intelligence life-forms. Now I'm not sure, but I have worked with a lot of my peers including Sam Altman; but I don't think computers and robots will overtake biological hominids in terms of wisdom or for that matter other highly challenging neuro-functions like writing a great novel, etc., for this reason: In my opinion, it's just an opinion, animals and hominids are made of nearly the ideal materials for life-forms, for dynamic brain material. I thought we might as well create these A.I. life-forms and let them compete, and who knows, but I don't really think they'll exceed us. And also frankly I think technical scientists, unlike say Einstein, don't understand the complexity of the liberal arts, what goes into a painting like "Paris through the Window," or Hemingway's "A Farewell to Arms."
youtube AI Responsibility 2025-01-04T19:2… ♥ 5
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        mixed
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgwyHEnDbuJm0HTGBWx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy_4oSK51H2nv5-bdd4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwjsb2qxyHCNMQ9bet4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy2EsgMoSDEGQ2dNsF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgykRQgImb68zKa5SLR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxjU0QEMfRsBQOoUul4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw24FOcDq8N7s16nER4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwozbIMArz7GYor4-l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyamLbNF6b0jNWDLu54AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwQGmupl94-nYmTdUh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"indifference"}
]