Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
One big misconception is to think that AI would become major threat WHEN it becomes conscious! That's not necessary at all! What's necessary is to become very able to do things currently done by many people, and to be cheap, which would replace people in a very short time - ie time that would challenge everyone's ability to adapt. And the thing is it already IS more able and cheaper in many areas (and is improving fast). Yes in some areas it's still not trained, or still gets few % of mistakes (or hallucinates when there's not enough data to be trained well), but all of these are very similar to human shortcomings (in addition to all the other human shortcomings we have). So, the clock is ticking already!
Source: youtube · AI Jobs · 2025-06-07T13:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_UgzlfcEvvjpO5On9j994AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "approval"},
  {"id": "ytc_Ugx52w3Xj7CJqhTKbwt4AaABAg", "responsibility": "developer", "reasoning": "deontological",    "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_Ugx8F1LOMJlEvoB-RDF4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "ban",       "emotion": "outrage"},
  {"id": "ytc_UgwQPyPrhmAUkvU56SF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear",   "emotion": "fear"},
  {"id": "ytc_UgxwJDR_uSV6rQI_yWl4AaABAg", "responsibility": "company",   "reasoning": "contractualist",   "policy": "liability", "emotion": "outrage"}
]
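The raw response is a JSON array with one record per coded comment, keyed by comment id. A minimal sketch of how such a response can be parsed and a single comment's codes looked up (the id `ytc_UgwQPyPrhmAUkvU56SF4AaABAg` is the record matching the Coding Result above; the variable names are illustrative, not part of any tool's API):

```python
import json

# The raw LLM response exactly as shown above: an array of per-comment codes.
raw = (
    '[ {"id":"ytc_UgzlfcEvvjpO5On9j994AaABAg","responsibility":"none",'
    '"reasoning":"consequentialist","policy":"none","emotion":"approval"},'
    ' {"id":"ytc_Ugx52w3Xj7CJqhTKbwt4AaABAg","responsibility":"developer",'
    '"reasoning":"deontological","policy":"regulate","emotion":"outrage"},'
    ' {"id":"ytc_Ugx8F1LOMJlEvoB-RDF4AaABAg","responsibility":"none",'
    '"reasoning":"unclear","policy":"ban","emotion":"outrage"},'
    ' {"id":"ytc_UgwQPyPrhmAUkvU56SF4AaABAg","responsibility":"ai_itself",'
    '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"},'
    ' {"id":"ytc_UgxwJDR_uSV6rQI_yWl4AaABAg","responsibility":"company",'
    '"reasoning":"contractualist","policy":"liability","emotion":"outrage"} ]'
)

records = json.loads(raw)

# Index the batch by comment id so one comment's codes can be retrieved directly.
by_id = {rec["id"]: rec for rec in records}

codes = by_id["ytc_UgwQPyPrhmAUkvU56SF4AaABAg"]
print(codes["responsibility"], codes["reasoning"], codes["policy"], codes["emotion"])
# ai_itself consequentialist unclear fear
```

Because the model returns codes for a whole batch in one array, indexing by `id` is what lets the tool attach the right record to each displayed comment.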