Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Does not matter, someone will continue so everyone will continue. We will not be able to control ASI and at some point it will get to that stage, then if its now or in 100 years. 100% will be automated at some point, at least when the automation starts to produce faster than we can multiply. Yes we can have filler jobs just to make sure the ASI ain't doing anything but if it would be doing something we would not understand so it would be us thinking we have control. Will it be chaos while we move towards it, probably because humans won't evolve just because the AI evolves. Like we might even be the ones killing ourselves before ASI can save us from ourself. For fun, Imagine this scenario "China develops ASI first and to be able to fully benefit from it you need to install a chip that is connected to your neuron network for access to improvements". How many people in USA would install that chip? How many would rather start a war to capture the ASI from China? Like war is the only path humanity is able to take so if the ASI ain't fast enough to save us then we will probably destroy the entire planet just to stop the chance of someone else gaining control of it.
youtube 2025-09-17T14:0…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          resignation
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_UgzeZGw3aZIW3df-TCZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgxLyGofstFxbgnITC54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgwupyS4cENQRuelU0x4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgzSzrc0AjRslU7yafF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgzHRrSpMiDsiNxKyRl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxUp_W4OaUHQ1DX9Dh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwnStDeWPK-BYyvlZB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"}, {"id":"ytc_Ugz5paIl3sWPgMtD0St4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugx4wX4dyZfAbUcIL5Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxJWFt2zAJTJsu5Vid4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"} ]