Raw LLM Responses

Inspect the exact model output behind each coded comment.

Comment
Im sure theres way too much overhyping of AI(they need the cash inflow, urgently)... that said i think most if not all of them are doing a similar betting calculus in their heads: if its possible, if it could be soon, the only shot they have to influence or own any of it is betting it will. Like not doing it is the sure way to be out of it... And the race to the bottom i feel comes from 'inevibilatility': now that the cat is out of the bag that a race towards it will happen regardless. If they go slower or stop because of the dangers others wont - so they all hit the acelerator instead. Gotta keep in mind that amongst the possible future scenarios theres positive ones- including of a big winner, from AI imposing one super state or making someone a king. Since the chances of avoiding doom seem to be shot (because the race wont stop) they all hope for some shot of being the winner- also because for most another winner they dislike would be as bad as AI wiping us all. Like China for example... human biggotry amongst ourselves knows no limits and you bet theres way too many of us that would quickly push a 'destroy the world or eliminate your enemies' on a pinch... FGS- we kinda did that already with nuclear bombs... they were first tested under a 'non zero chance' it could start a chain reaction burning the entire atmosphere. A nuke that not only could not destroy us all but made someone the most rich? They will blown as many of those nukes as they can
Source: youtube · "AI Moral Status" · 2025-10-31T00:0…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           industry_self
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgyjyMyY_O4NgeZAJjB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgxmTHu1yRq14lt75Dd4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyNWZ7nhSCvpXJJsDV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxT7RhFToA3B5KS5el4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgwBfFsr_6_n16hJfed4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"regulate","emotion":"approval"}, {"id":"ytc_Ugxm7-V2cw080X9sQZx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgyCiYAK2ms5Q0A5qhx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"}, {"id":"ytc_UgxBMJKI2GG-3mj8Qi54AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgyKp43VLPuelxIF9Kx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxKjYNeaaZSElY40Qx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"} ]