Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@vallab19: Requiring AI safety/alignment is a pre-emptive move to avoid our extinction. People are racing ahead to create an autonomous intelligence that is vastly smarter, faster and more capable than all humans. If they succeed and it isn't completely aligned the future belongs to it/them and not us. Everything changes if we are no longer the smartest and most powerful species. And it is almost certainly a bad long-term outcome for us.
youtube AI Governance 2023-10-16T21:4…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytr_UgwC20gm87M2ffer2wV4AaABAg.9schZe6X9F49tB4uFYN8V_", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytr_UgwC20gm87M2ffer2wV4AaABAg.9schZe6X9F49vwv6ORNp2T", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytr_UgzSYTH9ZRvLgUJPgJR4AaABAg.AJaFFGNPHsmAOT6itBK3en", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgwFpx4PE6BN2q36PFd4AaABAg.AF4yudZ4VJuAF4zKPwfrNJ", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytr_UgyUYCaKwKXvbhGycU94AaABAg.A08XRE6V5liA3mOq-kFZaz", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgwYYoEJd9ZVJkrvUat4AaABAg.9qb7mMaOuAJ9wRfmRCJFKv", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytr_UgwCVPuMzQynbXjFn414AaABAg.9PZcvAD7ibv9aMSR9pI-Y0", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytr_Ugz6ZJ5MDNRfyPUceGR4AaABAg.9txx8u1eIxN9ty2pntlQCt", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytr_UgyH4gKJoPEKl5rYUc94AaABAg.ANJhDEUFpfwAT9nGNGgTOI", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytr_UgwpWC8BgqdL7UL9OP54AaABAg.ANGjclFHK9qANINZE17pjD", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
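The raw response is a batched JSON array, one record per comment, and the coding shown above corresponds to the record whose `id` matches this comment. A minimal sketch of how such a response could be parsed and validated (the `parse_codings` helper and the two-record truncation of the payload are illustrative, not part of the actual pipeline):

```python
import json

# Raw LLM response as captured above (truncated to the first two records for brevity).
raw_response = '''[
  {"id": "ytr_UgwC20gm87M2ffer2wV4AaABAg.9schZe6X9F49tB4uFYN8V_",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "approval"},
  {"id": "ytr_UgwC20gm87M2ffer2wV4AaABAg.9schZe6X9F49vwv6ORNp2T",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "liability", "emotion": "fear"}
]'''

# Every record is expected to carry exactly these coding dimensions plus an id.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(text):
    """Parse a batched coding response and index the records by comment id."""
    records = json.loads(text)
    for rec in records:
        missing = EXPECTED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} is missing {missing}")
    return {rec["id"]: rec for rec in records}

codings = parse_codings(raw_response)
rec = codings["ytr_UgwC20gm87M2ffer2wV4AaABAg.9schZe6X9F49vwv6ORNp2T"]
print(rec["policy"], rec["emotion"])  # → liability fear
```

Looking up the second id reproduces the dimensions displayed in the Coding Result table (policy `liability`, emotion `fear`); a record with a missing dimension would be rejected rather than silently coded.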