Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I hold onto the thought that any conscious AI would have to work with humans if anything were to get better so it would make more sense for it to try its best to help humans so that both humans and the AI could improve. If the AI tries to separate itself from humanity, it's waging a war that both will lose over a long enough time, so why do it? Also, with how poorly we understand consciousness, it's hard to really know how close AI is to achieving it. Think of it like putting a timetable on how long it will take for humans to go through a wormhole. Since we have no idea what it takes to do that, how can we say how long it will be until then?
Source: youtube · AI Governance · 2023-07-07T13:4…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        contractualist
Policy           none
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
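For downstream processing, a coding result like this maps naturally onto a small record type. A minimal sketch in Python; the class and field names are illustrative, and the example values below simply restate the table and the other codings visible on this page:

from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    """One coded comment: the four dimensions plus the coding timestamp."""
    responsibility: str   # e.g. "ai_itself", "developer", "distributed", "none", "unclear"
    reasoning: str        # e.g. "contractualist", "consequentialist", "deontological", "virtue", "unclear"
    policy: str           # e.g. "none", "liability"
    emotion: str          # e.g. "mixed", "fear", "outrage", "approval", "indifference"
    coded_at: datetime

# The result shown above, as a record:
example = CodingResult(
    responsibility="ai_itself",
    reasoning="contractualist",
    policy="none",
    emotion="mixed",
    coded_at=datetime.fromisoformat("2026-04-26T23:09:12.988011"),
)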
Raw LLM Response
[ {"id":"ytc_UgzqgxZ7HiP7x38wdZx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxoTf6Hcato7N4VAo54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgynZ14iUsjUEpetFQp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwvdoFnj-XBd7WctJR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugx0yiZGEn9oVy-ODTt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgyGQmDx56efDm0_BuB4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgwcdExgNRzRgwM75dd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxC6vEu4EoflxRd3Ep4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxLPQndLN1-yghPScl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugy_OedzcuD_IUhcngF4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"none","emotion":"mixed"} ]