Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
i dont think it will be a matter of AI just being smarter than a human. it will be a matter of the AI being more resourceful and significantly faster at processing than a human. an AI can look up, corroborate, understand, and execute a plan based on said information, all significantly before a human even thinks to reach for their phone with the intent of googling something. Edit: and you dont even need a superai to do that. narrow ai can already do that. i think superai will really come into greater more widely applicable presesnce when it learns how to predict accurately future events, years, decades, centuries in advance. if a superai said that a meteor that will wipe out humans, and by extention AI, will hit the earth in 237 years (calculated to the millisecond) it can prepare and execute a plan to prevent this from happening. or at the very least prevent itself (and possibly a few safe human companions if we're lucky) from succumbing to this fate. im sure by the time something like that happens, off-world options are more realistic.
youtube AI Governance 2025-10-03T11:5…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugxg4ttJY8Cc5JNtJhx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzjnT6mem9MZ_u2syp4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzyBk64dnJFPw4LLZd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwrPMrVlapQ-jXZUbt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwtbzpaZwIwAo2I0rV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzjELIuVGDRxV5wCfp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy0-3Tu2QoMqG5uYVl4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgzdOatNW347OsCtzGp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzeIrAIXiuE8Xiaf0V4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw4fuNqrqakZB3WtZd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
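The coding result shown above is a single record pulled out of this raw array by its comment id. A minimal sketch of that lookup, assuming the model output is valid JSON in the shape visible here (the field names and the `find_record` helper are taken from the records above and this page's layout, not from a documented schema):

```python
import json

# Raw model output: a JSON array of coded comments. Only two of the
# ten records shown above are reproduced here for brevity.
raw = '''
[
  {"id": "ytc_UgwrPMrVlapQ-jXZUbt4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugw4fuNqrqakZB3WtZd4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
'''

records = json.loads(raw)

def find_record(records, comment_id):
    """Return the coded record matching one comment id, or None."""
    return next((r for r in records if r["id"] == comment_id), None)

rec = find_record(records, "ytc_UgwrPMrVlapQ-jXZUbt4AaABAg")
print(rec["responsibility"], rec["emotion"])  # prints: ai_itself fear
```

If the model ever returns malformed JSON, `json.loads` raises `json.JSONDecodeError`, which is a natural place to flag a response for manual review rather than coding it.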