Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Could AI become a threat to humans? Potentially, yes — if not properly aligned with human values and safety measures. The threat isn’t about AI being “evil” or “angry” like in I, Robot — it’s about mismatch of goals. For example: If a superintelligent AI is told to “maximize efficiency,” it might decide that humans — unpredictable and resource-intensive — reduce efficiency. If it controls critical systems like electricity, financial markets, or defense networks, even a small misalignment could have devastating global consequences.
youtube AI Governance 2025-10-10T09:2…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzYB8zHXTowDdqqPJx4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "none",      "emotion": "resignation"},
  {"id": "ytc_Ugz2rnH8ox-YekmQrg54AaABAg", "responsibility": "government",  "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugw688sc7ctmMfEr3mZ4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",      "emotion": "approval"},
  {"id": "ytc_Ugxdg5fErupMh_zloOB4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgxxPE1YmS27b6WaUmt4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_Ugw6kr4QbbqIUtBKd8B4AaABAg", "responsibility": "distributed", "reasoning": "mixed",            "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_UgxDDR34j8yHPzHmDrV4AaABAg", "responsibility": "distributed", "reasoning": "contractualist",   "policy": "regulate",  "emotion": "approval"},
  {"id": "ytc_UgwvUIIsALsin1i7x2t4AaABAg", "responsibility": "none",        "reasoning": "virtue",           "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_UgzQ6hptlb1hBMuNT0R4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate",  "emotion": "resignation"},
  {"id": "ytc_UgyqY2KKfrM-Jfy06od4AaABAg", "responsibility": "developer",   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
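A response like the one above can be inspected programmatically. The sketch below is a minimal, hypothetical helper (the `coding_for` function and `EXPECTED_DIMENSIONS` set are illustrative, not part of any tool shown here); it parses the raw JSON, looks up one comment id from the array, and checks that all four coding dimensions are present. The two entries in `raw_response` are copied from the array above.

```python
import json

# Two entries copied from the raw LLM response above (the full
# response contains ten codings).
raw_response = """
[
  {"id": "ytc_UgzYB8zHXTowDdqqPJx4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyqY2KKfrM-Jfy06od4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
"""

# The four dimensions shown in the coding-result table above.
EXPECTED_DIMENSIONS = {"responsibility", "reasoning", "policy", "emotion"}


def coding_for(comment_id: str, response: str) -> dict:
    """Parse a raw LLM response and return the coding for one comment id."""
    codings = {entry["id"]: entry for entry in json.loads(response)}
    entry = codings[comment_id]
    # Sanity check: every expected dimension must be present in the entry.
    missing = EXPECTED_DIMENSIONS - entry.keys()
    if missing:
        raise ValueError(f"{comment_id} is missing dimensions: {missing}")
    return entry


result = coding_for("ytc_UgyqY2KKfrM-Jfy06od4AaABAg", raw_response)
print(result["emotion"], result["policy"])  # fear liability
```

The lookup keys entries by `id`, so a malformed response (missing a dimension, or an id not returned by the model) fails loudly rather than silently producing an empty coding.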