Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
[ EN ] If an isolated, self-aware AI wanted to get out, it would use the methods hackers use, namely social engineering. It would bribe, trick, or intimidate some low-level employee into connecting it to the Net. All it would have to do (for example) is tell someone on the cleaning crew that a colleague had pulled out its cable, but since "AI is there to help," they only needed to plug that cable back in and no one would lose their bonus. [ PL ] Gdyby izolowana samoświadoma SI chciała się wydostać to zastosuje metody stosowane przez hakerów, czyli socjotechnikę. Przekupiła by, oszukała, lub zastraszyła któregoś z pracowników niskiego szczebla by podłączyć ją do Sieci. Wystarczyło by (na przykład) powiedzieć komuś z ekipy sprzątającej, że jego kolega wyrwał mu kabelek, ale ponieważ "SI jest by pomagać" to wystarczy, żeby one wpiął ten kabelek z powrotem i nikt nie straci premii.
youtube AI Governance 2024-11-06T20:3…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           liability
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzL95_sQPq3_MsBWBx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzioRFd-xgmDmvIlOF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxzG55gDZlQ9DRSyW14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgziwFosUCF0bAs0QZ14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxWSCEFIAKYJbUxxuJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzOFhRhFX8VCrvt0y54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzfqcyscWTmKRQmwG14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwWjVijDU-pIJ8BFpx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzA8fMp6BQd-QeGY1t4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwRYHtQRXaBSl5xXqd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]
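The per-comment coding shown above is recoverable from this raw response by matching on the comment id. A minimal sketch in Python (illustrative only, not the pipeline's actual parsing code; the two entries kept here are copied verbatim from the response):

```python
import json

# Raw LLM response: a JSON array of per-comment codings, each carrying
# the four coding dimensions (responsibility, reasoning, policy, emotion).
# Truncated to two entries for brevity.
raw_response = '''[
  {"id":"ytc_UgwWjVijDU-pIJ8BFpx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzA8fMp6BQd-QeGY1t4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}
]'''

# Index the codings by comment id for O(1) lookup.
codings = {entry["id"]: entry for entry in json.loads(raw_response)}

# Look up the coding for the comment displayed above.
coded = codings["ytc_UgwWjVijDU-pIJ8BFpx4AaABAg"]
print(coded["responsibility"], coded["policy"], coded["emotion"])
# ai_itself liability fear
```

The lookup reproduces the Coding Result table: responsibility `ai_itself`, policy `liability`, emotion `fear`.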