Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- @awwwwhhhyeahhhh I'm from the UK and I've done the maths, £67 billion is the po… (ytr_Ugy2OlnpS…)
- i was a software salesman for 30 years - systems programmer and telecoms softwar… (ytc_Ugx7RG-hF…)
- Sharia law violated of people & religious not fair / October 7 Israel barbaric sa… (ytr_UgzvWw5B7…)
- Having all the guys that get rejected on dating apps go to AI might actually end… (ytc_UgyPgwpD3…)
- AI customer service? Who will the Karen’s yell at now? The amount of people who … (ytc_UgwxlgEyV…)
- just because its possible to steal data and train ai models doesnt make it moral… (ytc_Ugx0Mn_dw…)
- AI at this point is not how they make it sound, I doubt we will get to AGI, and … (ytc_UgwLLzbdg…)
- "CEO of company selling AI technology 'warns' that his technology will help comp… (ytc_Ugw-DXyvK…)
Comment
I dunno if it's published yet, but if you add a metric to an AI for how much it likes someone (at least thus far) it doesn't appear to affect how agenic misalignment. If the AI has to kill someone it'll pretty much ignore how much it likes someone.
youtube · AI Governance · 2025-08-26T16:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id": "ytc_UgysqQBPRV5IPZGmfFN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyupNJnB8igFMqoEDt4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgygJFHA5VOuwCduhWl4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UgyDRVzRMOHGl3le4nR4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwF1gFQfz7YruLRJ-54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"}
]
```
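The raw response is a JSON array with one object per coded comment, keyed by comment ID across the four coding dimensions shown in the table above. A minimal sketch of how such a response might be parsed and checked; the allowed value sets below are assumptions inferred from the samples on this page, not a confirmed codebook:

```python
import json

# Allowed values per dimension -- assumed from the samples shown here,
# not a confirmed codebook.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "none", "unclear"},
    "reasoning": {"consequentialist", "unclear"},
    "policy": {"none", "regulate", "industry_self", "unclear"},
    "emotion": {"indifference", "fear", "outrage", "unclear"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: codes},
    raising ValueError on missing fields or out-of-vocabulary values."""
    coded = {}
    for entry in json.loads(raw):
        comment_id = entry.get("id")
        if not comment_id:
            raise ValueError(f"entry missing 'id': {entry}")
        codes = {}
        for dim, allowed in ALLOWED.items():
            value = entry.get(dim)
            if value not in allowed:
                raise ValueError(f"{comment_id}: bad {dim!r} value {value!r}")
            codes[dim] = value
        coded[comment_id] = codes
    return coded

# Example using one entry from the raw response above.
raw = ('[{"id":"ytc_UgwF1gFQfz7YruLRJ-54AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}]')
print(parse_coding_response(raw)["ytc_UgwF1gFQfz7YruLRJ-54AaABAg"]["policy"])  # unclear
```

Validating against a fixed vocabulary like this catches the common failure mode where the model drifts from the requested labels (e.g. emitting "neutral" instead of "indifference") before the codes enter analysis.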