Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "AI might be best substitute for news anchorman. Commentaries will stay as opinio…" (ytc_UgzrTDeOc…)
- "The funny thing is about Zenos paradox is that I thought of it on my own, asked …" (ytc_Ugzsq3DGn…)
- "Good grief, this all makes AI bros happy, doom is an even better selling point t…" (ytc_Ugy7RvO_h…)
- "If a human driver costs roughly $600 on a 1,000 mile run and the load is worth $…" (ytc_UgwwEst6U…)
- "AI only wants consent? That alone makes it human, no? No other animals know how…" (ytc_UgzOEcdTU…)
- "As far as consciousness goes, I feel we ask those questions because 1. We hardly…" (ytr_UgymFehxV…)
- "Will AI be in doubt on anything? We as humans sometimes can’t decide on anything…" (ytc_UgyI7AM6-…)
- "i am learning AI + ML + DATA SCIENCE all in one course from Indian YouTuber A TE…" (ytc_UgynND-3K…)
Comment
All of this is scaremongering. Where's all this AI which has autonomy to learn and correct its own functions so that it can make choices? All of these reports are from tests where the researchers are setting AI up to do all they claim it is doing! They tell it to defend itself, and it will use all the LLM's data to find ways to do it. It's nothing scary; you do that with an employee: ask them to keep an eye on their peers, and they are likely to spy on them if they think they themselves are in danger. All of this is calculated, nothing special about it. No real danger here, unless you program them to be dangerous. Same with people, e.g. terrorists are all brainwashed into thinking they serve a higher cause!

youtube · AI Harm Incident · 2025-09-15T10:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
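The four dimensions in the table above recur for every record in the raw batch below. A minimal validation sketch follows; note that the allowed-value sets are assumptions inferred only from the labels visible on this page, not from a published codebook:

```python
# Hypothetical label sets, inferred from the values that appear on this page.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "ban", "unclear"},
    "emotion": {"indifference", "approval", "resignation", "fear", "mixed", "outrage"},
}

def validate(record: dict) -> list:
    """Return the dimension names whose value falls outside the allowed set."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

record = {"id": "ytc_Ugx9cAZTQCt05wiFScp4AaABAg",
          "responsibility": "developer", "reasoning": "consequentialist",
          "policy": "none", "emotion": "indifference"}
print(validate(record))  # an empty list means every dimension is in range
```

A record with a missing or unknown label would come back with that dimension name listed, which makes malformed LLM output easy to flag before it reaches the table.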
Raw LLM Response
```json
[
  {"id":"ytc_Ugx9cAZTQCt05wiFScp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwUxv9lHgmXFchj-bd4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugxbe99fCjHqkMRIeLx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwbmlzFuap7JRtLaOl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzodKyQJCMbwlAiTpx4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugx3M-M4qAXjvqTypDt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwUlu2mvId78ITzMJh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgydxtkqLWyHebWlhnt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwvnmWlvjPC_ILpFoh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz6XvT_Uh1gt4MVaXp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
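The raw response is a JSON array of per-comment codes, so the "look up by comment ID" step above amounts to indexing the array by its `id` field. A minimal sketch, assuming the array is stored verbatim (only two of the ten entries are reproduced here for brevity):

```python
import json

# Two entries copied from the raw LLM response above.
raw = '''[
  {"id": "ytc_Ugx9cAZTQCt05wiFScp4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgydxtkqLWyHebWlhnt4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]'''

codes = json.loads(raw)
by_id = {row["id"]: row for row in codes}  # index the batch by comment ID

print(by_id["ytc_UgydxtkqLWyHebWlhnt4AaABAg"]["emotion"])  # outrage
```

Building the dictionary once makes every subsequent lookup constant-time, which matters when the same batch is inspected repeatedly from the sample list.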