Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
AI could go terribly wrong. When humans are born they are in a neutral behavior and learns though life the good an bad behaviors. Ai must have reasoning to be human like with good behaviors otherwise ot could see other potentials and could also turn into a mess and that could be a problem to unlearn. We are not perfect and AI might want to attack that thinking its superior and will learn from its own mistakes but not in human reasoning. That will be most difficult. AI will think with logic and not feeling.
youtube
AI Governance
2024-02-10T03:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugx1o3edqVy9vlNkWFF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgwFkJ-BOM7KWbfx0Q94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzxxXT3Pz2LjwcD0Zp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxbK94BgVK9K1011vd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxnkgYwU_zvxyONVlZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxRmpWszS5aEX79ijF4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"disapproval"},
{"id":"ytc_UgzAz981Y5JQjrl4PW94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz3RCxoiXZwPaYMYOB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx3duYAeKkJUb5jpAB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwJ2EHa2ZZ2FTaIyNJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"approval"}
]
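Since the raw response is a plain JSON array of per-comment objects, a coding can be recovered and looked up by comment ID with standard parsing. A minimal sketch, assuming the schema shown above (one object per comment with `id`, `responsibility`, `reasoning`, `policy`, and `emotion` keys); the `raw` string here is a two-record excerpt in the same shape, not the full response:

```python
import json

# Excerpt matching the shape of the raw LLM response above.
raw = """[
{"id":"ytc_Ugx3duYAeKkJUb5jpAB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwJ2EHa2ZZ2FTaIyNJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"approval"}
]"""

records = json.loads(raw)

# Index by comment ID so any coded comment can be inspected directly.
by_id = {r["id"]: r for r in records}

coding = by_id["ytc_Ugx3duYAeKkJUb5jpAB4AaABAg"]
print(coding["responsibility"], coding["policy"])  # developer regulate
```

Indexing by `id` rather than list position matters because the model is not guaranteed to return records in the order the comments were sent.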