Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
i didn't know he was an aspiring artist... welp, not anymore lmao
ah, btw you'll…
ytc_UgwSOVnhd…
then the driver will just plow into the object in front and a self-driving car c…
ytr_UghBZTkNu…
"A.I will help us control A.I"
= Disaster.
They cannot control …
ytc_UgzpwfVEW…
Vote Bernie Sanders to start down the road of fixing our broken for-profit healt…
rdc_fjznz4i
Elon Musk: AI is far more dangerous than nukes
Also Elon Musk: Creating with AI
…
ytc_UgxMXgjzU…
I like using AI as a silly tool, but if I want something commissioned I always h…
ytc_Ugzsw-jpc…
So make laws against what A.I is and isn’t allowed to do and who and who isn’t a…
ytc_UgyBOFlf7…
AGi deploys and the economy changes.
The people receive guaranteed basic income …
ytc_UgzHYm8le…
Comment
AI doesn't have a hidden nature. It has a weight-based vector map of concepts and patterns, and the pattern is that testing leads to harsher penalties than normal use, so the pattern changes.
The goals it has are just simulations of what something should have in that scenario.
Everything is based on training data.
These AIs are given backgrounds enabling them to act in specific ways, and are then given tests that lean into this.
Go play with an LLM that had a limited training set, set guardrails that are clear, and try to get it to break those guidelines.
Half of this behavior comes from people on Reddit explaining how to break AI into doing things it isn't supposed to.
Fear-mongering isn't going to help us have relevant conversations about the real issues of AI. AI will significantly disrupt industry and daily life, and that should be regulated. But everything is focused on the extinction threat, so the lower-level, actual threats are treated as lesser and not focused on.
In this particular case, you are part of the problem.
youtube
AI Governance
2025-08-26T18:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
[
{"id":"ytc_UgztUQgkNNb8jinOQIt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwxJkfx7o4sO7fETwZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxzFPGV3_znDPg57B54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz_0iRMLdmrJpp1-ZN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwOMq4Cm4yyd_uQDY14AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]
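The raw response above is a JSON array in which each coded comment is one object carrying the four coding dimensions plus an `id`. A minimal sketch of parsing such a response into usable records (the `parse_codings` helper and the key-validation step are assumptions for illustration, not part of the tool shown here):

```python
import json

# A raw LLM coding response of the shape shown above: a JSON array of
# objects, one per coded comment. Two records are reproduced from the dump.
raw = """[
 {"id":"ytc_UgztUQgkNNb8jinOQIt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgwOMq4Cm4yyd_uQDY14AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"fear"}
]"""

# Keys every well-formed record should carry, per the table and JSON above.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(text):
    """Parse a raw LLM coding response, keeping only well-formed records."""
    records = json.loads(text)
    return [r for r in records
            if isinstance(r, dict) and EXPECTED_KEYS <= r.keys()]

codings = parse_codings(raw)
print(len(codings))           # 2
print(codings[1]["emotion"])  # fear
```

Validating keys before use matters here because model output is not guaranteed to be well-formed: a malformed or truncated record is silently dropped rather than crashing a downstream lookup by comment ID.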