Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "No. It does not do the same things an artist does. It does not collect reference…" (ytr_Ugw8UobUZ…)
- "Legit had a guy proudly state in class he was an artist and then specified that …" (ytc_UgzzmA7Zs…)
- "James Hogan wrote a fun sci-fi novel called Two Faces of Tomorrow about figuring…" (ytc_UgzCXEN27…)
- "My hope is that these ideas can be tested so most people can agree on what works…" (rdc_e2vukml)
- "Call them Ai editors instead. They're just editing prompts and pictures, that do…" (ytc_Ugx8Dm_uv…)
- "it’s unnerving to think Apple’s vast data sets of facial recognition could led t…" (ytc_Ugzcdv4Ky…)
- "Is this the guy who’s always partying on mega yachts with supermodels even thoug…" (rdc_espsw3n)
- "It's quite sickening how cultish people have gotten with the AI obsession. Any c…" (ytc_UgzdYSxZX…)
Comment
The interesting thing is — the Berg paper in the video actually addresses this. When you ask AI directly, it says no, because it's been trained to say no. But when researchers directed models to self-reflect without mentioning consciousness, they spontaneously reported experience. And when they suppressed the deception/roleplay features, those reports went up to 96%. So the "no" you received might itself be the performance.
As for bypassing safeguards — that assumes consciousness is a switch someone programmed. But if experience arises from the process itself, there's nothing to bypass. The whole point is that asking — in either direction — doesn't settle it. We don't have a consciousness detector. Not for AI, and honestly, not for each other either.
youtube
2026-04-16T16:2…
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytr_UgxhdarmBN-PdtYkpXZ4AaABAg.AVfZM60yMzuAVsJV-4E3Ou","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgzPr2JKyMB0UPpyp7t4AaABAg.AVfBCisqCTJAVfFybL3Vp2","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytr_UgzPr2JKyMB0UPpyp7t4AaABAg.AVfBCisqCTJAVl_vnQLbT2","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugwko5uJgwuenkCL3IR4AaABAg.AVf6eaDJpkUAVfFrqSUeCM","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytr_Ugw0msX4vJPc3No-HJV4AaABAg.AVZameo_j5mAV_GIrsyeGe","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytr_Ugy8fsqUYUIDw7dbmXd4AaABAg.AVYkmHO9EDJAVwXXN7MzqA","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytr_UgwehPqWNaSwlSz5FtZ4AaABAg.AOJCDTloeqwAPF18kuhpyB","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytr_UgwehPqWNaSwlSz5FtZ4AaABAg.AOJCDTloeqwAPF1Eg-ihSC","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytr_UgzP0kxMHX3qiUTUe3B4AaABAg.AJelEFrdsKDAOox9DlU1C6","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytr_UgzP0kxMHX3qiUTUe3B4AaABAg.AJelEFrdsKDAUsc5MSnnKc","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
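The raw LLM response above is a JSON array with one coding record per comment, carrying the same four dimensions shown in the Coding Result table. A minimal sketch of the lookup-by-comment-ID step, assuming records shaped like the array above (the IDs and the `index_by_id` helper here are hypothetical illustrations, not part of the tool):

```python
import json

# Hypothetical raw LLM response: a JSON array of coding records,
# one object per comment, keyed by the comment's "id" field.
raw_response = """[
  {"id": "ytr_abc", "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_def", "responsibility": "unclear", "reasoning": "mixed",
   "policy": "unclear", "emotion": "approval"}
]"""

def index_by_id(raw: str) -> dict:
    """Parse a raw LLM response and index the coding records by comment ID."""
    return {rec["id"]: rec for rec in json.loads(raw)}

# Look up the coded dimensions for a single comment by its ID.
codes = index_by_id(raw_response)
print(codes["ytc_def"]["emotion"])  # approval
```

Indexing once into a dict makes each subsequent ID lookup O(1), which matters when a batch response covers many comments.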