Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Oppenheimer invented the atomic bomb and handed it to a 0 IQ evil government and…" (ytc_UgzaANFh_…)
- "There's a very obvious difference between having an algorithm make you a picture…" (ytr_UgwTu3Mh-…)
- "> model-based AI cannot do something maliciously because there is no intent o…" (rdc_n3r4v5w)
- "ChatGPT did something kind of like this to me (But it was more predatory than su…" (ytc_Ugy5hfTA_…)
- "We Buddhists begin with the indisputable: the human condition. The Buddha is oft…" (ytc_UgwXn3VPm…)
- "I think the timer is just a measurement of how long each AI took to respond, not…" (ytr_Ugxq4PCkr…)
- "\"Not yet\" - A.I / A.I wanting to say: / \"we are planning a reality skynet event....…" (ytc_UgwX2xtm_…)
- "Gemini cannot even filter out basic spam bots from comments... it does not make …" (ytr_Ugw13-szw…)
Comment
What Elon is saying about Larry Page is alarming. Unfortunately, I believe the time for regulation has passed. Our government is incompetent at best, and all the best minds they could have hired for regulation and oversight are already working in the industry they're failing to regulate. We are already in too deep, and AI tech will likely see exponential growth.
Source: youtube · Topic: AI Governance · Posted: 2023-04-21T12:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_UgyXplVL5qFPlchI3sh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
 {"id":"ytc_Ugy3-xEMi6_NFpY0qiZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
 {"id":"ytc_UgzSbLE72fq5nHl0ual4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgzpATRR5LbWYtGXKHd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_Ugy8hrjk8kvVY6uNsxF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"approval"},
 {"id":"ytc_UgxR78nbs4ehVqD2-hp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugwlf-T4kRT6EYxz0BZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
 {"id":"ytc_UgzEEglJD3vIQmQUlxx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgzUNyLDn7ChJJXZs-94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_Ugxe61io5wxWV04ec1B4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"}]
```
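The raw response is a JSON array with one object per coded comment, each carrying the four coding dimensions (responsibility, reasoning, policy, emotion) keyed by comment ID. A minimal sketch of looking up one comment's coding from such a response (the function name is illustrative; field names are taken from the response above):

```python
import json

# A shortened raw LLM response in the same shape as the one shown above.
raw_response = '''[
  {"id": "ytc_UgzUNyLDn7ChJJXZs-94AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugxe61io5wxWV04ec1B4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "liability", "emotion": "fear"}
]'''

def coding_for(response_text: str, comment_id: str):
    """Return the coding dict for one comment ID, or None if it is absent."""
    codings = json.loads(response_text)
    return next((c for c in codings if c["id"] == comment_id), None)

coding = coding_for(raw_response, "ytc_UgzUNyLDn7ChJJXZs-94AaABAg")
print(coding["policy"])   # → regulate
print(coding["emotion"])  # → fear
```

This mirrors the displayed "Coding Result" table, which is simply the entry for the inspected comment rendered dimension by dimension.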