Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- Remember the news story recently where a company found out its AI was just Makin… (ytc_UgzyGBEvA…)
- i think my man bbno$ phrased it best. when responding to ai defenders, he said, … (ytc_UgwQzBDwa…)
- Because once the robots take over they are going to let us all starve to death a… (ytc_UgxtpVW1m…)
- I recommend the Star Trek: The Next Generation episode, "The Measure of Man" (Se… (ytc_UggZCcJUM…)
- I've been using Hosa AI companion to chat and practice social skills. It feels m… (rdc_ndmolkj)
- And our labour unions had to fight tooth and nail to bring those work hours/days… (ytr_UgzWwJrll…)
- From America. I work in education in Japan (no, not an English teacher). The kid… (ytc_UgyMkzbKd…)
- I suddenly have the urge to watch iRobot now. Edit: I also feel like forming a … (ytc_UggfrWv14…)
Comment
The real danger of AI is more like The Matrix than Terminator.
Consider "virtual reality" that is so advanced and so realistic that you can't tell the difference between the program and reality. Then multiply that by 10. Then add in Neuralink.
We will only be able to "regulate" AI for so long before a rogue element gets the upper hand, and runs away with the whole thing.
In a generation or two, nobody will even know it happened.
youtube
AI Governance
2023-04-20T09:3…
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwsXvK-ztAmpiFfZFJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx9lJghmJP4Enqfgqh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxZVXeR8DDrFK532fp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzwhkpmOWfL4F-r7ad4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzwcHpptZmd4BIUeZd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxtjT-X2c0X6pCIR3B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxJ0o-NsErRNVYgckJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwNRyKLx9F8TihtHId4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzUITQPzu43tJE84d14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzKbTqoZKrAdRfvEjp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]
```
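The lookup-by-ID inspection described above can be sketched as follows. This is a minimal, hypothetical example, not the tool's actual implementation: it assumes the raw LLM response is the JSON array shown (a list of records with `id`, `responsibility`, `reasoning`, `policy`, and `emotion` fields) and builds a dictionary index so any coded comment can be fetched by its comment ID.

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment
# (abbreviated here to two records from the response above).
raw = '''[
  {"id":"ytc_UgzwhkpmOWfL4F-r7ad4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwsXvK-ztAmpiFfZFJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]'''

records = json.loads(raw)

# Index the records by comment ID for O(1) lookup.
by_id = {rec["id"]: rec for rec in records}

# Look up the coding result for a specific comment.
hit = by_id.get("ytc_UgzwhkpmOWfL4F-r7ad4AaABAg")
print(hit["policy"], hit["emotion"])  # regulate fear
```

Using `dict.get` rather than direct indexing means a comment ID absent from the response yields `None` instead of raising, which is convenient when spot-checking random samples.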