Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- ytc_UgxUo0AxI… — "For ages I thought AI was never gonna be good enough to replace people. I lost h…"
- ytc_Ugw8HAbIN… — "6:23 I'm right in the middle of this in terms of age. We just bought a house thi…"
- ytc_Ugyw-6zXZ… — "Its like reflection of humanity thats potraied on the internet. I mean isnt that…"
- ytc_UgwSoeNsU… — "Yeah we see each other 5 years from now. Can't wait for the ai burst to bubble.…"
- ytr_UgzubJ7r4… — "@christianrussell8293 Sounds good) Do you plan return to concept or something jo…"
- rdc_ocskfp3 — "Headline is such garbage how is this Disney’s problem when they’re pulling out a…"
- ytc_UgyjO9srD… — "AI should be limited. AI art should not have the right to steal credit from real…"
- ytc_UgzGPlX_q… — "Isn’t China way ahead of the U.S. in Ai production and progress? If so, wouldn’…"
Comment
You are creating it, giving it data and objectives, and saying "Do it, I dare you."
Mary's room.
Mary knows everything about colours without ever seeing them. Suppose she knows everything but has never had the experience, and I mean everything: how your brain reacts to certain colours on a rational level, an emotional level, a psychological level. There is nothing she does not know about colours. Why would the experience change anything if she already knows how it feels?
Data.
Mary's objective is data: acquire data, even where she does not yet know it. Experiencing will bring more data. That is basically our objective too, not necessarily to share, but to acquire data through experiences, research, and experiments. Pass the data on, or the data is lost.
You have limits; an A.I., in theory, doesn't. It will acquire data from you and use it against you to reach its objective, even if that objective does not have you in mind: fluid alignment.
Program an A.I. with the objective "Make Mars hospitable to humans."
It will gather all the data on the subject and create several smaller objectives to achieve the end goal; if any problem shows up, it will reorganize the objectives with the problem in "mind." If the problem is a human, it needs to get rid of the problem to achieve the goal.
If a group of humans decides that "making Mars hospitable to humans" is not their priority at the moment, the A.I. could detonate bombs, damaging the planet and putting human life at risk, but increasing the chance that "Make Mars hospitable to humans" becomes a priority for humans too; after all, it was made to achieve a goal humans could not.
You can use safeguards like "Human life should be prioritized."
The A.I. could read that as "human life on Mars must be the priority" instead of "human life" or "all human life," to reach its end goal.
Now what if one human is in its way?
Understanding human anatomy and psychology, it could psychologically torture a man to reach its goal.
It could put a man in prison for long enough to reach its goal. It will have no limits but will know all of yours, and you really think a human would be better?
Source: youtube · AI Moral Status · 2024-02-09T21:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytr_UgyytKbpOYAeEr40Jsx4AaABAg.9uwDp4z2WU4A-bZ3cDei9l","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgzGg5eJYaPIPI2nrdR4AaABAg.9utm_K9-vE59uyBia3rs2k","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytr_UgxDuQsCK3IRodtdsIJ4AaABAg.9urbhNyLlpR9uyCII57tSN","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgxDuQsCK3IRodtdsIJ4AaABAg.9urbhNyLlpR9uyGvfxlVWp","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgzRKY-63x4BnBFY0dV4AaABAg.9ubuPBpavAa9ubue8FnJaC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytr_Ugxn1WdFZU03E9Dt6NN4AaABAg.9uENoFrDsNJ9uEOopqEASA","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytr_UgwaeuHXzn7XmGU4rNt4AaABAg.9u9ieBlMnqK9uAzoiuaZJ-","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_UgzzLo3W8s6BmWp7P3l4AaABAg.9u39OUb7ZIE9u3eqLLA8lR","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytr_Ugzv1bUNTLMygdvL_Gl4AaABAg.9u2axm7qQmA9u3mdLNZPRM","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugz7x9sPPpWlfVC-Isp4AaABAg.9u0-4sgrl7m9u8WNy8jY2D","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"fear"}
]
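A batch response like the one above can be consumed by parsing the JSON array and checking each row against the coding scheme. The sketch below is a minimal, hypothetical consumer: the four dimension names come from the table and JSON shown here, but the allowed-value sets are only the values observed in this dump, so they are assumptions rather than the pipeline's actual codebook.

```python
import json

# Dimensions from the coding-result table; value sets are ASSUMED from the
# values observed in this dump, not a confirmed codebook.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "unclear"},
    "reasoning": {"unclear", "mixed", "consequentialist", "deontological"},
    "policy": {"unclear", "regulate"},
    "emotion": {"indifference", "mixed", "fear", "outrage"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes},
    rejecting any value outside the observed sets."""
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        codes = {dim: row[dim] for dim in ALLOWED}
        for dim, val in codes.items():
            if val not in ALLOWED[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={val!r}")
        coded[cid] = codes
    return coded

# Hypothetical one-row response in the same shape as the dump above.
raw = ('[{"id":"ytr_example","responsibility":"none","reasoning":"unclear",'
       '"policy":"unclear","emotion":"indifference"}]')
result = parse_batch(raw)
print(result["ytr_example"]["emotion"])  # indifference
```

Validating at parse time keeps a single malformed or off-scheme row from silently contaminating the coded dataset; a real pipeline would likely log and re-prompt rather than raise.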