Raw LLM Responses
Inspect the exact model output for any coded comment.
Responses can be looked up by comment ID.
Random samples:

- `ytr_Ugx0v9_ou…`: @KsazDFW I actually looked up the findings, the report itself after the initial …
- `ytc_UgxVnUkSi…`: We don’t even know who made us ,if who at all. I feel that this is probably the …
- `ytc_UgwdITWjK…`: I honestly would trust an AI's judgement over a humans and the 2nd and 3rd scena…
- `ytc_UgzrT_goI…`: 55:00. she's right east India company cestui que vi trust act maritime admiral…
- `ytc_UgzuVY3RV…`: What Meta Ai said: "That's a heavy and thought-provoking question, and one that'…
- `ytr_UgymcRj0D…`: LLMs do actually so they don't know more often than people give it credit for. I…
- `ytc_UgyqBWaCr…`: Pausing at 15 min to knee jerk react so probably off. Philosophical rigor is act…
- `ytc_Ugw0sShR5…`: Yea....lets automate fun tasks that people actually enjoy working on, so that we…
Comment
14:00 Very comforting to hear that, not at all disconcerting.
Edit: It just feels a bit like we have the IT equivalent of a singularity at our hands. We do not know exactly what it *is* but we still try to use it. It's reckless.
Edit: This comment isn't a "OMG AI 2027 is true we're doomed" comment but a "Oh no, we allocate billions of dollars to building a tech that we don't really control or understand." This has also been a fear with genetics and nuclear science back in the day, and while they were often overblown they weren't unwarranted.
This feels worse for me bc the US especially just doesn't care about regulation in this regard. Neither the market dynamics nor the recklessness employed by the researchers illicit hope, and while I don't believe in HAL9000 ruling us there's a plethora of other ways this could fly in our face, even just economically.
Source: YouTube, *AI Moral Status*, 2025-10-30T19:1… · ♥ 29
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzXQDoG0C0LquHgCF14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwfjDggoc7slJUJxvN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyQuuXlK4Ljy7WoQCB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzWno767nWZhBfYPcd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy7t7R2SUYJnnO6qUd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyFOX5i3109Sdv6ljZ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxVLbha4pRgsIC1gzp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyvOThYlzE_Z8WEtC54AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzs4vcCZ_FVBwIsJ194AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxmSgNf0R1NLcYg0HN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"resignation"}
]
```
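The lookup-by-comment-ID step above amounts to parsing the model's JSON array and indexing it by the `id` field. A minimal Python sketch (the function and variable names here are illustrative, not the tool's actual API; the sample data is abridged from the response above):

```python
import json

# Abridged raw LLM response: a JSON array of per-comment codings.
raw_response = """[
  {"id": "ytc_UgzWno767nWZhBfYPcd4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxVLbha4pRgsIC1gzp4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]"""

def index_by_comment_id(response_text):
    """Parse the raw model output and key each coding row by its comment ID."""
    codings = json.loads(response_text)
    return {row["id"]: row for row in codings}

index = index_by_comment_id(raw_response)
coding = index["ytc_UgzWno767nWZhBfYPcd4AaABAg"]
print(coding["policy"])  # → regulate
```

In practice the model output may not be strictly valid JSON, so a production version would wrap `json.loads` in error handling; this sketch assumes well-formed output.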