Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "If AI takes over all jobs. How will people be able to buy any goods? Has billion…" (ytc_UgwFBnzm2…)
- "I’ve noticed a trend among tech bros, they appear to have this idea that some pe…" (ytc_UgzfDOspF…)
- "Are we racing toward an AI we can’t turn off, an operating system that is AI, ru…" (ytc_UgwWVOKaA…)
- "A cop told me that most surveillance cams in stores, gas stations and stores are…" (rdc_ke796cx)
- "Grades 6-8? Damn, that's early. I'm guessing you don't actually study the texts …" (rdc_e38pq5l)
- "I was using Microsoft Word and one day my 5-year-old laptop just had Copilot on …" (ytc_Ugza2IpAu…)
- "Press conference of @premiertusk "If situation won't get worse Ukrainians will m…" (rdc_cfky8pa)
- "@ApexJanitor there's a difference between not fully understanding something and …" (ytr_UgzxlPhJI…)
Comment
41:41 These AI chat bots are going to be staying. I wonder if there could be a way to break immersion with every single message. Some kind of small text under every message they send to remind its consumers that they are in fact using a product, not engaging with a conscious being. The AI companies are already trying to put in place their own regulations so that we their consumers, 1. Can't make our own AIs and 2. Can't tell them what they can and can't do with theirs.
The bad absolutely outweighs the good, but until it's the other way around WE need to be thinking of the guard rails. Especially because even when they put their own up they just take them right back down when their user base gets pissy, even if it costs lives.
youtube · AI Moral Status · 2025-12-21T22:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxThS4ajTzdmbPhgd54AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzwLfNfzsKT_cI5DrN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwaItbAkUtzbepUd554AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugwq5g8rcvOi4hrbOXJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzCNCt-ksMFts7oPRR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_Ugwjc46jO8ndMCXFn9d4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwHz2BD-bcTNtTpKr14AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugyv21qBbsdzbbhpW6d4AaABAg","responsibility":"company","reasoning":"mixed","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwNQQSE9i7l7JrZeKp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzAw-O5aJKft83lsAV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"}
]
```
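Before a raw response like the one above is turned into a coding-result row, it helps to validate each entry against the expected category values — a minimal sketch in Python, assuming the category sets inferred from this sample are the full codebook (they may not be; `validate_codes` and `ALLOWED` are illustrative names, not part of the tool):

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# This is an assumption, not a confirmed codebook.
ALLOWED = {
    "responsibility": {"company", "developer", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "mixed"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose values are recognized."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Collect dimensions whose value is missing or outside the allowed set.
        bad = [dim for dim, allowed in ALLOWED.items() if row.get(dim) not in allowed]
        if bad:
            print(f"{row.get('id', '?')}: invalid value for {bad}")
        else:
            valid.append(row)
    return valid

raw = '[{"id":"ytc_example","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]'
print(len(validate_codes(raw)))  # → 1
```

Rows that fail validation can then be re-queued for the model rather than silently coerced to "unclear".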