Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_UgyD62NW6… : "i think you and all these 'tech people' are not being very honest here. MOST peo…"
- ytc_UgyFTsHdt… : "AI Hallucinations are statistically reducing. closed resource AI chat agents wou…"
- ytc_Ugw3W3sc7… : "It’s crazy had this idea in the 1st year of high school. Teacher talks 40 mins,…"
- ytc_UgwypN_-R… : "how do you walk up to this thing and not feel its tits? you know there is like a…"
- ytc_UgxxyzhSo… : "it's only searched based ai now after 3 or 4 years we will hv full consciousness…"
- ytc_UgwMCab6s… : "Yeah, there's no discernible way of teaching AI how to understand reality b/c we…"
- ytc_UgyvyELDF… : "- \" Press 1 for English, 2 for Spanish... \" (A.I.) Well, that's my experience s…"
- ytc_UgxohI_36… : "I'm finding ChatGPT really useful for translations. It's better than Google Tran…"
Comment
Well, this took an interesting turn halfway… during the first half of it I thought Roman was doing a good job explaining why AI is dangerous if handled carelessly and perhaps succeeded in convincing some skeptics (people that still believe AI is not a threat) that the danger is real. And then he said that we are likely in a simulation, and I thought oh boy, for all the people that almost got convinced it just discredited everything he said before as they deemed him a lunatic. I personally agree with most things said, including the probability of a simulation, I just know that while AI replacing us is already a hard enough concept to grasp for many, if not most people, the idea of this world being not as “real” is incomprehensible and laughable to them. Humans as a whole struggle to accept that we are not the center of all the meaning. Which means we took one step forward and two backwards in a mission to spread the AI awareness. Very thought provoking, interesting conversation nonetheless.
youtube · AI Governance · 2025-09-19T18:2… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugwmi4XKCFQ7zUuHKt54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwUt8F0sc8wkBog5_Z4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzY7K9aBsAXMhrb0Vt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyMwJ1Mw_s_TgQLriF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxmSpN1PxdJosw9qIp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwmLg0_F3HIyoXfXE14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzryg3nA2UklPysaTZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugz9WPSudM0e3tXpRBR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzTjR6vfs5l9w8TiJd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwZauHZbk9Cu05p7JB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
```
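The raw response is a JSON array with one object per comment, keyed by comment ID. A minimal sketch of the "look up by comment ID" step, assuming the model always returns a well-formed array like the one above (the parsing code and variable names here are illustrative, not the dashboard's actual implementation):

```python
import json

# Illustrative raw LLM response: a JSON array of per-comment codes,
# trimmed to one entry taken from the batch shown above.
raw_response = """
[
  {"id": "ytc_UgwZauHZbk9Cu05p7JB4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"}
]
"""

# Index the array by comment ID so any coded comment can be looked up directly.
codes = {row["id"]: row for row in json.loads(raw_response)}

row = codes["ytc_UgwZauHZbk9Cu05p7JB4AaABAg"]
print(row["policy"])   # regulate
print(row["emotion"])  # mixed
```

In practice a model may occasionally emit malformed JSON, so a real pipeline would wrap `json.loads` in a `try`/`except json.JSONDecodeError` and flag that batch for re-coding rather than crash.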