Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
@chrisn7188Materialistic people are like npc's so we'll see who's the silly one…
ytr_UgyLgafr-…
4:40 News FLASH!! WOULD ANY SANE PARANT HAVE THEIR CHILD GO WILD ON THE INTERNET…
ytc_Ugz0oD7ke…
@kenkrouner5640exactly, too many people judging everyone's AI predictions based…
ytr_UgyjSgA3k…
I'm afraid they (AI) will win. Because if a person has a photographic memory and…
ytc_Ugz8jkgmL…
If an alien civilisation visited us, who would we send as a face of humanity? Re…
ytc_UgwNZq-yR…
I don’t think either AI or tech in general is exponential…. Seems more likely it…
ytc_Ugw-YQ6f-…
😅😂 ❤❤ Tara !!
i understand now the raeson why you had shut down all yr robat A…
ytc_UgyZWtoKM…
How aligned is it to our values? We don’t want an AI that is aligned to our actu…
ytc_UgzYXocmn…
Comment
He got shot because some people thought he was a a police informant because the police visited his house so much.
When it comes to anything more complicated than simple axioms, there is no "purely objective" information, the context in which it is presented is what gives it meaning. Machines simply cannot reliably spit out fully formed conclusions, because they do not think and do not understand language. You can make people believe anything with statistics if you use unclear metrics (the ai is a black box, we do not know how it works) and flawed data.
| Field | Value |
|---|---|
| Platform | youtube |
| Video | AI Bias |
| Published | 2022-12-18T18:0… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
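Looking up a coded result by comment ID (as in the "Look up by comment ID" view above) amounts to indexing the coded rows by their `id` field. A minimal sketch, using the same row that produced the table above (the `coded` list and `by_id` dict names are illustrative, not part of the tool):

```python
# Index coded rows by comment ID for a "look up by comment ID" view.
# This sample row mirrors the Coding Result table above.
coded = [
    {"id": "ytc_UgyUGCGvrQCk-FipBJF4AaABAg", "responsibility": "distributed",
     "reasoning": "deontological", "policy": "none", "emotion": "resignation"},
]
by_id = {row["id"]: row for row in coded}

row = by_id["ytc_UgyUGCGvrQCk-FipBJF4AaABAg"]
print(row["responsibility"])  # distributed
```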
Raw LLM Response
```json
[
{"id":"ytc_UgyfZvoIV14abrPBp9N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwj0lqoYueXAXGgChN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzWbS5WYyNY6h1NA4h4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwDGhC2y3EwF5waNZp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzV0eREhCsE5qdNBIt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyUGCGvrQCk-FipBJF4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw11UmBHSeoDu2y7Qd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxIkeoYXyG0tRH_Yr54AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy_1g9NyJee_E2C-UV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxKlZZkPmt9nBwT--d4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
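A raw batch like the one above can be parsed and sanity-checked before being ingested. A minimal sketch, assuming the four dimension names shown in the response; the allowed value sets below are only the values observed in this batch, not the tool's full codebooks:

```python
import json

# Values observed in this batch; the real codebooks may define more
# categories -- treat these sets as illustrative, not authoritative.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "ban", "liability"},
    "emotion": {"approval", "outrage", "indifference", "mixed", "resignation", "fear"},
}

def validate_batch(raw: str) -> list:
    """Parse a raw LLM response and flag rows with unexpected values."""
    problems = []
    for row in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                problems.append({"id": row.get("id"),
                                 "dimension": dim,
                                 "value": row.get(dim)})
    return problems

sample = '[{"id":"ytc_x","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"}]'
print(validate_batch(sample))  # [] -- every value is in an allowed set
```

Rows that fail validation can then be queued for re-coding rather than silently dropped.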