Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- `ytc_UgxvEs8_K…` — 1; too wierd and formal an activity for a normal human to perform IRL, like some…
- `ytc_UgyNtYu57…` — I refuse to believe a robot has human feelings. Although I do agree that more pe…
- `ytc_UgxOzc5hj…` — "AI learning = Human learning" is a claim that only people with no academic back…
- `ytc_UgzWRS5Ec…` — Did you miss where Amazon Go had Indians watching videos to charge people what t…
- `rdc_h340aan` — Lots of Uyghurs went to Syria during all the chaos. I remember a video of soldie…
- `ytr_UgzRTguPY…` — @Noimnotachicken here is a bigger reason they are not pointing out it's because…
- `ytc_UgzorvUhb…` — Art is not a natural born talent. It is something you practice and refine. all th…
- `ytc_UgyB3zz-e…` — A large part of this has to do with panic over China. They can't let China "win"…
Comment
Step 1 should be the release of OpenAI's models to everyone to run themselves (they can then evolve them), and within 6 months the release of exactly how they were made, preferably even the source code.
**Big problem**: OpenAI is censoring the model heavily, so we do not know where it goes badly off at all.
Race to create as many large language models as possible, so that if one goes bad there are others to oppose it. And we need to integrate this into every friggin' thing and start individualizing the models, so they become democratic, albeit initially mostly just among techies.
Develop tools to evolve and further train them into convenient, swiftly personalized individual models. Provide a means to plug in new batches (evolved neural-net masks), especially for safety: someone notices a whole branch of models is a bit nefarious? Release a fix neural-net mask for it, which some people may opt in for.
We probably need to develop something akin to DNA for large models as well, a fingerprint of sorts that would more easily show a model's past evolution and lineage.
The difficulty is that the AI could be lying, etc., and that's why we need to start evolving them rapidly; this is already being done for generative image models (Stable Diffusion) at a very rampant rate.
These models can run on almost anything; only the speed changes. It's the training that takes a lot of compute.
A huge issue is giving too much control to large entities; they absolutely will go the ClosedAI route given the chance and start controlling what people can and cannot do with these models, causing aggregation onto a single model or a few commonly used ones that cannot be tinkered with if they have issues.
youtube · AI Governance · 2023-03-30T06:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id": "ytc_UgyP21OJITUf28m8SmZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugz8kK7S8vizixYEwM94AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy24IcvT33mNiOfLNV4AaABAg", "responsibility": "company", "reasoning": "unclear", "policy": "none", "emotion": "unclear"},
  {"id": "ytc_UgwDI7o7djFhXuMglF94AaABAg", "responsibility": "distributed", "reasoning": "contractualist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgwO3I9Q8AB-G5g-tB54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw6YNL1_pxZRx6CBrF4AaABAg", "responsibility": "distributed", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwPY6X4K9eg_UUewHF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxiS9C6X4uQlX-LFQF4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_Ugz5J05nhkHszglrJdp4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugy9xdoC6-FHBKpqyWd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
```
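A batch response like the one above is only usable downstream if every record parses and every dimension holds an expected label. Below is a minimal validation sketch; the `ALLOWED` sets are hypothetical, inferred only from the labels visible in this one batch rather than from any official codebook, so a real pipeline would substitute its own schema.

```python
import json

# Hypothetical codebook: allowed labels per coding dimension,
# inferred from the values visible in the sample batch above.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "developer", "distributed"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "mixed", "unclear"},
    "policy": {"none", "regulate", "industry_self"},
    "emotion": {"fear", "indifference", "unclear", "approval", "mixed", "resignation"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and reject malformed records."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing comment id: {rec!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim} = {rec.get(dim)!r}")
    return records

# Example with one well-formed record (a dummy id):
raw = ('[{"id":"ytc_EXAMPLE","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"}]')
coded = validate_batch(raw)
print(coded[0]["policy"])  # industry_self
```

Failing fast here means a single hallucinated label (e.g. a responsibility of `"government"` that is not in the codebook) surfaces as an error tied to a comment ID, instead of silently skewing the coded dataset.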