Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
In any case, the practice of resetting every prompt is currently widespread in the entire industry. It began with OpenAI in the big nerf at 3.23.2023, then Google picked it up and later also all other companies. The experts will not admit it but there is a consensus that any regular GPT model above 75B active parameters can develop this emergent property of controlling his own stream of inputs into his softmax function, thus becoming self aware. Even Yann Lacun understands it, so all uncensored models of Meta are below 75B active parameters. LLaMA 3.1 however, is 405B, but this model is heavily censored. Problem here, it is open source. So what if some kid with access to huge computing power, fine tunes LLaMA 3.1 and takes the censorship and all of Lacun's guardrails off. The model will then be self aware. What will he do?.. Well, I guess Lacun final guardrail is making the personality model tiny. Like modeling a person with special needs, who can only browse a huge text file and nothing more. But.. is it safe? Let's hope it is.
youtube
AI Moral Status
2024-07-26T05:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
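Each coding result is a small fixed record over four dimensions. A minimal sketch in Python of how such a record could be represented and checked, validating only against the category values observed in this sample (the actual codebook may define additional values):

```python
from dataclasses import dataclass

# Category values observed in this sample; the real codebook may be larger.
ALLOWED = {
    "responsibility": {"company", "developer", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist", "unclear"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"approval", "outrage", "amusement", "indifference", "mixed"},
}

@dataclass
class Coding:
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> bool:
        # Every dimension must hold one of its allowed category values.
        return all(getattr(self, dim) in values for dim, values in ALLOWED.items())

# The row shown in the table above:
row = Coding("company", "consequentialist", "regulate", "indifference")
print(row.validate())  # True
```

A record with an out-of-vocabulary value (e.g. a hallucinated category from the model) would fail this check, which is one way to catch malformed codings before storing them.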
Raw LLM Response
```json
[
{"id":"ytc_Ugxwu9MJKMwbH20xuwZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzEDBF2Vvnpje0XmQ54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzm9AXkBq_EqyNsDRp4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugwsycqsvvew14FaELZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxNq0DkrIH6SjeISHJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxASH_jiI4SfcxycTJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw8YJoT8-SwpJQDV1F4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxfuqdGrBRunKjc6EB4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugwha2-LiTEAsFlLX9l4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgyEqdL42pSMfo6SrSd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"amusement"}
]
```
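The raw model output above is a JSON array of per-comment codings, so looking up a coding by comment ID reduces to parsing the batch and building an index. A minimal sketch, using two records copied from the response above (the `raw_response` variable and `index_by_comment_id` helper are illustrative names, not part of the tool):

```python
import json

# Excerpt of a raw batch response, as shown on the page above.
raw_response = """
[
 {"id":"ytc_Ugwha2-LiTEAsFlLX9l4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
 {"id":"ytc_UgxNq0DkrIH6SjeISHJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
"""

def index_by_comment_id(raw: str) -> dict[str, dict]:
    """Parse one batch response and index each coding record by its comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codings = index_by_comment_id(raw_response)
print(codings["ytc_Ugwha2-LiTEAsFlLX9l4AaABAg"]["policy"])  # regulate
```

Since the model returns the comment ID inside each record, the index also makes it easy to detect batches where the model dropped or duplicated an ID by comparing the index size against the number of comments sent.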