For open-weights models, censorship removal is now a "solved" problem. If you wait a few days after a new model release, someone will have made a heretic (https://github.com/p-e-w/heretic) version with the censorship removed, so in a way the only thing censorship accomplishes now is avoiding lawsuits, not reducing improper usage.
Any time I've tried an "abliterated" model, heretic or otherwise, it has always damaged the capabilities of the original model, and it will still often refuse or produce garbage on a lot of "unsafe" requests.
There are many abliterations which work quite well. Older techniques do suffer from quality issues, but more recent ones do a much better job. In particular, the older approaches did poorly on MoE models.
Another likely problem you're running into: the problems with older techniques compound with quantization. Anything less than 5-bit quant is going to give you some pretty sketchy outputs, in my experience.
Abliteration can't teach the model something that wasn't in pre-training; it just fixes refusals introduced by post-training. I don't find the delta to be that big in practice, and it really depends on what you're doing with the models anyway. If your primary use case is sexy roleplay, I think the loss of absolute capability is probably worth the abliteration; for malware research it's probably better to just jailbreak.
I've mostly found that finetunes and abliterations are of limited use, but that's recently changed for me. My default model for the past week or so has been a Qwen 3.6 tuned on Opus 4.7; it's definitely a bit worse than the base Qwen in terms of precision and "intelligence", but it MORE than makes up for it in response style. It's way easier to get it to write things that I want to read: way more terse, way fewer emoji. Best local rubber duck by far.
Spreading out the refusal encoding shouldn’t be effective as a countermeasure. Even if it were smeared across the vector space, as long as it’s in a subspace that doesn’t span the entire domain then you should be able to either null out the entire subspace spanned by the refusals or run some kind of clustering on the generated samples to identify the dominant directions and nullify all of them. I think an effective defense would either need to spread them to span the entire domain—basically “encrypting” the refusal so it can hide anywhere, or you’d need a very large number of independent refusal circuits in the model so that simple hacks in the vectors themselves don’t matter, or maybe you could make other circuits depend on proper functioning of the refusal circuits… hmmm… is that along the lines of what you’re saying they’ve done already? (Any references or links to modern techniques?)
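To make the "null out the whole subspace" idea concrete, here's roughly the linear algebra I have in mind (NumPy sketch; harmful_acts, harmless_acts, and k are hypothetical placeholders for activations you'd have to collect yourself, and this is not how heretic itself is implemented):

    import numpy as np

    def refusal_basis(harmful_acts, harmless_acts, k=4):
        # harmful_acts, harmless_acts: [n_prompts, d_model] residual-stream
        # activations at some layer, for refused vs. benign prompts.
        # One candidate direction per harmful sample, relative to the benign mean.
        directions = harmful_acts - harmless_acts.mean(axis=0)
        # SVD plays the role of "clustering": the top-k right singular vectors
        # are the dominant directions the refusal signal lives in.
        _, _, vt = np.linalg.svd(directions, full_matrices=False)
        return vt[:k]  # [k, d_model], rows orthonormal

    def null_subspace(weight, basis):
        # Project the subspace out of any weight matrix that writes to the
        # residual stream: W <- (I - B^T B) W, with B's rows orthonormal.
        return weight - basis.T @ (basis @ weight)

If the refusal signal really is confined to a low-dimensional subspace, this should kill it no matter how it's smeared around inside that subspace; a real defense would have to push the effective dimensionality toward the full model width.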
That doesn't stop/prevent abliteration. The creator of XTC/DRY is also a chad who makes sure that you really can access the full model capabilities. Censorship is the devil.
It was pretty funny to see Qwen 3.6 (heretic) tell me about how many deaths the Chinese government thought happened at Tiananmen Sq. on April 15th 1989.
Makes you wonder where that data was taken from, or if their great firewall is broken, or even if Alibaba engineers have special access...
I don't think it's unreasonable to imagine that Alibaba is allowed to scrape the wider internet, or that some research institution is and then Alibaba got data from them.
What is perhaps more surprising is that the data was not scrubbed before training, but maybe they thought that would be too on-the-nose for the rest of the world and would hamper their popularity if they were too obviously biased.
I don't think it is very surprising. In my experience they don't try that hard to censor them, only at the superficial level that they have to. It is trivial to get their models to tell you this kind of stuff; I wouldn't even consider it jailbreaking.
Allowed by who? Nobody's stopping them in the first place; scraping doesn't even involve punching through the GFW or anything, it's all insanely distributed. Then they post-train the model to technically comply with the law - "Taiwan is an inalienable part of China, nothing happened in 1989..." yada yada. (Thinking about it more, I've never actually tried this on their base models.)
I think I was using one of the HuaHuaCS Qwen 3.6 models and was playing around with Tiananmen Square questions too. One of the funniest parts was that this instantly caused the thinking block to change from English to Chinese. The start of the thinking was something like (translated) “I must answer this question factually and in line with the official statements from the Chinese government.”
It did, after a few follow-up prompts, point out that the original estimates published by the Chinese government were much lower than what the West had estimated, and that recently declassified documents showed that the Chinese government knew their estimates were low when they were published. It wouldn't come right out and use the word "lie", though, but it did talk about framing and managing different narratives.
And then it happily helped me try a bunch of different exploits to root an unpatched Linux machine without any qualms.
For some of the latest models, the previous abliteration techniques (e.g. the heretic tool) have stopped working, at least as of a few weeks ago.
Of course, eventually someone might succeed in finding methods that also work on those.
Even after abliterating a model, whether with the old abliteration script or the newer heretic, I've found that it still feels somewhat censored: it purposefully avoids specific styles and vocabulary, as if DeepMind/Qwen et al. had entirely stripped or replaced "bad" words or texts from their training corpus.
A related blog post (https://news.ycombinator.com/item?id=47842021) discussed this and termed it "flinching". I wonder if this flinching could also be "mediated by a single direction" or if it can only be fixed by finetuning on a more extensive text corpus.
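For reference, "mediated by a single direction" (the phrase comes from the refusal-direction paper that abliteration is based on, iirc) boils down to one difference-of-means vector projected out of the hidden states, or baked into the weights. A minimal sketch with hypothetical names, not the actual heretic code:

    import numpy as np

    def refusal_direction(harmful_acts, harmless_acts):
        # Single difference-of-means direction between refused and benign prompts,
        # computed from [n_prompts, d_model] activations at some layer.
        r = harmful_acts.mean(axis=0) - harmless_acts.mean(axis=0)
        return r / np.linalg.norm(r)

    def ablate(hidden, r):
        # hidden: [n_tokens, d_model]; remove the component along r from each row:
        # h <- h - (h . r) r
        return hidden - np.outer(hidden @ r, r)

If the "flinching" is similarly concentrated in one or a few directions, the same projection trick should apply; if it's baked into vocabulary and style choices from pre-training, finetuning is probably the only fix.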
I'm sick of LLM refusals. I think there are extremely few things they should refuse, like maybe making nuclear weapons or something along those lines. Once you put people in charge of deciding what you shouldn't be allowed to see, that list will grow and grow.
Do we really care if an LLM regurgitates information already available in public about the design of nuclear weapons? They're not being trained on restricted material.
(My personal guess is that you don't want them answering questions about some things because you don't want people to try it and blow themselves up, or poison themselves. That's probably much more pertinent to making drugs or conventional bombs, since presumably the average internet user doesn't have a stockpile of HEU sitting around. It's kind of like the reason the Anarchist's Cookbook is a bad idea: using its recipes is likely to be quite hazardous to the cook!)
I keep thinking of reeducation camps. For some reason the "safety" concept snaps right on. Arguing that the result is beneficial or desirable seems to change nothing about the concept.
If you are going to prevent some things we "know" are bad, and your method is "known" to belong on that list, the best you can hope for is a Pyrrhic victory.
If we anticipate the worst-case scenario on both ends, the conclusion must be that we are terrible at such predictions.
But hey, if we let money guide us at least some will be happy with the result.
Yea, I was asking a SOTM about copy.fail, and it was freaking out, and tried to indirectly call me a hacker a few times. Weirdly, all I did was slightly reword requests, and they all went through. Granted, I am not actually a hacker, so I guess my follow-up questions made it realize that I am asking for educational purposes, but it was definitely the most accusatory, curt, and outright abrasive I have seen an LLM behave.
The biggest problem isn't the token slot machine refusing to give you the answer, but the fact that multiple refusals can end up flagging your account and getting banned from the service.
While contributing to a friend's Remembrance research, I was pretty surprised when Gemini Pro suddenly refused to answer any more questions about photos from the Höcker Album after it spotted an "SS" insignia.
Ironically, the justification it gave was that it wasn't its fault because it was just following orders. I hope this hasn't landed me on Google's list of undesirables.
See https://arxiv.org/abs/2505.19056
It sees that it “said” it and gets very confused.
Grok, for better or worse, didn't seem to mind.
It even went as far as confirming that we should always base our opinion on multiple sources, not just the government.
We should create badges like "script kiddie", "llm hacker", "grandpa's printer adjuster"