Tag: GLM

GLM-4.7-Flash Abliteration Benchmarked: Heretic vs HauhauCS vs Huihui vs Abliterix

GLM-4.7-Flash is a 59-billion-parameter reasoning model from Zhipu AI built on a mixture-of-experts design with 64 experts per layer. I ran four different abliteration techniques against it and discovered something unexpected in the maths benchmarks. The raw scores look terrible for some variants, but those models can still do the maths; they just overthink and exhaust their token budget before writing the answer. And the weight forensics on one of those variants led to a bigger story about plagiarism.

May 1, 2026