Zhipu Open-Sources GLM-5.1, First Open Model to Top SWE-Bench Pro


Z.ai, the company formerly known as Zhipu AI, released GLM-5.1 as an open-source model on April 7, 2026. The weights are published on Hugging Face under the MIT license.

Model Details

According to Z.ai's model documentation, GLM-5.1 is a Mixture-of-Experts model with 744 billion total parameters and about 40 billion active per forward pass. The context window is 200K tokens. Readers should rely on the official Hugging Face model page for exact architecture and spec details, since secondary sources have reported slightly different figures.
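The reported figures imply a sparsely activated model: only a small fraction of the total weights participates in any single forward pass. A quick back-of-envelope sketch using the article's numbers (which should be treated as approximate pending the official model card):

```python
# Sparsity of the reported GLM-5.1 MoE configuration.
# Figures (744B total, ~40B active) come from the article above,
# not from the official spec sheet.
TOTAL_PARAMS = 744e9
ACTIVE_PARAMS = 40e9

active_fraction = ACTIVE_PARAMS / TOTAL_PARAMS
print(f"~{active_fraction:.1%} of weights active per token")  # ~5.4%
```

In other words, roughly one in twenty parameters is used per token, which is how MoE models keep inference cost closer to that of a ~40B dense model despite the much larger total size.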

Benchmark Claims

Z.ai reports that GLM-5.1 scores 58.4 on SWE-Bench Pro, a benchmark focused on real-world software repair tasks. The company says this leads the public leaderboard ahead of reported scores for GPT-5.4 and Claude Opus 4.6. These comparisons come from Z.ai and leaderboard-style tracking, not from independent adjudicated testing, and should be cited as such.

Long-Running Tasks

Z.ai describes GLM-5.1 as designed for long autonomous coding sessions, including planning, writing, testing, fixing, and optimizing code over extended runs. The exact session length and reliability should be checked against Z.ai's release notes.

Licensing

Because the weights are published under the MIT license, companies and individual developers can use the model commercially, modify it, and redistribute it.

Why It Matters

Open-source models have been closing the gap with closed models over the past year. If Z.ai's SWE-Bench Pro claim holds up in independent testing, GLM-5.1 would be an important open option for teams that need strong coding performance but cannot rely on closed third-party APIs.
