mimeng1990
AI & ML interests: None yet
Organizations: None yet
mimeng1990's activity
New activity in mlx-community/GLM-4.6V-6bit · 3 days ago
How do I run glm4.6v on LM Studio 0.3.34? I'm getting the error "TypeError: TextConfig.__init__() missing 1 required positional argument: 'rope_theta'".
➕ 1 · 4 · #1 opened 3 days ago by mimeng1990
New activity in zai-org/GLM-4.6V · 3 days ago
Error when loading model: TypeError: TextConfig.__init__() missing 1 required positional argument: 'rope_theta'
5 · #14 opened 5 days ago by mimeng1990
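Both rope_theta threads above stem from the same generic Python failure mode: a model's text-config class declares rope_theta as a required constructor argument, but the checkpoint's config.json (or the loader mapping it) does not supply that key. Below is a minimal sketch of how the error arises and one possible workaround; `TextConfig` merely mirrors the name in the error message, and the loading logic is an assumption, not the actual LM Studio / mlx-vlm code.

```python
# Minimal sketch of how the reported TypeError arises. `TextConfig`
# stands in for the text-config class named in the error message; the
# loading logic is an assumption, not the real LM Studio / mlx-vlm code.
import json
from dataclasses import dataclass


@dataclass
class TextConfig:
    hidden_size: int
    num_hidden_layers: int
    rope_theta: float  # required: no default value


def load_text_config(path: str) -> TextConfig:
    with open(path) as f:
        raw = json.load(f)
    # If config.json lacks a "rope_theta" key, this raises exactly:
    # TypeError: TextConfig.__init__() missing 1 required positional
    # argument: 'rope_theta'
    return TextConfig(**raw)


# One common workaround: add the missing key to the checkpoint's
# config.json. 10000.0 is a typical RoPE base, but the correct value is
# model-specific -- check the upstream model card before patching.
def patch_config(path: str, rope_theta: float = 10000.0) -> None:
    with open(path) as f:
        raw = json.load(f)
    raw.setdefault("rope_theta", rope_theta)
    with open(path, "w") as f:
        json.dump(raw, f, indent=2)
```

Upgrading the runtime that ships the model support is usually the cleaner fix, since a patched config only helps if the chosen default matches the model's actual RoPE base.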
New activity in zai-org/GLM-4.6 · 6 days ago
Will an Air version of GLM-4.6 be released today?
❤️ 5 · 6 · #22 opened about 1 month ago by mimeng1990
New activity in VibeStudio/MiniMax-M2-THRIFT-55-MLX-6bit · 8 days ago
Thanks a lot 👍
#1 opened 8 days ago by mimeng1990
New activity in introvoyz041/LIMI-Air-qx86-hi-mlx-mlx-4Bit · 9 days ago
Are you sure this is 4-bit quantization? Is there a typo?
2 · #1 opened 9 days ago by mimeng1990
New activity in VibeStudio/MiniMax-M2-THRIFT-55-MLX-4bit · 11 days ago
Could you please quantize a 6-bit or 6.5-bit model that requires 93-97GB of memory? Such a model could be deployed on a Mac with 128GB of RAM. Thank you very much!
#1 opened 11 days ago by mimeng1990
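Requests like the one above pin down a size window because a quantized checkpoint occupies roughly parameter count × bits per weight ÷ 8 bytes, plus a few percent of overhead for quantization scales; the bit width is what decides whether a model lands in the 93-97GB sweet spot on a 128GB Mac. A back-of-the-envelope sketch follows; the parameter counts are illustrative assumptions, not the published sizes of these checkpoints.

```python
# Back-of-the-envelope size of a quantized checkpoint:
#   bytes ~= n_params * bits_per_weight / 8, plus a small overhead for
#   group-wise quantization scales. The parameter counts below are
#   illustrative assumptions, not the published sizes of these models.

GIB = 1024**3

def quantized_size_gib(n_params: float, bits: float, overhead: float = 0.05) -> float:
    """Approximate weight footprint in GiB for a quantized model."""
    return n_params * bits / 8 * (1 + overhead) / GIB

for n_params in (106e9, 120e9):      # hypothetical parameter counts
    for bits in (4, 6, 6.5, 8):      # common MLX quantization widths
        size = quantized_size_gib(n_params, bits)
        tag = "<- in the 93-97GB window" if 93 <= size <= 97 else ""
        print(f"{n_params / 1e9:.0f}B @ {bits} bits ~= {size:6.1f} GiB {tag}")
```

On that arithmetic, a roughly 120B-parameter model at 6.5 bits lands near 95 GiB, which is presumably why 6- to 6.5-bit quants keep coming up in requests aimed at 128GB machines.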
New activity in nightmedia/LIMI-Air-qx86-hi-mlx · about 1 month ago
👍👍👍
👀 ❤️ 1 · 1 · #3 opened about 1 month ago by mimeng1990
New activity in mrtoots/TheDrummer-GLM-Steam-106B-A12B-v1-MLX-8Bit · about 1 month ago
Hi, could you please release a Qx86 version? I'd like the size to be between 93GB and 97GB so it can be easily deployed on a 128GB Mac. Thank you so much!
4 · #1 opened about 1 month ago by mimeng1990
New activity in nightmedia/GLM-Steam-106B-A12B-v1-qx65g-hi-mlx · about 1 month ago
Thanks a lot
👍 ❤️ 2 · #1 opened about 1 month ago by mimeng1990
New activity in mlx-community/GLM-4.5-Air-bf16 · about 1 month ago
Hi, could you please release a Qx86 version? I'd like the size to be between 93GB and 97GB so it can be easily deployed on a 128GB Mac. Thank you so much!
#1 opened about 1 month ago by mimeng1990
New activity in Qwen/Qwen3-Next-80B-A3B-Thinking · about 1 month ago
I hope Qwen3 will open-source a large 120B model, like GPT-120B. GPT's Chinese isn't very good; we still need Qwen.
👍 1 · #16 opened about 1 month ago by mimeng1990
New activity in nightmedia/GLM-4.5-Air-REAP-82B-A12B-qx64g-hi-mlx · about 1 month ago
Could you please upload a 99-100GB MLX-quantized version so that it can be deployed locally on a Mac with 128GB of RAM? Thank you very much!
❤️ 1 · 8 · #2 opened about 2 months ago by mimeng1990
New activity in zerofata/GLM-4.5-Iceblink-106B-A12B · about 1 month ago
Could you please upload a 99-100GB MLX-quantized version so that it can be deployed locally on a Mac with 128GB of RAM? Thank you very much!
#3 opened about 1 month ago by mimeng1990
New activity in nightmedia/LIMI-Air-qx86-hi-mlx · about 2 months ago
The LIMI-Air-qx86-hi-mlx model is 97GB in size and works very well when deployed locally on a Mac with 128GB of RAM.
🔥 1 · #2 opened about 2 months ago by mimeng1990
"model happier and more eager to explore and innovate"
❤️ 🔥 1 · 3 · #1 opened 3 months ago by bobig
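The report above that a 97GB model works very well on a 128GB Mac sits close to the practical ceiling: macOS caps how much unified memory the GPU may wire, so the usable budget is noticeably below the physical 128GB. Here is a rough headroom check; the ~75% default cap and the sysctl knob are assumptions rather than verified figures for any particular macOS version.

```python
# Rough headroom check for a ~97 GiB model on a 128 GiB unified-memory Mac.
# The ~75% default GPU wired-memory cap is an assumption; the exact limit
# varies by machine and macOS version.
total_ram_gib = 128
model_gib = 97
default_cap_gib = total_ram_gib * 0.75   # assumed default wired cap (~96 GiB)

print(f"Default GPU-wirable budget: ~{default_cap_gib:.0f} GiB")
print(f"Headroom after weights:     ~{default_cap_gib - model_gib:.0f} GiB")
# Negative or near-zero headroom means the cap must be raised before the
# weights plus KV cache fit, e.g. (assumed knob, value illustrative):
#   sudo sysctl iogpu.wired_limit_mb=110000
```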
New activity in beezu/zerofata_GLM-4.5-Iceblink-106B-A12B-MLX-MXFP4 · about 2 months ago
Could you please upload a 99-100GB MLX-quantized version so that it can be deployed locally on a Mac with 128GB of RAM? Thank you very much!
2 · #1 opened about 2 months ago by mimeng1990