(Q4_M) Can't load this model into LM Studio. lmstudio-community GGUFs of Gemma 1.1, Llama 3.1, and Phi 3.1 Mini load without issues.

#1 opened by natehzick
```
{
  "cause": "(Exit code: 0). Some model operation failed. Try a different model and/or config.",
  "suggestion": "",
  "data": {
    "memory": {
      "ram_capacity": "31.68 GB",
      "ram_unused": "19.61 GB"
    },
    "gpu": {
      "gpu_names": [
        "NVIDIA GeForce RTX 3050 Ti Laptop GPU"
      ],
      "vram_recommended_capacity": "4.00 GB",
      "vram_unused": "3.23 GB"
    },
    "os": {
      "platform": "win32",
      "version": "10.0.22631"
    },
    "app": {
      "version": "0.2.29",
      "downloadsDir": "C:\\Users\\Rusta\\.cache\\lm-studio\\models"
    },
    "model": {}
  },
  "title": "Error loading model."
}
```
LM Studio Community org

You need to upgrade to 0.2.31, which is available at https://lmstudio.ai/
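
If you want to sanity-check the GGUF outside LM Studio while waiting on the update, here is a minimal sketch using llama-cpp-python against a recent llama.cpp build. The file path, layer count, and prompt below are placeholders, not the exact files from this thread:

```python
# Minimal load test for a GGUF with llama-cpp-python.
# Requires a recent release: pip install -U llama-cpp-python
from llama_cpp import Llama

# Placeholder path: point this at the downloaded GGUF, e.g. a file under
# the LM Studio models cache (C:\Users\<you>\.cache\lm-studio\models\...).
MODEL_PATH = r"C:\path\to\model.gguf"

llm = Llama(
    model_path=MODEL_PATH,
    n_ctx=2048,       # modest context to keep RAM/VRAM use low
    n_gpu_layers=20,  # partial offload for a ~4 GB GPU; use 0 for CPU-only
    verbose=True,     # prints the llama.cpp load log, useful for diagnosing
)

out = llm("Say hello in one short sentence.", max_tokens=32)
print(out["choices"][0]["text"])
```

If the file loads here but still fails in LM Studio, the llama.cpp runtime bundled with the older LM Studio build is likely too old for this model's architecture, which is consistent with the upgrade suggested above.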

Thank you! I didn't even know there was an update available, but it just popped up in my LM Studio this morning. Appreciate it!
