Any Inference code?
Thanks for the great work for the open-source community!
I am wondering if there is any example inference code. When will it be released? Will mistral_inference also be updated with the model code for Pixtral?
Can't wait to play with the model!
Yeah, when I try to load it with the mistral-inference library it can't read the vision_encoder weights. Guess we'll need to wait.
Seems that the vision encoder is like the EVA-CLIP used in CogVLM2: similar 2D RoPE, GELU, and a large visual part (which hints at less training, though).
I tried to load it with the transformers library and I get "OSError: It looks like the config file at 'consolidated.safetensors' is not a valid JSON file.".
I also tried the mistral-inference library and got an error regarding the vision weights.
Hello, I'm facing a similar issue. Any update? Thank you.
```python
import torch


def precompute_freqs_cis_2d(
    dim: int,
    height: int,
    width: int,
    theta: float,
) -> torch.Tensor:
    """
    freqs_cis: 2D complex tensor of shape (height, width, dim // 2) to be indexed by
    (height, width) position tuples
    """
    # (dim / 2) frequency bases
    freqs = 1.0 / (theta ** (torch.arange(0, dim, 2).float() / dim))

    h = torch.arange(height, device=freqs.device)
    w = torch.arange(width, device=freqs.device)

    # interleave the frequency bases between the height and width axes
    freqs_h = torch.outer(h, freqs[::2]).float()
    freqs_w = torch.outer(w, freqs[1::2]).float()
    freqs_2d = torch.cat(
        [
            freqs_h[:, None, :].repeat(1, width, 1),
            freqs_w[None, :, :].repeat(height, 1, 1),
        ],
        dim=-1,
    )
    # unit-magnitude complex rotations, one per (row, col, frequency)
    return torch.polar(torch.ones_like(freqs_2d), freqs_2d)
```
Use that for the 2D RoPE.
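For anyone who wants a quick sanity check, here is the snippet wrapped into a standalone script (the function is repeated so it runs on its own; the dims are hypothetical for illustration, not necessarily Pixtral's actual head size):

```python
import torch


def precompute_freqs_cis_2d(dim: int, height: int, width: int, theta: float) -> torch.Tensor:
    # same function as above, condensed
    freqs = 1.0 / (theta ** (torch.arange(0, dim, 2).float() / dim))
    h = torch.arange(height)
    w = torch.arange(width)
    freqs_h = torch.outer(h, freqs[::2]).float()
    freqs_w = torch.outer(w, freqs[1::2]).float()
    freqs_2d = torch.cat(
        [
            freqs_h[:, None, :].repeat(1, width, 1),
            freqs_w[None, :, :].repeat(height, 1, 1),
        ],
        dim=-1,
    )
    return torch.polar(torch.ones_like(freqs_2d), freqs_2d)


# e.g. a 64-dim head over a 16x16 patch grid
freqs_cis = precompute_freqs_cis_2d(dim=64, height=16, width=16, theta=10000.0)
print(freqs_cis.shape)  # torch.Size([16, 16, 32]) -- (height, width, dim // 2)
```

Each entry is a unit-magnitude complex rotation, so you can index it with (row, col) patch positions and multiply into the complex-viewed query/key features as in standard RoPE.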
The intent is to provide developers with a sense of pride and accomplishment for unlocking different model capabilities -- more seriously, we'll be providing open-source implementations quite soon, but wanted to give the community a chance to untangle our puzzle :)
Is this related?
https://github.com/mistralai/mistral-common/releases/tag/v1.4.0
I set up a Colab, but the config.json is missing, so I can't run inference yet.
You should be able to use the params file as the config. Should all be fixed tomorrow; most of the Mistral team is in the States today.
Do you mean the tekken.json? Or are you talking about the item up above? Was that what was fixed? I see the main branch is 2 commits behind.
Also, it looked like there was invalid model configuration data inside of the config.json, which I'm assuming is the tekken.json.
tekken.json is the tokenizer; the config.json is what's in the params file.
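If it helps in the meantime, a minimal sketch of that workaround (`params_to_config` is a hypothetical helper name; it assumes the params file's contents can stand in for config.json unchanged, per the comment above -- untested):

```python
import shutil
from pathlib import Path


def params_to_config(model_dir: str) -> Path:
    # Copy the shipped params file to the config.json name that
    # some loaders insist on, leaving the contents untouched.
    src = Path(model_dir) / "params.json"
    dst = Path(model_dir) / "config.json"
    shutil.copyfile(src, dst)
    return dst
```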
Got ya, changing approach a little.
You could just install vllm==0.6.1 and infer out of the box. The reasoning behind changing the convention from config.json to params.json is given here: https://github.com/vllm-project/vllm/pull/8168#issuecomment-2330341084