This is excellent, really impressive! Are there any plans to release smaller model sizes? For example, a 3B or 7B model trained on the same dataset could be useful as a draft model for speculative decoding. TIA
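For context, here is a minimal sketch of how a smaller checkpoint could serve as the draft model via assisted generation in `transformers` (the target must share a tokenizer with the draft, which is why a same-data/same-vocabulary release matters). The model names below are placeholders, not real checkpoints:

```python
# Minimal sketch of speculative (assisted) decoding with transformers.
# "org/model-70b" and "org/model-3b" are hypothetical placeholder names.
from transformers import AutoModelForCausalLM, AutoTokenizer

target_name = "org/model-70b"  # placeholder: the released large model
draft_name = "org/model-3b"    # placeholder: the hoped-for smaller model

tokenizer = AutoTokenizer.from_pretrained(target_name)
target = AutoModelForCausalLM.from_pretrained(target_name, device_map="auto")
draft = AutoModelForCausalLM.from_pretrained(draft_name, device_map="auto")

inputs = tokenizer(
    "Explain speculative decoding in one sentence.",
    return_tensors="pt",
).to(target.device)

# The draft model proposes several tokens at a time; the target model
# verifies them, so the output matches decoding from the target alone
# while reducing the number of expensive target forward passes.
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```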