GGML f16, q4_0, q4_1, q4_2, q4_3

#7
by oeathus - opened

Can you give me a heads up on how to plug these in and perform some local inference on my Mac? Here is what I have so far:

def hugging_local(text="Can you please let us know more details about your "):
    from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
    from langchain import PromptTemplate, LLMChain
    from langchain.llms import HuggingFacePipeline

    tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablelm-tuned-alpha-7b")
    model = AutoModelForCausalLM.from_pretrained("stabilityai/stablelm-tuned-alpha-7b")

    # HuggingFacePipeline wraps a transformers pipeline, not a raw model/tokenizer pair
    pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=64)
    llm = HuggingFacePipeline(pipeline=pipe)

    template = """Question: {question}

    Answer: """
    prompt = PromptTemplate(template=template, input_variables=["question"])
    llm_chain = LLMChain(prompt=prompt, llm=llm)

    question = "Who won the FIFA World Cup in the year 1994? "

    return llm_chain.run(question)

if __name__ == '__main__':
    test_text = "Can you please let us know more details about your "

    # result = hugging_lang()
    # result = hugging_raw(text=test_text)
    result = hugging_local(text=test_text)

    print(result)

I'm still wrapping my head around the GGML format. My understanding is that it's a custom serialized binary format that packs the (often quantized) weights together with the tokenizer vocabulary and model hyperparameters in a single file, rather than the usual set of Hugging Face config and weight files. I don't think you can run these files with the Hugging Face transformers library, but I'm not terribly confident about that.
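If transformers can't load them, a ggml-based runtime should be able to. Here's a minimal sketch, assuming the ctransformers package is installed and that one of the .bin files from this repo has already been downloaded locally; the file name below is hypothetical, and since StableLM is a GPT-NeoX-style model I'm assuming model_type="gpt_neox":

from ctransformers import AutoModelForCausalLM

# Path to a downloaded GGML file from this repo -- hypothetical file name,
# substitute whichever quantization (f16, q4_0, q4_1, ...) you actually grabbed.
llm = AutoModelForCausalLM.from_pretrained(
    "./stablelm-tuned-alpha-7b-q4_0.bin",
    model_type="gpt_neox",  # StableLM uses the GPT-NeoX architecture
)

print(llm("Question: Who won the FIFA World Cup in the year 1994?\n\nAnswer: "))

That keeps everything on the CPU, which is the usual way to run ggml files on a Mac.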

Okay, yeah, I am struggling. I was also trying to use the hosted inference and it just times out constantly.

ldilov/stablelm-tuned-alpha-7b-4bit-128g-descact-sym-true-sequential
