---
language:
- it
license: apache-2.0
library_name: transformers
tags:
- text-generation-inference
- unsloth
- gemma
- gemma2
- trl
- word-game
- rebus
- italian
- word-puzzle
- crossword
datasets:
- gsarti/eureka-rebus
base_model: unsloth/gemma-2-2b-bnb-4bit
model-index:
- name: gsarti/gemma-2-2b-rebus-solver-fp16
  results:
  - task:
      type: verbalized-rebus-solving
      name: Verbalized Rebus Solving
    dataset:
      type: gsarti/eureka-rebus
      name: EurekaRebus
      config: llm_sft
      split: test
      revision: 0f24ebc3b66cd2f8968077a5eb058be1d5af2f05
    metrics:
    - type: exact_match
      value: 0.43
      name: First Pass Exact Match
    - type: exact_match
      value: 0.36
      name: Solution Exact Match
---

# Gemma-2 2B Verbalized Rebus Solver 🇮🇹

This model is a parameter-efficient fine-tuned version of Gemma-2 2B trained for verbalized rebus solving in Italian, as part of the [release](https://huggingface.co/collections/gsarti/verbalized-rebus-clic-it-2024-66ab8f11cb04e68bdf4fb028) for our paper [Non Verbis, Sed Rebus: Large Language Models are Weak Solvers of Italian Rebuses](https://arxiv.org/abs/2408.00584).

The task of verbalized rebus solving consists of converting an encrypted sequence of letters and crossword definitions into a solution phrase matching the word lengths specified in the solution key. An example is provided below.

The model was trained in 4-bit precision for 5070 steps on the verbalized subset of the [EurekaRebus](https://huggingface.co/datasets/gsarti/eureka-rebus) dataset using QLoRA via [Unsloth](https://github.com/unslothai/unsloth) and [TRL](https://github.com/huggingface/trl). In this version, the adapter weights are merged into the base model in half precision, enabling out-of-the-box usage with the `transformers` library. We also provide [adapter checkpoints throughout training](https://huggingface.co/gsarti/gemma-2-2b-rebus-solver-adapters) and an [8-bit GGUF](https://huggingface.co/gsarti/gemma-2-2b-rebus-solver-Q8_0-GGUF) version of this model for analysis and local execution.

## Using the Model

The following example shows how to perform inference using Unsloth:

```python
# With Unsloth (efficient, requires GPU)
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "gsarti/gemma-2-2b-rebus-solver-fp16",
    max_seq_length = 1248,
    load_in_4bit = True,
)

# Inference
verbalized_rebus = "[Materiale espulso dai vulcani] R O [Strumento del calzolaio] [Si trovano ai lati del bacino] C I [Si ingrassano con la polenta] E I N [Contiene scorte di cibi] B [Isola in francese]"
solution_key = "1 ' 5 6 5 3 3 1 14"
template = """<start_of_turn>user
Risolvi gli indizi tra parentesi per ottenere una prima lettura, e usa la chiave di lettura per ottenere la soluzione del rebus.

Rebus: {rebus}
Chiave risolutiva: {key}<end_of_turn>
<start_of_turn>model
"""
prompt = template.format(rebus=verbalized_rebus, key=solution_key)
inputs = tokenizer(prompt, return_tensors="pt")["input_ids"].to("cuda")
outputs = model.generate(input_ids=inputs, max_new_tokens=500, use_cache=True)
model_generations = tokenizer.batch_decode(outputs)
print(model_generations[0])

# Procediamo alla risoluzione del rebus passo per passo:
# - [Materiale espulso dai vulcani] = lava
# - R O = R O
# - [Strumento del calzolaio] = lesina
# - [Si trovano ai lati del bacino] = anche
# - C I = C I
# - [Si ingrassano con la polenta] = oche
# - E I N = E I N
# - [Contiene scorte di cibi] = dispensa
# - B = B
# - [Isola in francese] = ile
#
# Prima lettura: lava R O lesina anche C I oche E I N dispensa B ile
#
# Ora componiamo la soluzione seguendo la chiave risolutiva:
# 1 = L
# ' = '
# 5 = avaro
# 6 = lesina
# 5 = anche
# 3 = ciò
# 3 = che
# 1 = è
# 14 = indispensabile
#
# Soluzione: L'avaro lesina anche ciò che è indispensabile
```

See the official [code release](https://github.com/gsarti/verbalized-rebus) for more examples.
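### Usage with 🤗 Transformers

Since the adapter weights are merged into the base model in half precision, the model can also be loaded without Unsloth. The snippet below is a minimal sketch of this path (it is not taken from the official code release): it assumes a CUDA-capable GPU with `accelerate` installed and reuses the prompt format from the Unsloth example above.

```python
# With 🤗 Transformers (merged weights, no Unsloth required)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gsarti/gemma-2-2b-rebus-solver-fp16"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # adapter weights are merged in half precision
    device_map="auto",          # requires `accelerate`
)

verbalized_rebus = "[Materiale espulso dai vulcani] R O [Strumento del calzolaio] [Si trovano ai lati del bacino] C I [Si ingrassano con la polenta] E I N [Contiene scorte di cibi] B [Isola in francese]"
solution_key = "1 ' 5 6 5 3 3 1 14"
prompt = f"""<start_of_turn>user
Risolvi gli indizi tra parentesi per ottenere una prima lettura, e usa la chiave di lettura per ottenere la soluzione del rebus.

Rebus: {verbalized_rebus}
Chiave risolutiva: {solution_key}<end_of_turn>
<start_of_turn>model
"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=500)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```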
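### Validating solutions against the key

A generated solution is well-formed only if its word lengths match the solution key, with apostrophes appearing as literal `'` tokens in the key. The helper below is a small hypothetical sketch of such a check; `matches_key` is our own name and is not part of the official code release.

```python
import re

def matches_key(solution: str, key: str) -> bool:
    """Check that each word of `solution` has the length given by `key`.

    Example key: "1 ' 5 6 5 3 3 1 14" (apostrophes are literal tokens).
    """
    # Split the solution into words and apostrophes:
    # "L'avaro lesina" -> ["L", "'", "avaro", "lesina"]
    tokens = re.findall(r"'|[^\s']+", solution)
    expected = key.split()
    if len(tokens) != len(expected):
        return False
    for token, spec in zip(tokens, expected):
        if spec == "'":
            if token != "'":
                return False
        elif len(token) != int(spec):
            return False
    return True

print(matches_key("L'avaro lesina anche ciò che è indispensabile", "1 ' 5 6 5 3 3 1 14"))  # True
```

Note that this validates only the format of a solution, not the correctness of the first pass or of the individual words.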
### Local usage with Ollama

A ready-to-use local version of this model is hosted on the [Ollama Hub](https://ollama.com/gsarti/gemma2-2b-rebus-solver) and can be used as follows:

```shell
ollama run gsarti/gemma2-2b-rebus-solver "Rebus: [Materiale espulso dai vulcani] R O [Strumento del calzolaio] [Si trovano ai lati del bacino] C I [Si ingrassano con la polenta] E I N [Contiene scorte di cibi] B [Isola in francese]\nChiave risolutiva: 1 ' 5 6 5 3 3 1 14"
```

## Limitations

**Lexical overfitting**: As remarked in the related publication, the model overfits the set of definitions and answers seen for first-pass words during training. As a result, performance degrades significantly on rebuses whose definitions resolve to words that were [explicitly withheld](https://huggingface.co/datasets/gsarti/eureka-rebus/blob/main/ood_words.txt) from the training set. You can compare model performance on the [in-domain](https://huggingface.co/datasets/gsarti/eureka-rebus/blob/main/id_test.jsonl) and [out-of-domain](https://huggingface.co/datasets/gsarti/eureka-rebus/blob/main/ood_test.jsonl) test examples to verify this limitation.

## Model curators

For problems or updates on this model, please contact [gabriele.sarti996@gmail.com](mailto:gabriele.sarti996@gmail.com).

## Citation Information

If you use this model in your work, please cite our paper as follows:

```bibtex
@article{sarti-etal-2024-rebus,
    title = "Non Verbis, Sed Rebus: Large Language Models are Weak Solvers of Italian Rebuses",
    author = "Sarti, Gabriele and Caselli, Tommaso and Nissim, Malvina and Bisazza, Arianna",
    journal = "ArXiv",
    month = jul,
    year = "2024",
    volume = {abs/2408.00584},
    url = {https://arxiv.org/abs/2408.00584},
}
```

## Acknowledgements

We are grateful to the [Associazione Culturale "Biblioteca Enigmistica Italiana - G. Panini"](http://www.enignet.it/home) for making its rebus collection freely accessible on the [Eureka5 platform](http://www.eureka5.it). This model was trained with [Unsloth](https://github.com/unslothai/unsloth).