---
tags:
- generated_from_trainer
datasets:
- funsd
model-index:
- name: layoutlm-funsd
  results: []
---

# layoutlm-funsd

This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on the funsd dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1641
- Answer: {'precision': 0.2191977077363897, 'recall': 0.18912237330037082, 'f1': 0.2030524220305242, 'number': 809}
- Header: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119}
- Question: {'precision': 0.498793242156074, 'recall': 0.5821596244131455, 'f1': 0.537261698440208, 'number': 1065}
- Overall Precision: 0.3978
- Overall Recall: 0.3879
- Overall F1: 0.3928
- Overall Accuracy: 0.6124

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 128
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Answer | Header | Question | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:--------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 1.9962 | 1.0 | 2 | 1.8687 | {'precision': 0.03483768804433888, 'recall': 0.1631644004944376, 'f1': 0.05741626794258374, 'number': 809} | {'precision': 0.003629764065335753, 'recall': 0.01680672268907563, 'f1': 0.0059701492537313425, 'number': 119} | {'precision': 0.0681198910081744, 'recall': 0.046948356807511735, 'f1': 0.05558643690939411, 'number': 1065} | 0.0363 | 0.0923 | 0.0521 | 0.2218 |
| 1.8538 | 2.0 | 4 | 1.7624 | {'precision': 0.024963994239078253, 'recall': 0.06427688504326329, 'f1': 0.0359612724757953, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.11979166666666667, 'recall': 0.0863849765258216, 'f1': 0.10038188761593017, 'number': 1065} | 0.0489 | 0.0723 | 0.0583 | 0.3158 |
| 1.7559 | 3.0 | 6 | 1.6781 | {'precision': 0.02198768689533861, 'recall': 0.030902348578491966, 'f1': 0.025693730729701957, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.14264264264264265, 'recall': 0.0892018779342723, 'f1': 0.1097631426920855, 'number': 1065} | 0.0662 | 0.0602 | 0.0631 | 0.3594 |
| 1.6555 | 4.0 | 8 | 1.6110 | {'precision': 0.02254791431792559, 'recall': 0.024721878862793572, 'f1': 0.023584905660377357, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.21360544217687075, 'recall': 0.14741784037558686, 'f1': 0.17444444444444446, 'number': 1065} | 0.1091 | 0.0888 | 0.0979 | 0.3716 |
| 1.5984 | 5.0 | 10 | 1.5498 | {'precision': 0.025933609958506226, 'recall': 0.030902348578491966, 'f1': 0.028200789622109423, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.25252525252525254, 'recall': 0.2347417840375587, 'f1': 0.24330900243309003, 'number': 1065} | 0.1407 | 0.1380 | 0.1393 | 0.4088 |
| 1.5398 | 6.0 | 12 | 1.4863 | {'precision': 0.0392156862745098, 'recall': 0.049443757725587144, 'f1': 0.04373974849644614, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.30129240710823907, 'recall': 0.3502347417840376, 'f1': 0.3239253148067737, 'number': 1065} | 0.1829 | 0.2072 | 0.1943 | 0.4586 |
| 1.4917 | 7.0 | 14 | 1.4250 | {'precision': 0.056910569105691054, 'recall': 0.069221260815822, 'f1': 0.06246514221974344, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.33839150227617604, 'recall': 0.4187793427230047, 'f1': 0.37431808644565673, 'number': 1065} | 0.2181 | 0.2519 | 0.2338 | 0.4970 |
| 1.4358 | 8.0 | 16 | 1.3673 | {'precision': 0.07244785949506037, 'recall': 0.0815822002472188, 'f1': 0.07674418604651162, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.37908011869436203, 'recall': 0.47981220657276996, 'f1': 0.42353916286779947, 'number': 1065} | 0.2554 | 0.2895 | 0.2714 | 0.5278 |
| 1.3702 | 9.0 | 18 | 1.3178 | {'precision': 0.08588957055214724, 'recall': 0.0865265760197775, 'f1': 0.08620689655172414, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.4203338391502276, 'recall': 0.5201877934272301, 'f1': 0.46496013428451527, 'number': 1065} | 0.2925 | 0.3131 | 0.3025 | 0.5501 |
| 1.3251 | 10.0 | 20 | 1.2767 | {'precision': 0.11455525606469003, 'recall': 0.10506798516687268, 'f1': 0.1096067053513862, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.44755244755244755, 'recall': 0.5408450704225352, 'f1': 0.489795918367347, 'number': 1065} | 0.3258 | 0.3317 | 0.3287 | 0.5662 |
| 1.2664 | 11.0 | 22 | 1.2394 | {'precision': 0.13525179856115108, 'recall': 0.1161928306551298, 'f1': 0.125, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.4676034348165496, 'recall': 0.5624413145539906, 'f1': 0.5106564364876385, 'number': 1065} | 0.3507 | 0.3477 | 0.3492 | 0.5804 |
| 1.262 | 12.0 | 24 | 1.2080 | {'precision': 0.17008797653958943, 'recall': 0.1433868974042027, 'f1': 0.15560026827632462, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.4767899291896145, 'recall': 0.5690140845070423, 'f1': 0.5188356164383563, 'number': 1065} | 0.3697 | 0.3623 | 0.3659 | 0.5941 |
| 1.2172 | 13.0 | 26 | 1.1847 | {'precision': 0.19476744186046513, 'recall': 0.16563658838071693, 'f1': 0.17902471609886442, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.48330683624801274, 'recall': 0.5708920187793427, 'f1': 0.5234610417563496, 'number': 1065} | 0.3811 | 0.3723 | 0.3766 | 0.6026 |
| 1.2014 | 14.0 | 28 | 1.1703 | {'precision': 0.2178932178932179, 'recall': 0.18665018541409148, 'f1': 0.20106524633821574, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.49437299035369775, 'recall': 0.5774647887323944, 'f1': 0.5326981377219576, 'number': 1065} | 0.3950 | 0.3843 | 0.3896 | 0.6100 |
| 1.1677 | 15.0 | 30 | 1.1641 | {'precision': 0.2191977077363897, 'recall': 0.18912237330037082, 'f1': 0.2030524220305242, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.498793242156074, 'recall': 0.5821596244131455, 'f1': 0.537261698440208, 'number': 1065} | 0.3978 | 0.3879 | 0.3928 | 0.6124 |

### Framework versions

- Transformers 4.22.1
- Pytorch 1.12.1+cu102
- Datasets 2.5.1
- Tokenizers 0.12.1
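Two quick sanity checks on the numbers reported above: each per-field f1 is the harmonic mean of its precision and recall, and the Step column advances by 2 per epoch, which is consistent with FUNSD's small training split (149 annotated forms) divided by the train batch size of 128. A minimal sketch, using the final-epoch "Question" metrics copied from the results table (the 149-form figure is an assumption about the dataset split, not stated in this card):

```python
import math

def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (defined as 0 when both are 0)."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Final-epoch "Question" metrics, copied verbatim from the results table.
p, r = 0.498793242156074, 0.5821596244131455
assert abs(f1(p, r) - 0.537261698440208) < 1e-9

# FUNSD's training split has 149 forms (assumption); with train_batch_size=128,
# that is ceil(149 / 128) = 2 optimizer steps per epoch, matching the Step column.
steps_per_epoch = math.ceil(149 / 128)
print(steps_per_epoch)  # 2
```

The same identity holds for every non-degenerate row of the table; the Header field reports 0.0 across the board because the model never predicts that (rare) class at this stage of training.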