
Big-Bird-base span-detection model (based on extractive QA) for source detection. It is similar to alex2awesome/quote-attribution__qa-model, but trained on substantially more data.
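Since this is an extractive-QA-style span model, it can presumably be driven through the `transformers` `question-answering` pipeline. The sketch below is a hedged illustration only: the repo id (borrowed from the similar model named above) and the question phrasing are assumptions, not the documented training interface.

```python
# Hedged usage sketch for an extractive-QA span-detection model.
# Both the model id and the question wording are illustrative placeholders.

def find_source_span(
    context: str,
    question: str = "Who is the source of this statement?",  # assumed phrasing
    model_id: str = "alex2awesome/quote-attribution__qa-model",  # placeholder id
):
    """Return the predicted source span: (answer text, start char, end char)."""
    from transformers import pipeline  # lazy import; requires `transformers`

    qa = pipeline("question-answering", model=model_id)
    pred = qa(question=question, context=context)
    return pred["answer"], pred["start"], pred["end"]
```

The `start`/`end` offsets returned by the pipeline are character positions in `context`, which is what makes the output usable for span-level attribution rather than just answer text.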

The model achieves the following scores on the original gold-label training dataset described in https://arxiv.org/pdf/2305.14904.pdf:

| Source type | loss | f1 | e |
|---|---|---|---|
| QUOTE | 2.023 | 0.622 | 0.654 |
| BACKGROUND | 1.966 | 0.672 | 0.701 |
| PUBLISHED WORK | 3.991 | 0.501 | 0.561 |
| STATEMENT | 2.537 | 0.567 | 0.713 |
| DOCUMENT | 5.589 | 0.290 | 0.350 |
| SOCIAL MEDIA POST | 8.033 | 0.071 | 0.286 |
| PRESS REPORT | 5.141 | 0.423 | 0.508 |
| DECLINED COMMENT | 2.608 | 0.375 | 0.563 |
| PROPOSAL/ORDER/LAW | 3.419 | 0.301 | 0.471 |
| PRICE SIGNAL | 6.790 | 0.284 | 0.368 |
| NARRATIVE | 1.577 | 0.667 | 0.756 |
| Other: DIRECT OBSERVATION | 6.768 | 0.000 | 0.125 |
| COMMUNICATION, NOT TO JOURNO | 3.517 | 0.524 | 0.578 |
| PUBLIC SPEECH, NOT TO JOURNO | 1.305 | 0.313 | 0.813 |
| PROPOSAL | 8.144 | 0.186 | 0.276 |
| TWEET | 4.489 | 0.176 | 0.235 |
| VOTE/POLL | 5.416 | 0.362 | 0.348 |
| LAWSUIT | 5.852 | 0.287 | 0.479 |
| DIRECT OBSERVATION | 6.070 | 0.020 | 0.200 |
| Other: PROPOSAL | 1.819 | 0.600 | 0.600 |
| Other: LAWSUIT | 7.364 | 0.176 | 0.176 |
| Other: Campaign filing | 1.936 | 0.501 | 0.500 |
| Other: Campaign Filing | 1.671 | 0.375 | 0.375 |
| Other: Data Analysis | 5.298 | 0.000 | 0.000 |
| Other: Evaluation | 2.200 | 0.000 | 0.667 |
| **full** | 2.478 | 0.574 | 0.632 |

(Values rounded to three decimals.)
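The raw evaluation output is a flat dict with one key per (type, metric) pair, in the form `<TYPE>_loss`, `<TYPE>_f1`, `<TYPE>_e`. A small helper like the hypothetical one below can regroup it into one record per source type for tabulating or sorting:

```python
# Hypothetical helper: regroup flat "<TYPE>_<metric>" keys into nested records.

def group_metrics(flat: dict) -> dict:
    """Map {"QUOTE_loss": x, ...} to {"QUOTE": {"loss": x, ...}, ...}."""
    grouped = {}
    for key, value in flat.items():
        # Split on the LAST underscore so type names containing spaces,
        # slashes, or colons (e.g. "PROPOSAL/ORDER/LAW") stay intact.
        name, _, metric = key.rpartition("_")
        grouped.setdefault(name, {})[metric] = value
    return grouped

# Example with a small subset of the reported values:
metrics = {"QUOTE_loss": 2.023, "QUOTE_f1": 0.622, "QUOTE_e": 0.654,
           "full_loss": 2.478, "full_f1": 0.574, "full_e": 0.632}
rows = group_metrics(metrics)
# rows["QUOTE"] -> {"loss": 2.023, "f1": 0.622, "e": 0.654}
```

Sorting `rows.items()` by `r["f1"]` then makes it easy to see which source types the model handles well (QUOTE, BACKGROUND, NARRATIVE) versus poorly (DIRECT OBSERVATION, the sparse "Other:" categories).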
