---
license: llama2
datasets:
  - hlab/SocialiteInstructions
language:
  - en
library_name: transformers
---

# Socialite-Llama Model Card

## Model Details

Model type: Socialite-Llama is an open-source Llama2 7B model instruction-tuned on SocialiteInstructions. On a suite of 20 social scientific tasks, Socialite-Llama improves upon the performance of Llama and matches or exceeds a state-of-the-art, multi-task finetuned model on a majority of them. Socialite-Llama also improves over Llama on 5 out of 6 related, held-out social tasks, suggesting that instruction tuning can lead to generalized social understanding.

Model date: Socialite-Llama was trained in October 2023.

Paper: https://arxiv.org/abs/2402.01980

GitHub: https://github.com/humanlab/socialitellama
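
A minimal usage sketch with the Transformers library is shown below, assuming the `hlab/SocialiteLlama` repository id from this page; the prompt wording and generation settings are illustrative placeholders rather than the exact format used in the paper.

```python
# Minimal sketch: load Socialite-Llama with Transformers and run one prompt.
# The repo id comes from this model page; the prompt text and generation
# settings below are illustrative assumptions, not the paper's exact format.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hlab/SocialiteLlama"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # assumes a GPU with enough memory for a 7B model
    device_map="auto",
)

# Example instruction-style prompt (placeholder wording).
prompt = (
    "Instruction: Classify whether the following text expresses empathy.\n"
    "Input: I know how hard this has been for you, and I'm here if you need anything.\n"
    "Output:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=32, do_sample=False)

# Decode only the newly generated tokens.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```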

## License

Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.

## Training Dataset

- 108k datapoints of `<Instruction, Input, Output>` triplets (a loading sketch follows this list)
- 20 diverse datasets covering a broad range of social scientific tasks
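
A minimal sketch for inspecting the training data with the `datasets` library, assuming the `hlab/SocialiteInstructions` dataset id listed in the metadata above; the split and column names printed here depend on how the dataset is hosted and may differ.

```python
# Minimal sketch: peek at the SocialiteInstructions training data.
# Assumes the hlab/SocialiteInstructions dataset id from the metadata above;
# split and column names are whatever the hub provides and may differ.
from datasets import load_dataset

dataset = load_dataset("hlab/SocialiteInstructions")

print(dataset)                     # available splits and their sizes
first_split = next(iter(dataset))  # take whichever split is listed first
example = dataset[first_split][0]
print(example.keys())              # e.g. instruction / input / output style fields
print(example)
```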

## Evaluation Dataset

- 20 tasks which the model has seen during training
- 6 related social tasks which the model has not seen during training

## Intended Use

The primary intended use of Socialite-Llama is research on Computational Social Science and Cultural Analytics.

## Citation Information

```bibtex
@inproceedings{dey-etal-2024-socialite,
  title     = {{SOCIALITE}-{LLAMA}: An Instruction-Tuned Model for Social Scientific Tasks},
  author    = {Dey, Gourab and V Ganesan, Adithya and Lal, Yash Kumar and Shah, Manal and Sinha, Shreyashee and Matero, Matthew and Giorgi, Salvatore and Kulkarni, Vivek and Schwartz, H. Andrew},
  booktitle = {18th Conference of the European Chapter of the Association for Computational Linguistics},
  address   = {St. Julian’s, Malta},
  publisher = {Association for Computational Linguistics},
  year      = {2024}
}
```
