PeepDaSlan9
committed on
Commit • cf18d25
Parent(s): 70689b2
B2BMGMT_3.5_Fine-tuning
LLMs.csv
ADDED
@@ -0,0 +1,81 @@
+name,owner,trained on x billion parameters,date,note / * = parameters undisclosed,link
+BERT,Google,0.34,Oct 2018,,https://en.wikipedia.org/wiki/BERT_(language_model)
+GPT-2,OpenAI,1.5,Feb 2019,trained on Reddit only,https://en.wikipedia.org/wiki/GPT-2
+T5,Google,11,Oct 2019,,https://arxiv.org/abs/1910.10683
+Megatron-11B,Meta / Facebook,11,Apr 2020,,https://github.com/pytorch/fairseq/tree/main/examples/megatron_11b
+BlenderBot1,Meta / Facebook,9.4,Apr 2020,,https://cobusgreyling.medium.com/meta-ais-blender-bot-3-0-is-an-open-source-chatbot-with-long-term-memory-internet-search-ce024a5fe8aa
+GPT-3,OpenAI,175,May 2020,,https://en.wikipedia.org/wiki/GPT-3
+Wu Dao 2.0,Beijing Academy of AI,1750,Jan 2021,,https://en.wikipedia.org/wiki/Wu_Dao
+GPT-J,EleutherAI,6,Jun 2021,,https://huggingface.co/EleutherAI/gpt-j-6b
+PanGu-Alpha,Huawei,200,Apr 2021,,https://arxiv.org/abs/2104.12369
+LaMDA,Google,137,Jun 2021,,https://en.wikipedia.org/wiki/LaMDA
+BlenderBot2.0,Meta / Facebook,9.4,Jul 2021,,https://cobusgreyling.medium.com/meta-ais-blender-bot-3-0-is-an-open-source-chatbot-with-long-term-memory-internet-search-ce024a5fe8aa
+Jurassic-1,AI21,178,Aug 2021,,https://www.ai21.com/blog/announcing-ai21-studio-and-jurassic-1
+Codex,OpenAI,12,Aug 2021,Generates programming code,https://arxiv.org/abs/2107.03374
+FLAN,Google,137,Sep 2021,,https://arxiv.org/abs/2109.01652
+PLATO-XL,Baidu,11,Sep 2021,chatbot,https://arxiv.org/abs/2109.09519
+WeLM,WeChat,10,Sep 2022,87% Chinese language,https://arxiv.org/abs/2209.10372
+xlarge,Cohere,52.4,Sep 2021,"Trained on ""ebooks and webpages""",https://arxiv.org/abs/2108.07790
+Megatron-Turing NLG,Microsoft / NVIDIA,530,Oct 2021,,https://developer.nvidia.com/megatron-turing-natural-language-generation
+MT-NLG,Microsoft,530,Oct 2021,,https://arxiv.org/abs/2201.11990
+BERT-200,Google,200,Nov 2021,,https://cloud.google.com/blog/topics/tpus/google-showcases-cloud-tpu-v4-pods-for-large-model-training
+BERT-480,Google,480,Nov 2021,,https://cloud.google.com/blog/topics/tpus/google-showcases-cloud-tpu-v4-pods-for-large-model-training
+Luminous,Aleph Alpha,200,Nov 2021,German-language,https://www.aleph-alpha.de/pricing
+Ernie 3.0 Titan,Baidu,260,Dec 2021,,https://www.marktechpost.com/2021/12/29/baidu-and-pcl-team-introduce-ernie-3-0-titan-a-pre-training-language-model-with-260-billion-parameters/
+GLaM,Google,1200,Dec 2021,,https://ai.googleblog.com/2021/12/more-efficient-in-context-learning-with.html
+Gopher,Google DeepMind,280,Dec 2021,,https://www.deepmind.com/blog/language-modelling-at-scale-gopher-ethical-considerations-and-retrieval
+GPT-NeoX,EleutherAI,20,Feb 2022,,https://huggingface.co/docs/transformers/model_doc/gpt_neox
+GPT Neo,EleutherAI,2.7,Feb 2022,,https://huggingface.co/docs/transformers/model_doc/gpt_neo
+Chinchilla,DeepMind,70,Mar 2022,,https://arxiv.org/abs/2203.15556v1
+CodeGen,Salesforce,16,Mar 2022,Generates programming code,https://arxiv.org/abs/2203.13474
+InCoder,Meta,6.7,Apr 2022,generates Python and JavaScript,https://arxiv.org/abs/2204.05999
+mGPT,Sber,13,Apr 2022,60 languages,https://arxiv.org/abs/2204.07580
+PaLM,Google,540,Apr 2022,,https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html
+OPT-IML,Meta AI,175,May 2022,,https://arxiv.org/abs/2212.12017
+Minerva,Google,540,Jun 2022,,https://ai.googleblog.com/2022/06/minerva-solving-quantitative-reasoning.html
+YaLM 100B,Yandex,100,Jun 2022,Russian / English,https://huggingface.co/yandex/yalm-100b
+BLOOM,BigScience,175,Jul 2022,,https://huggingface.co/bigscience/bloom
+FIM 6.9B,OpenAI,6.9,Jul 2022,,https://arxiv.org/pdf/2207.14255.pdf
+NLLB-200,Meta AI,54.5,Jul 2022,200-language translation,https://ai.facebook.com/blog/nllb-200-high-quality-machine-translation/
+GLM-130B,Tsinghua & Zhipu,130,Aug 2022,,https://huggingface.co/spaces/THUDM/GLM-130B
+Atlas,Meta,11,Aug 2022,,https://arxiv.org/abs/2208.03299
+BlenderBot3,Meta / Facebook,175,Aug 2022,,https://cobusgreyling.medium.com/meta-ais-blender-bot-3-0-is-an-open-source-chatbot-with-long-term-memory-internet-search-ce024a5fe8aa
+AlexaTM,Amazon,20,Aug 2022,trained on Wikipedia and mC4 only,https://www.amazon.science/blog/20b-parameter-alexa-model-sets-new-marks-in-few-shot-learning
+PaLI,Google,17,Sep 2022,Vision model,https://arxiv.org/abs/2209.06794
+Sparrow,DeepMind,70,Sep 2022,powered by Chinchilla,https://en.wikipedia.org/wiki/Sparrow_(bot)
+MT5,Google,13,Oct 2022,101 languages,https://huggingface.co/google/mt5-base
+Galactica,Meta / Facebook,120,Nov 2022,scientific only,https://www.technologyreview.com/2022/11/18/1063487/meta-large-language-model-ai-only-survived-three-days-gpt-3-science/
+ChatGPT,OpenAI,12,Nov 2022,,https://en.wikipedia.org/wiki/ChatGPT
+RL-CAI,Anthropic,52,Dec 2022,,https://lifearchitect.ai/anthropic/
+Exaone,LG,300,Dec 2022,,https://sourceforge.net/software/product/EXAONE/
+GPT 3.5,OpenAI,175,Dec 2022,,https://openai.com/blog/chatgpt
+WebGPT,OpenAI / Microsoft,175,Jan 2023,,https://openai.com/research/webgpt
+Claude,Anthropic,52,Jan 2023,,https://arstechnica.com/information-technology/2023/03/anthropic-introduces-claude-a-more-steerable-ai-competitor-to-chatgpt/
+LLaMa,Meta / Facebook,65,Feb 2023,,https://ai.facebook.com/blog/large-language-model-llama-meta-ai/
+Luminous Supreme,Aleph Alpha,70,Feb 2023,German-language,https://docs.aleph-alpha.com/docs/introduction/prompting_and_completion/#zero-shot-learning-with-luminous-supreme-control
+PanGu-Sigma,Huawei,1085,Mar 2023,,https://arxiv.org/abs/2303.10845
+Bard*,Google,0.7,Feb 2023,powered by LaMDA,https://techmonitor.ai/technology/ai-and-automation/google-i-o-bard-chatbot-llm-palm2-gemini
+Alpaca,Stanford,7,Mar 2023,,https://github.com/tatsu-lab/stanford_alpaca
+BloombergGPT,Bloomberg,50,Mar 2023,Finance-focused (of course),https://arxiv.org/abs/2303.17564
+Cerebras-GPT,Cerebras,13,Mar 2023,open-source,https://www.cerebras.net/blog/cerebras-gpt-a-family-of-open-compute-efficient-large-language-models/
+Ernie Bot,Baidu,200,Mar 2023,,https://www.prnewswire.com/news-releases/baidu-unveils-ernie-bot-the-latest-generative-ai-mastering-chinese-language-and-multi-modal-generation-301774240.html
+GPT-4*,OpenAI,1000,Mar 2023,,https://en.wikipedia.org/wiki/GPT-4
+GPT4All-LoRA,Nomic,7,Mar 2023,open-source chatbot based on LLaMA,https://s3.amazonaws.com/static.nomic.ai/gpt4all/2023_GPT4All_Technical_Report.pdf
+Jurassic-2*,AI21,200,Mar 2023,,https://thenewstack.io/ai21-labs-releases-jurassic-2-its-new-large-language-model/
+Koala-13B,Berkeley,13,Apr 2023,Based on LLaMA,https://bair.berkeley.edu/blog/2023/04/03/koala/
+StableLM,Stability AI,65,Apr 2023,open-source from the makers of Stable Diffusion,https://github.com/stability-AI/stableLM/
+Dolly 2.0,Databricks,12,Apr 2023,open-source,https://arstechnica.com/information-technology/2023/04/a-really-big-deal-dolly-is-a-free-open-source-chatgpt-style-ai-model/
+SenseChat,SenseTime,200,Apr 2023,,https://www.silicon.co.uk/e-innovation/artificial-intelligence/sensetime-ai-505764
+Titan,Amazon,350,Apr 2023,,https://aws.amazon.com/bedrock/titan/
+Tongyi Qianwen,Alibaba,200,Apr 2023,"name roughly translates to ""truth from a thousand questions""",https://www.theregister.com/2023/04/11/alibaba_tongyi_qianwen_llm/
+Hugging Chat,LAION,30,Apr 2023,,https://techcrunch.com/2023/04/25/hugging-face-releases-its-own-version-of-chatgpt/
+BingChat*,Microsoft / OpenAI,1000,Apr 2023,Microsoft's version of ChatGPT,https://www.zdnet.com/article/how-to-use-the-new-bing-and-how-its-different-from-chatgpt/
+PaLM2,Google,540,May 2023,"Trained on 100 languages and 20 programming languages. Google says the new model is better at common-sense reasoning, mathematics and logic",https://techcrunch.com/2023/05/10/google-launches-palm-2-its-next-gen-large-language-model/
+Vicuna-13B,Vicuna Team,13,Mar 2023,open-source chatbot trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT,https://lmsys.org/blog/2023-03-30-vicuna/
+Falcon LLM,Technology Innovation Institute,40,Jun 2023,foundational large language model (LLM) with 40 billion parameters trained on one trillion tokens,https://falconllm.tii.ae/
+Sail-7B,Open Language Safety Research,7,Jun 2023,search-engine-grounded large language model based on LLaMA-7B,https://openlsr.org/sail-7b
+Web LLM,Independent,7,Jun 2023,Browser-based LLM chatbot,https://simonwillison.net/2023/Apr/16/web-llm/
+OpenLLM,Independent,13,Jun 2023,,https://huggingface.co/openlm-research/open_llama_13b_easylm
+Ernie Bot 3.5,Baidu,200,Jul 2023,"Surpasses ChatGPT (3.5) in comprehensive ability scores, outperforms GPT-4 in several Chinese-language capabilities, and supports plugins",http://research.baidu.com/Blog/index-view?id=185
+Claude 2,Anthropic,52,Jul 2023,"Expanded input and output length (up to 100,000 tokens), allowing the model to analyze long documents such as technical guides or entire books",https://arstechnica.com/information-technology/2023/07/new-chatgpt-rival-claude-2-launches-for-open-beta-testing/
+LLaMa2,Facebook,70,Jul 2023,"Open-source LLM; comes in 3 parameter sizes: 7, 13, and 70 bn",https://venturebeat.com/ai/facebook-parent-meta-unveils-llama-2-open-source-ai-model-for-commercial-use/
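Since some note fields are quoted and contain embedded commas and doubled quotes, the file should be read with a proper CSV parser rather than naive string splitting. A minimal sketch using Python's standard `csv` module (the three rows inlined here are an excerpt of the file above; column names match its header row):

```python
import csv
import io

# Excerpt of LLMs.csv; in practice you would open("LLMs.csv") instead.
sample = """name,owner,trained on x billion parameters,date,note / * = parameters undisclosed,link
BERT,Google,0.34,Oct 2018,,https://en.wikipedia.org/wiki/BERT_(language_model)
GPT-3,OpenAI,175,May 2020,,https://en.wikipedia.org/wiki/GPT-3
Chinchilla,DeepMind,70,Mar 2022,,https://arxiv.org/abs/2203.15556v1
"""

# DictReader keys each row by the header, handling quoted fields correctly.
rows = list(csv.DictReader(io.StringIO(sample)))

# Sort models by parameter count, largest first.
rows.sort(key=lambda r: float(r["trained on x billion parameters"]), reverse=True)
names = [r["name"] for r in rows]
print(names)  # ['GPT-3', 'Chinchilla', 'BERT']
```

Note that the starred entries (e.g. `GPT-4*`) carry estimated, undisclosed parameter counts, so any numeric sort over the full file mixes disclosed and estimated figures.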