TiC-CLIP
Commit e3d1642 by fartashf (1 parent: 193613f)

Add files using large-upload tool

Files changed (1)
  1. README.md +137 -0
README.md ADDED
---
license: other
license_name: custom-apple-license
license_link: https://github.com/apple/ml-tic-clip/blob/main/LICENSE
tags:
- vision
- zero-shot-image-classification
datasets:
- apple/TiC-DataComp
---
# Model Card for TiC-CLIP

<!-- Provide a quick summary of what the model is/does. -->

This repository contains TiC-CLIP models trained on TiC-DataComp-Yearly with data from 2014 to 2022 using our modified OpenCLIP code.
For additional information refer to our [GitHub repo](https://github.com/apple/ml-tic-clip).

## Model Details

### Model Description

Keeping large foundation models up to date with the latest data is inherently expensive.
To avoid the prohibitive costs of constantly retraining, it is imperative to continually train these models.
This problem is exacerbated by the lack of any large-scale continual learning benchmarks or baselines.
We introduce the first set of web-scale Time-Continual (TiC) benchmarks for training vision-language models:
TiC-DataComp, TiC-YFCC, and TiC-RedCaps. TiC-DataComp, our largest dataset,
contains over 12.7B timestamped image-text pairs spanning 9 years (2014-2022).
We first use our benchmarks to curate various dynamic evaluations to measure the temporal robustness of existing models.
We show that OpenAI's CLIP (trained on data up to 2020) loses ≈8% zero-shot accuracy on our curated retrieval task from 2021-2022 compared with more recently trained models in the OpenCLIP repository.
We then study how to efficiently train models on time-continuous data.
We demonstrate that a simple rehearsal-based approach that continues training from the last checkpoint and replays old data reduces compute by 2.5× compared with the standard practice of retraining from scratch.
Code is available at [github.com/apple/ml-tic-clip](https://github.com/apple/ml-tic-clip).

- **Developed by:** Apple
- **License:** See [LICENSE](https://github.com/apple/ml-tic-clip/blob/main/LICENSE)

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [ml-tic-clip GitHub repo](https://github.com/apple/ml-tic-clip)
- **Paper:** [TiC-CLIP: Continual Training of CLIP Models, Garg, S., Farajtabar, M., Pouransari, H., Vemulapalli, R., Mehta, S., Tuzel, O., Shankar, V. and Faghri, F., International Conference on Learning Representations (ICLR), 2024.](https://arxiv.org/abs/2310.16226)

## Uses

Researchers can use TiC-CLIP pretrained models for faster design of continual learning methods by starting from a pretrained checkpoint and continually training on the data from the next year or the next month.

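A minimal sketch of that workflow with the OpenCLIP Python API is below. The architecture name, checkpoint path, and data batches are illustrative placeholders, not values prescribed by this repository; for full-scale runs, follow the training instructions in our [GitHub repo](https://github.com/apple/ml-tic-clip).

```python
# Sketch: rehearsal-based continual training. Initialize from a previous
# checkpoint and train on a mix of new data and replayed old data.
# Architecture name, checkpoint path, and batches are placeholders.
import torch
import open_clip
from open_clip.loss import ClipLoss

# Load a prior checkpoint as the starting point (placeholder names).
model, _, _ = open_clip.create_model_and_transforms(
    "ViT-B-16", pretrained="path/to/previous_year_checkpoint.pt"
)
model.train()

loss_fn = ClipLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5, weight_decay=0.2)


def continual_step(new_batch, replay_batch):
    """One training step on a batch mixing new data with replayed old data.

    Each batch is a (preprocessed_images, tokenized_texts) tuple.
    """
    new_images, new_texts = new_batch
    old_images, old_texts = replay_batch
    images = torch.cat([new_images, old_images])
    texts = torch.cat([new_texts, old_texts])

    # CLIP forward returns normalized features and the learned logit scale.
    image_features, text_features, logit_scale = model(images, texts)
    loss = loss_fn(image_features, text_features, logit_scale)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```
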
50
+ ## How to Get Started with the Model
51
+
52
+ The models are compatible with DataComp evaluation suite and our patched version of DataComp for evaluation on TiC-DataComp-Retrieval and TiC-DataCompNet.
53
+ The models can also be used to resume a training or as initialization for new training using OpenCLIP code.
54
+ Please follow instructions in our [GitHub repo](https://github.com/apple/ml-tic-clip) to create the evaluation sets or follow [DataComp](https://github.com/mlfoundations/datacomp) for the standard evaluations on 38 datasets.
55
+
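For quick interactive use, a checkpoint can also be loaded directly with the OpenCLIP Python API. The sketch below assumes a ViT-B-16 architecture, a local checkpoint path, an image file, and a set of class prompts; all of these are placeholders to be replaced with the model and data you actually use.

```python
# Minimal zero-shot classification sketch with OpenCLIP.
# "ViT-B-16", the checkpoint path, image file, and prompts are placeholders.
import torch
from PIL import Image
import open_clip

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-16", pretrained="path/to/tic_clip_checkpoint.pt"
)
tokenizer = open_clip.get_tokenizer("ViT-B-16")
model.eval()

image = preprocess(Image.open("example.jpg")).unsqueeze(0)
text = tokenizer(["a photo of a dog", "a photo of a cat", "a photo of a car"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    # Class probabilities from cosine similarity scaled by 100 (CLIP convention).
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(probs)
```
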
## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

The models in this repository are trained on TiC-DataComp-Yearly with data from 2014 to 2022; see the [apple/TiC-DataComp](https://huggingface.co/datasets/apple/TiC-DataComp) dataset.

### Training Procedure

Please refer to Sections 2-3 of our [TiC-CLIP paper](https://arxiv.org/abs/2310.16226).

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**
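The paper reference above, formatted as a BibTeX entry (the entry key is illustrative):

```bibtex
@inproceedings{garg2024ticclip,
  title     = {{TiC-CLIP}: Continual Training of {CLIP} Models},
  author    = {Garg, S. and Farajtabar, M. and Pouransari, H. and Vemulapalli, R. and Mehta, S. and Tuzel, O. and Shankar, V. and Faghri, F.},
  booktitle = {International Conference on Learning Representations (ICLR)},
  year      = {2024},
  url       = {https://arxiv.org/abs/2310.16226}
}
```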