Divyasreepat committed on
Commit d9bf76a
1 Parent(s): 0b4ccdd

Update README.md with new model card content

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -1,7 +1,7 @@
 ---
 library_name: keras-hub
 ---
-### Model Overview
+## Model Overview
 BERT (Bidirectional Encoder Representations from Transformers) is a set of language models published by Google. They are intended for classification and embedding of text, not for text-generation. See the model card below for benchmarks, data sources, and intended use cases.
 
 Weights and Keras model code are released under the [Apache 2 License](https://github.com/keras-team/keras-hub/blob/master/LICENSE).
@@ -41,7 +41,7 @@ The following model checkpoints are provided by the Keras team. Full code exampl
 | `bert_large_en_uncased` | 335.14M | 24-layer BERT model where all input is lowercased. |
 | `bert_large_en` | 333.58M | 24-layer BERT model where case is maintained. |
 
-### Example Usage
+## Example Usage
 ```python
 import keras
 import keras_hub
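# -- Continuation sketch, not part of the original commit: the diff is
# -- truncated above. Assumes the keras_hub `TextClassifier.from_preset`
# -- API and the "bert_tiny_en_uncased" preset name; the larger presets
# -- listed in the table above should drop in the same way.
classifier = keras_hub.models.TextClassifier.from_preset(
    "bert_tiny_en_uncased",
    num_classes=2,
)
# Raw strings go straight in; the preset bundles its own preprocessor,
# so no manual tokenization is needed. Output is one logit per class.
preds = classifier.predict(["What an amazing movie!", "A total waste of time."])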