k050506koch committed
Commit 9e99f4d
1 Parent(s): c99c451

Update README.md

Files changed (1)
  1. README.md +4 -7
README.md CHANGED
@@ -1,7 +1,4 @@
----
-{}
----
-# Hugging Face v2 Models README & Model Card
+# v2 Model Card
 
 ## Overview
 
@@ -35,7 +32,7 @@ To use the models for inference, you can send a POST request to the `/generate/<
 
 ### Example Request
 
-```
+```json
 {
 "input_text": "[Ivan Ivanov, Lead Software Engineer, Superhero for Justice, Writing code, fixing issues, solving problems, Masculine, Long Hair, Adult]<|endoftext|>"
 }
@@ -47,7 +44,7 @@ To use the models for inference, you can send a POST request to the `/generate/<
 
 You can load a model and its tokenizer as follows:
 
-```
+```python
 from transformers import GPT2LMHeadModel, GPT2Tokenizer
 model_name = "v2/story/small" # Change to your desired model path
 model = GPT2LMHeadModel.from_pretrained(model_name)
@@ -59,7 +56,7 @@ tokenizer = GPT2Tokenizer.from_pretrained(model_name)
 
 To generate text using the loaded model, use the following code:
 
-```
+```python
 input_text = "Once upon a time"
 input_ids = tokenizer.encode(input_text, return_tensors="pt")
 output = model.generate(input_ids, max_length=50, do_sample=True)
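The request body shown in the diff is plain JSON, so for illustration it could be sent with any HTTP client. Below is a minimal sketch using Python's `requests` library; the host and the exact `/generate/...` path are placeholders, since the full endpoint is truncated in the hunk header and not defined in this diff.

```python
# Minimal sketch of the POST request described in the README excerpt above.
# The host and the exact /generate/ path are placeholders -- the full
# endpoint is truncated in the diff and must come from your own deployment.
import requests

url = "http://localhost:8000/generate/v2/story/small"  # hypothetical endpoint
payload = {
    "input_text": (
        "[Ivan Ivanov, Lead Software Engineer, Superhero for Justice, "
        "Writing code, fixing issues, solving problems, Masculine, "
        "Long Hair, Adult]<|endoftext|>"
    )
}

response = requests.post(url, json=payload)  # send the body as JSON
response.raise_for_status()
print(response.json())  # response format depends on the serving code
```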
 
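The generation snippet in the updated README stops at `model.generate(...)`. For completeness, a short follow-up, assuming the same `tokenizer` and `output` objects from the snippets above, would decode the returned token IDs back into text:

```python
# Continuing from the README snippets: `tokenizer` is the GPT2Tokenizer and
# `output` is the tensor of token IDs returned by model.generate().
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)
```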