pronics2004 and bhatta1 committed on
Commit 1236f52
1 Parent(s): 4021a5a

Update README.md (#1)


- Update README.md (84c428863a567f76413dac7edcf8671501ab6aae)


Co-authored-by: Bishwaranjan Bhattacharjee <[email protected]>

Files changed (1)
  1. README.md +4 -0
README.md CHANGED
@@ -42,6 +42,10 @@ with torch.no_grad():
     probability = torch.softmax(logits, dim=1).detach().numpy()[:,1].tolist() # Probability of toxicity.
 
 ```
+
+## Cookbook on Model Usage as a Guardrail
+This recipe illustrates how to apply the model to a prompt, to the model's output, or to both. This is an example of a "guardrail", commonly used in generative AI applications for safety.
+[Guardrail Cookbook](https://github.com/ibm-granite-community/granite-code-cookbook/blob/main/recipes/Guard-Rails/HAP.ipynb)
 ## Performance Comparison with Other Models
 The model outperforms most popular models with significantly lower inference latency. If a better F1 score is required, please refer to IBM's 12-layer model [here](https://huggingface.co/ibm-granite/granite-guardian-hap-125m).
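The softmax step shown in the diff context converts the classifier's logits into a toxicity probability. As a minimal sketch of the guardrail pattern the cookbook describes, the thresholding step could look like the following (the `is_toxic` helper, the dummy logits, and the 0.5 threshold are illustrative assumptions, not part of the model card):

```python
import torch

# Hypothetical guardrail check: given classifier logits of shape
# (batch, 2), where column 1 is the "toxic" class as in the README
# snippet, flag any input whose toxicity probability exceeds a threshold.
def is_toxic(logits: torch.Tensor, threshold: float = 0.5) -> list:
    # Same softmax-over-classes step as in the README snippet.
    probability = torch.softmax(logits, dim=1)[:, 1].tolist()  # P(toxic) per input
    return [p > threshold for p in probability]

# Dummy logits standing in for real model output: the first row strongly
# favors the non-toxic class, the second strongly favors the toxic class.
logits = torch.tensor([[4.0, -4.0], [-4.0, 4.0]])
print(is_toxic(logits))  # → [False, True]
```

In a guardrail deployment, this check would typically run on the user prompt before generation, on the model output after generation, or on both, blocking or rewriting flagged text.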