Commit f2e80c9 by Luke Stanley (1 parent: 83e4d57): Document serverless setup

serverless.md ADDED (+56 -0)

Fast serverless GPU inference with RunPod
==============================

This partly GPT-4 generated document explains the integration of Runpod with Docker, including testing the Runpod Dockerfile with Docker Compose, building and pushing the image to Docker Hub, and how `app.py` makes use of it. I skimmed it and added stuff to it, as a note to myself and others.

## Testing with Docker Compose

To test the Runpod Dockerfile, you can use Docker Compose, which simplifies running multi-container Docker applications. Here's how you can test it:

1. Ensure you have Docker and Docker Compose installed on your system.
2. Navigate to the directory containing the `docker-compose.yml` file.
3. Run the following command to build and start the container:
```
docker-compose up --build
```
4. The above command builds the image as defined in `runpod.dockerfile` and starts a container with the configuration specified in `docker-compose.yml`. It automatically runs a test that matches the format expected by the `llm_stream_serverless` client (in `utils.py`), though without the network layer in play; a sketch of that request shape follows this list.
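
For context on that format: a RunPod serverless worker is a small Python handler that receives a JSON payload wrapped in an `input` object. The sketch below is only a guess at the shape involved; the field names inside `input` and the handler body are illustrative assumptions, not the project's actual worker code or the exact format `llm_stream_serverless` sends.

```
# Hedged sketch of a RunPod serverless handler and the payload it expects.
# The "input" wrapper is RunPod's convention; the inner fields are assumptions.
import runpod  # pip install runpod

def handler(event):
    # event looks roughly like: {"id": "...", "input": {...}}
    prompt = event["input"].get("prompt", "")            # assumed field name
    max_tokens = event["input"].get("max_tokens", 256)   # assumed field name
    # A real worker would run llama-cpp-python inference here.
    return {"output": prompt[:max_tokens]}

if __name__ == "__main__":
    # Inside the container this waits for jobs; locally the SDK can also
    # run a one-off test, e.g. via a --test_input argument.
    runpod.serverless.start({"handler": handler})
```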


## Direct testing with Docker, without Docker Compose

Something like this worked for me:

```
sudo docker run --gpus all -it -v "$(pwd)/.cache:/runpod-volume/.cache/huggingface/" lukestanley/test:translate2 bash
```

Note the cache mount. This saves re-downloading the LLMs!


## Building and Pushing to Docker Hub

After testing and ensuring that everything works as expected, you can build the Docker image and push it to Docker Hub for deployment. Here are the steps:

1. Log in to Docker Hub from your command line using `docker login --username [yourusername]`.
2. Build the Docker image with a tag:
```
docker build -t yourusername/yourimagename:tag -f runpod.dockerfile .
```
3. Once the image is built, push it to Docker Hub:
```
docker push yourusername/yourimagename:tag
```
4. Replace `yourusername`, `yourimagename`, and `tag` with your Docker Hub username, the name you want to give the image, and the tag, respectively.

## Runpod provisioning

You'll need an account on Runpod with credit.
You'll need a serverless GPU endpoint set up using your Docker image.

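Once an endpoint exists, it can be called over HTTP. The snippet below is a rough illustration of that client side using `requests`; the endpoint ID, the API key variable, and the fields inside `input` are placeholders, and the project's real client logic is `llm_stream_serverless` in `utils.py`.

```
# Hedged sketch: calling a provisioned RunPod serverless endpoint over HTTP.
# ENDPOINT_ID, the API key variable, and the "input" fields are placeholders.
import os
import requests

ENDPOINT_ID = "your-endpoint-id"
API_KEY = os.environ["RUNPOD_API_KEY"]

response = requests.post(
    f"https://api.runpod.ai/v2/{ENDPOINT_ID}/runsync",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"input": {"prompt": "Hello"}},  # assumed field name
    timeout=600,
)
print(response.json())
```
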
Runpod has a Flashboot feature that seems like Firecracker with GPU support; it might be using Cloud Hypervisor under the hood, since Firecracker currently has no GPU support. Fly.io also has something similar, built on Cloud Hypervisor.

## Runpod Integration in `app.py`

The `app.py` file is a Gradio interface that makes use of the Runpod integration to perform inference. It checks for the presence of a GPU and installs the appropriate version of `llama-cpp-python`. Depending on the environment variable `LLM_WORKER`, it uses the Runpod serverless API, an HTTP server, or a model loaded into memory for inference.

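A minimal sketch of what that selection might look like is below; the accepted `LLM_WORKER` values and the worker functions are illustrative assumptions, not necessarily the names the repository uses.

```
# Hypothetical sketch of how app.py might pick a backend from LLM_WORKER.
# The accepted values and the worker functions are illustrative assumptions.
import os

LLM_WORKER = os.environ.get("LLM_WORKER", "in_memory")

def _serverless_worker(prompt: str) -> str:
    return "result from the RunPod serverless endpoint"        # placeholder

def _http_worker(prompt: str) -> str:
    return "result from a local llama.cpp HTTP server"         # placeholder

def _in_memory_worker(prompt: str) -> str:
    return "result from llama-cpp-python loaded in this process"  # placeholder

def generate(prompt: str) -> str:
    if LLM_WORKER == "serverless":
        return _serverless_worker(prompt)
    if LLM_WORKER == "http":
        return _http_worker(prompt)
    return _in_memory_worker(prompt)

print(generate("Hello"))
```
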
The `greet` function in `app.py` calls `improvement_loop` from the `chill` module which, based on an environment variable, uses the Runpod worker to process the input text and generate improved text based on the model's output.

The Gradio interface is then launched with `demo.launch()`, making the application accessible via a web interface, which can be shared publicly.

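Putting those two paragraphs together, a minimal sketch of the Gradio wiring might look like this; the `chill` import is the project's module mentioned above, while the interface arguments and the exact `greet` signature are assumptions.

```
# Hedged sketch of the Gradio wiring around greet and improvement_loop.
# The interface arguments and greet's exact signature are assumptions.
import gradio as gr
from chill import improvement_loop  # project module mentioned above

def greet(text: str) -> str:
    # Delegates to chill; which worker runs underneath is decided by LLM_WORKER.
    return str(improvement_loop(text))

demo = gr.Interface(fn=greet, inputs="text", outputs="text")

if __name__ == "__main__":
    demo.launch()  # pass share=True for a publicly shareable link
```
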
Note: Ensure that the necessary environment variables such as `LLM_WORKER`, `REPO_ID`, and `MODEL_FILE` are set correctly for the integration to work properly.
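
As a hedged illustration of what `REPO_ID` and `MODEL_FILE` typically point at, the snippet below resolves them to a local model file from the Hugging Face Hub; whether the project fetches the model this way is an assumption, and the default values shown are placeholders.

```
# Hedged sketch: resolving REPO_ID and MODEL_FILE to a local model path.
# The defaults are placeholders; how the project actually fetches the model
# (at runtime vs. baked into the image) is not confirmed here.
import os
from huggingface_hub import hf_hub_download

repo_id = os.environ.get("REPO_ID", "someuser/some-gguf-model")
model_file = os.environ.get("MODEL_FILE", "model.Q4_K_M.gguf")

model_path = hf_hub_download(repo_id=repo_id, filename=model_file)
print(model_path)  # cached under ~/.cache/huggingface by default, hence the cache mount above
```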