@@ -11,187 +11,63 @@ The worker uses the [inswapper_128.onnx](
 https://huggingface.co/deepinsight/inswapper/resolve/main/inswapper_128.onnx)
 model by [InsightFace](https://insightface.ai/).
 
-## Local Testing (not required if you don't want to test locally)
-
-### Clone the repo, create a venv and install the requirements
-
-```bash
-git clone https://github.com/ashleykleynhans/runpod-worker-inswapper.git
-cd runpod-worker-inswapper
-python3 -m venv venv
-source venv/bin/activate
-pip3 install -r requirements.txt
-```
-
-### Start the local RunPod Handler API
-
-Use the `--rp_serve_api` command line argument to serve the API locally.
-
-```bash
-python3 -u rp_handler.py --rp_serve_api
-```
-
-**NOTE:** You need to keep the RunPod Handler API running in order to
-run the tests, so open a new terminal window to run the tests.
-
-### Set your test data files
-
-You can either overwrite the `data/src.png` and `data/target.png` image
-files with your own source and target files, or alternatively, you can
-edit `tests/test_local_endpoint.py` to reference source and
-target images somewhere else on your system.
-
-### Run a local test
-
-1. Ensure that the RunPod Handler API is still running.
-2. Go to the directory containing this worker code, activate the venv,
-   change directory to the `tests` directory and run the
-   `test_local_endpoint.py` script:
-   ```bash
-   cd runpod-worker-inswapper
-   source venv/bin/activate
-   cd tests
-   python3 test_local_endpoint.py
-   ```
-3. This will display the HTTP status code and the filename
-   of the output image, for example:
-   ```
-   Status code: 200
-   Saving image: 792a7e9f-9c36-4d35-b408-0d45d8e2bbcb.jpg
-   ```
-
-You can then open the output image (in this case
-`792a7e9f-9c36-4d35-b408-0d45d8e2bbcb.jpg`) to view the
-results of the face swap.
-
-## Building the Worker
-
-### Option 1: Network Volume
-
-This will store your application on a RunPod Network Volume and
-build a lightweight Docker image that runs everything
-from the Network Volume without installing the application
-inside the Docker image.
-
-1. [Create a RunPod Account](https://runpod.io?ref=2xxro4sy).
-2. Create a [RunPod Network Volume](https://www.runpod.io/console/user/storage).
-3. Attach the Network Volume to a Secure Cloud [GPU pod](https://www.runpod.io/console/gpu-secure-cloud).
-4. Select a lightweight template such as RunPod Pytorch.
-5. Deploy the GPU Cloud pod.
-6. Once the pod is up, open a Terminal and install the required dependencies:
-   ```bash
-   cd /workspace
-   git clone https://github.com/ashleykleynhans/runpod-worker-inswapper.git
-   cd runpod-worker-inswapper
-   python3 -m venv venv
-   source venv/bin/activate
-   pip3 install -r requirements.txt
-   mkdir checkpoints
-   wget -O ./checkpoints/inswapper_128.onnx https://huggingface.co/deepinsight/inswapper/resolve/main/inswapper_128.onnx
-   apt update
-   apt -y install git-lfs
-   git lfs install
-   git clone https://huggingface.co/spaces/sczhou/CodeFormer
-   ```
-7. Edit the `create_test_json.py` file and ensure that you set `SOURCE_IMAGE` to
-   a valid source image (you can upload the image to your pod using
-   [runpodctl](https://github.com/runpod/runpodctl/releases)).
-8. Create the `test_input.json` file by running the `create_test_json.py` script:
-   ```bash
-   python3 create_test_json.py
-   ```
-9. Run an inference on the `test_input.json` input so that the models can be cached on
-   your Network Volume, which will dramatically reduce cold start times for RunPod Serverless:
-   ```bash
-   python3 -u rp_handler.py
-   ```
-10. Sign up for a Docker Hub account if you don't already have one.
-11. Build the Docker image and push it to Docker Hub:
-    ```bash
-    docker build -t dockerhub-username/runpod-worker-inswapper:1.0.0 -f Dockerfile.Network_Volume .
-    docker login
-    docker push dockerhub-username/runpod-worker-inswapper:1.0.0
-    ```
-
-### Option 2: Standalone
-
-This is the simpler option. No Network Volume is required.
-The entire application is stored within the Docker image,
-which obviously results in a bulkier Docker image.
-
-```bash
-docker build -t dockerhub-username/runpod-worker-inswapper:1.0.0 -f Dockerfile.Standalone .
-docker login
-docker push dockerhub-username/runpod-worker-inswapper:1.0.0
-```
-
-## Dockerfile
-
-There are 2 different Dockerfile configurations:
-
-1. Network_Volume - See Option 1 above.
-2. Standalone - See Option 2 above (no Network Volume is required for this option).
-
-The worker is built using one of the two Dockerfile configurations
-depending on your specific requirements.
-
-## API
-
-The worker provides an API for inference. The API payload looks like this:
-
-```json
-{
-  "input": {
-    "source_image": "base64 encoded source image content",
-    "target_image": "base64 encoded target image content"
-  }
-}
-```
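
A payload in the format above can be assembled with a few lines of Python. This is an illustrative sketch, not part of the worker: the helper names and file paths are made up for the example, and only the `input`/`source_image`/`target_image` shape comes from the payload description.

```python
import base64


def encode_image(path: str) -> str:
    # Read an image file and return its base64-encoded content as text.
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")


def build_payload(source_path: str, target_path: str) -> dict:
    # Assemble the request body in the shape the worker expects:
    # an "input" object holding both base64-encoded images.
    return {
        "input": {
            "source_image": encode_image(source_path),
            "target_image": encode_image(target_path),
        }
    }
```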
+## Testing
+
+1. [Local Testing](docs/testing/local.md)
+2. [RunPod Testing](docs/testing/runpod.md)
+
+## Building the Docker image that will be used by the Serverless Worker
+
+There are two options:
+
+1. [Network Volume](docs/building/with-network-volume.md)
+2. [Standalone](docs/building/without-network-volume.md) (without Network Volume)
+
+## RunPod API Endpoint
+
+You can send requests to your RunPod API Endpoint using the `/run`
+or `/runsync` endpoints.
+
+Requests sent to the `/run` endpoint are handled asynchronously
+and are non-blocking operations. Your first response status will always
+be `IN_QUEUE`. You need to send subsequent requests to the `/status`
+endpoint to get further status updates, and eventually the `COMPLETED`
+status will be returned if your request is successful.
+
+Requests sent to the `/runsync` endpoint are handled synchronously
+and are blocking operations. If they are processed by a worker within
+90 seconds, the result is returned in the response, but if
+the processing time exceeds 90 seconds, you will need to handle the
+response and request status updates from the `/status` endpoint until
+you receive the `COMPLETED` status, which indicates that your request
+was successful.
+
+### RunPod API Examples
+
+* [Swap a face in a target image that has a single face](docs/api/single-face-target.md)
+* [Swap all the faces in the target image with the source face](docs/api/all-faces.md)
+* [Swap a specific face in the target image with the source face](docs/api/specific-face.md)
+
+### Endpoint Status Codes
+
+| Status      | Description                                                                                                                    |
+|-------------|--------------------------------------------------------------------------------------------------------------------------------|
+| IN_QUEUE    | Request is in the queue waiting to be picked up by a worker. You can call the `/status` endpoint to check for status updates.  |
+| IN_PROGRESS | Request is currently being processed by a worker. You can call the `/status` endpoint to check for status updates.             |
+| FAILED      | The request failed, most likely due to encountering an error.                                                                  |
+| CANCELLED   | The request was cancelled. This usually happens when you call the `/cancel` endpoint to cancel the request.                    |
+| TIMED_OUT   | The request timed out. This usually happens when your handler throws an exception that does not return a valid response.       |
+| COMPLETED   | The request completed successfully and the output is available in the `output` field of the response.                          |
 
 ## Serverless Handler
 
 The serverless handler (`rp_handler.py`) is a Python script that handles
-inference requests. It defines a function handler(event) that takes an
-inference request, runs the inference using the [inswapper](
+the API requests to your Endpoint using the [runpod](https://github.com/runpod/runpod-python)
+Python library. It defines a function `handler(event)` that takes an
+API request (event), runs the inference using the [inswapper](
 https://huggingface.co/deepinsight/inswapper/tree/main) model (and
-CodeFormer where applicable), and returns the output as a JSON response in
-the following format:
-
-```json
-{
-  "output": {
-    "status": "ok",
-    "image": "base64 encoded output image"
-  }
-}
-```
-
-## Testing your RunPod Endpoint
-
-### Configure your RunPod Credentials
-
-1. Copy the `.env.example` file to `.env`:
-   ```bash
-   cd tests
-   cp .env.example .env
-   ```
-2. Edit the `.env` file and add your RunPod API key to
-   `RUNPOD_API_KEY` and your RunPod Endpoint ID to
-   `RUNPOD_ENDPOINT_ID`.
-3. Run the test script:
-   ```bash
-   python3 test_runpod_endpoint.py
-   ```
-4. This will display the HTTP status code and the filename
-   of the output image, for example:
-   ```
-   Status code: 200
-   Saving image: 792a7e9f-9c36-4d35-b408-0d45d8e2bbcb.jpg
-   ```
-
-You can then open the output image (in this case
-`792a7e9f-9c36-4d35-b408-0d45d8e2bbcb.jpg`) to view the
-results of the face swap.
+CodeFormer where applicable) with the `input`, and returns the `output`
+in the JSON response.
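
The overall shape of such a handler can be sketched as follows. This is a simplified illustration, not the worker's real implementation: the validation and placeholder return value are made up for the example, and the actual inswapper/CodeFormer inference is elided.

```python
def handler(event):
    # "event" carries the API request; "input" holds the request payload
    # (the base64-encoded source and target images).
    job_input = event.get("input", {})
    source = job_input.get("source_image")
    target = job_input.get("target_image")
    if not source or not target:
        return {"error": "source_image and target_image are required"}
    # ... decode the images and run the inswapper (and CodeFormer)
    # inference here ...
    output_image = "base64 encoded output image"  # placeholder
    return {"status": "ok", "image": output_image}


# In the real worker the handler is registered with the runpod library,
# which serves it as a Serverless endpoint:
# import runpod
# runpod.serverless.start({"handler": handler})
```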
 
 ## Acknowledgements
 