PyHARP is a companion package for HARP, an application that enables the seamless integration of machine learning models for audio into Digital Audio Workstations (DAWs). This repository provides a lightweight wrapper for embedding arbitrary Python audio processing code into Gradio endpoints that are accessible through HARP. In this way, HARP supports offline remote processing with algorithms or models that may be too resource-hungry to run on common hardware. HARP can be run as a standalone application or from within DAWs that support external sample editing, such as REAPER, Logic Pro X, or Ableton Live. Please see our main repository for more information and instructions on how to install and run HARP.
If you plan on running or debugging a PyHARP app locally, you will need to install pyharp:
```shell
git clone https://github.com/TEAMuP-dev/pyharp
pip install -e pyharp
cd pyharp
```

We provide several examples of how to create a PyHARP app under the examples/ directory. You can also find a list of models already deployed as PyHARP apps on our website.
In order to run an app, you will need to install its corresponding dependencies. For example, to install the dependencies for our pitch shifter example:

```shell
pip install -r examples/pitch_shifter/requirements.txt
```

The app can then be run with app.py:

```shell
python examples/pitch_shifter/app.py
```

This will create a local Gradio endpoint at the URL http://localhost:<PORT>, as well as a forwarded public Gradio endpoint at the URL https://<RANDOM_ID>.gradio.live/.
Below, you can see example command line output after running app.py. Both the local endpoint (local URL) and the forwarded endpoint (public URL) are shown:
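For reference, Gradio's console output when launching with share=True typically looks something like the following (the port and ID will differ on your machine):

```
* Running on local URL:  http://localhost:<PORT>
* Running on public URL: https://<RANDOM_ID>.gradio.live
```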
The Gradio app can be loaded in HARP as a custom path using either the local or public URL, as shown below.
Automatically generated Gradio endpoints are only available for 72 hours. If you'd like to keep the endpoint active and share it with other users, you can use HuggingFace Spaces (similar hosting services are also available) to host your PyHARP app indefinitely:
- Create a new HuggingFace Space
- Clone the initialized repository locally:
```shell
git clone https://huggingface.co/spaces/<USERNAME>/<SPACE_NAME>
```

- Add your files to the repository, commit, then push to the `main` branch:

```shell
git add .
git commit -m "initial commit"
git push -u origin main
```

Your PyHARP app will then begin running at https://huggingface.co/spaces/<USERNAME>/<SPACE_NAME>. The shorthand <USERNAME>/<SPACE_NAME> can also be used within HARP to reference the endpoint.
Here are a few tips and best-practices when dealing with HuggingFace Spaces:
- Spaces operate based off of the files in the `main` branch
- An access token may be required to push commits to HuggingFace Spaces
- A `README.md` file with metadata will be created automatically when a Space is initialized
  - This file also controls the Gradio version used for the Space
  - HARP may not work with the very latest or earlier versions of Gradio
  - We recommend using 5.28.0 at this time
- A `requirements.txt` file specifying all dependencies must be included for a Space to work properly
- A `.gitignore` file should be added to maintain repository orderliness (e.g., to ignore `src/_outputs`)
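For reference, the metadata header of an auto-generated Space `README.md` looks roughly like the following (the field values here are illustrative; `sdk_version` is the line that pins the Gradio version):

```yaml
---
title: Pitch Shifter
sdk: gradio
sdk_version: 5.28.0
app_file: app.py
---
```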
PyHARP apps can be accessed from within HARP through the local or forwarded URL corresponding to their active Gradio endpoints (see above), or through the URL of their dedicated hosting service (e.g., a HuggingFace Space), if applicable.
A model card defines various attributes of a PyHARP app and helps users understand its intended use. This information is also parsed within HARP and displayed when the model is selected.
The following model card corresponds to our pitch shifter example:
```python
from pyharp import ModelCard


model_card = ModelCard(
    name="Pitch Shifter",
    description="A pitch shifting example for HARP v3.",
    author="TEAMuP",
    tags=["example", "pitch shift", "v3"],
)
```

In PyHARP, arbitrary audio processing code is wrapped within a single function `process_fn` for use with Gradio. The function arguments and return values should match the input and output Gradio Components defined under the main Gradio code block (see below).
The following processing code corresponds to our pitch shifter example:
```python
from pyharp import load_audio, save_audio

import torchaudio
import torch


@torch.inference_mode()
def process_fn(
    input_audio_path: str,
    pitch_shift_amount: int
) -> str:
    pitch_shift_amount = int(pitch_shift_amount)

    sig = load_audio(input_audio_path)

    ps = torchaudio.transforms.PitchShift(
        sig.sample_rate,
        n_steps=pitch_shift_amount,
        bins_per_octave=12,
        n_fft=512
    )

    sig.audio_data = ps(sig.audio_data)

    output_audio_path = str(save_audio(sig))

    return output_audio_path
```

The function takes two arguments:
- `input_audio_path`: the filepath of the audio to process
- `pitch_shift_amount`: the amount to pitch shift by (in semitones)
and returns:
- `output_audio_path`: the filepath of the processed audio
Note that this code uses the audiotools library from Descript (installation instructions can be found here).
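As an aside, the `n_steps` and `bins_per_octave` parameters above map to a frequency scaling of 2^(n_steps / bins_per_octave). A quick sketch of that relationship (the helper name is ours, not part of pyharp):

```python
def semitones_to_ratio(n_steps: int, bins_per_octave: int = 12) -> float:
    """Frequency ratio produced by a pitch shift of n_steps semitones."""
    return 2.0 ** (n_steps / bins_per_octave)


print(semitones_to_ratio(12))           # one octave up  -> 2.0
print(semitones_to_ratio(-12))          # one octave down -> 0.5
print(round(semitones_to_ratio(7), 3))  # the example's default of +7 semitones
```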
The main Gradio code block for a PyHARP app consists of defining the input and output Gradio Components and launching the endpoint. Our build_endpoint function connects these components to the I/O of process_fn and extracts HARP-readable metadata from the model card and components to include in the endpoint. Currently, HARP supports the Slider, Checkbox, Number, Dropdown, and Textbox components as GUI controls.
The following endpoint code corresponds to our pitch shifter example:
```python
from pyharp import build_endpoint

import gradio as gr


# Build Gradio endpoint
with gr.Blocks() as demo:
    # Define input Gradio Components
    input_components = [
        gr.Audio(
            type="filepath",
            label="Input Audio A"
        ).harp_required(True),
        gr.Slider(
            minimum=-24,
            maximum=24,
            step=1,
            value=7,
            label="Pitch Shift (semitones)",
            info="Controls the amount of pitch shift in semitones"
        ),
    ]

    # Define output Gradio Components
    output_components = [
        gr.Audio(
            type="filepath",
            label="Output Audio"
        ).set_info("The pitch-shifted audio."),
    ]

    # Build a HARP-compatible endpoint
    app = build_endpoint(
        model_card=model_card,
        input_components=input_components,
        output_components=output_components,
        process_fn=process_fn,
    )

demo.queue().launch(share=True, show_error=False, pwa=True)  # see the third NOTE below
```

NOTE (1): All of the gr.Audio components MUST have type="filepath" in order to work with HARP.
NOTE (2): Make sure the order of the inputs / outputs matches the order of the arguments / return values in process_fn.
NOTE (3): In order to be able to cancel an ongoing processing job within HARP, queueing in Gradio needs to be enabled by calling demo.queue().
NOTE (4): Input Audio and File components can be registered as optional in HARP using .harp_required(False).
NOTE (5): All Audio and File components can be extended with the info attribute to define instructions to display in HARP using our set_info function.
In order to display output labels in HARP, you must define an output JSON component and return our custom LabelList object in process_fn:
```python
from pyharp import LabelList, AudioLabel, MidiLabel, OutputLabel, ...

import gradio as gr

...

@torch.inference_mode()
def process_fn(...):
    ...

    output_labels = LabelList()
    output_labels.labels.extend(
        [
            AudioLabel(
                t=0.0,  # seconds
                label="audio label",
                # The following are optional:
                duration=1.0,  # seconds
                description="long description",
                color=OutputLabel.rgb_color_to_int(255, 255, 255, 0.5),
                amplitude=0  # vertical positioning
            ),
            ...,
            MidiLabel(
                t=0.0,  # seconds
                label="MIDI label",
                # The following are optional:
                duration=1.0,  # seconds
                description="long description",
                link="https://github.com/TEAMuP-dev/pyharp",
                pitch=60  # vertical positioning
            ),
            ...
        ]
    )

    return ..., output_labels


with gr.Blocks() as demo:
    ...

    output_components = [
        ...,
        gr.JSON(label="Output Labels")
    ]

    ...
```

If you want to build an endpoint that utilizes a pre-trained model, we recommend the following:
- Load the model outside of `process_fn` so that it is only initialized once
- Store model weights within your app repository using Git Large File Storage
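A minimal sketch of the first tip, with a dummy loader standing in for whatever framework call actually restores your weights (the names here are illustrative, not part of pyharp):

```python
def load_model():
    """Hypothetical loader; replace with e.g. a torch.load(...) call."""
    print("Loading model weights...")
    return {"weights": "..."}  # stands in for the loaded model


# Module-level load: runs once when app.py starts, not on every request.
MODEL = load_model()


def process_fn(input_audio_path: str) -> str:
    # Reuse the already-initialized MODEL instead of reloading it here.
    _ = MODEL
    return input_audio_path
```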

