Description
Related:
- YouTube Live Streaming API - minimal working example #202 (focused on getting the API set up and pulling stream URL automatically)
- Download self-owned YouTube videos #212
- Post-processing videos (still frame removal and speedup) #223
Based on discussion with @Neil-YL:
How should the following tasks be handled (e.g., by the same manager or by different managers):
- Close the current stream and start a new one every 8 hours
- Log the recent status of each stream to a CSV every 10-15 minutes
- Report the latest download link to a database
Or is it better to separate these into different scripts or services?
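As a concrete starting point for task 2, here is a minimal sketch of a status logger that appends one CSV row per stream; `get_stream_status` is a hypothetical helper standing in for a call to the YouTube Live Streaming API (e.g., `liveStreams.list`):

```python
import csv
import time
from pathlib import Path


def get_stream_status(stream_id: str) -> dict:
    """Hypothetical helper: query the YouTube Live Streaming API for one stream.

    A real implementation would call liveStreams.list and return fields such as
    streamStatus and healthStatus; this stub returns fixed values for illustration.
    """
    return {"stream_id": stream_id, "status": "active", "health": "good"}


def log_statuses(stream_ids: list[str], csv_path: Path) -> None:
    """Append one timestamped status row per stream to the CSV."""
    new_file = not csv_path.exists()
    with csv_path.open("a", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=["timestamp", "stream_id", "status", "health"]
        )
        if new_file:
            writer.writeheader()
        for stream_id in stream_ids:
            row = get_stream_status(stream_id)
            writer.writerow({"timestamp": time.time(), **row})


# A scheduler (cron, a Prefect deployment, etc.) would call log_statuses
# every 10-15 minutes; the function itself stays schedule-agnostic.
```

Keeping the logger schedule-agnostic makes it easy to run it either inside a combined manager or as its own service.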
Maybe the monitor (task 2) could be a separate service. The auto-restart and download-link reporting could be a schedule-based service (e.g., a scheduled task with Prefect) that triggers the workflow: first retrieve the download link for the latest completed video (not the currently active stream), then close the current broadcast and create a new one.
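The scheduled rotation workflow above (retrieve the link for the latest completed video, report it, then end and recreate the broadcast) could be sketched as follows; every helper here is hypothetical and would wrap the YouTube API and the database in a real implementation:

```python
def latest_completed_download_link(device_name: str) -> str:
    """Hypothetical: return the download link of the most recent *completed*
    video for this device (not the currently active stream)."""
    return f"https://example.com/download/{device_name}/latest"


def report_link_to_db(device_name: str, link: str, db: dict) -> None:
    """Hypothetical: record the link (e.g., in MongoDB); a dict stands in here."""
    db.setdefault(device_name, []).append(link)


def end_broadcast(device_name: str) -> None:
    """Hypothetical: transition the current broadcast to 'complete'."""
    ...


def create_broadcast(device_name: str) -> None:
    """Hypothetical: create and bind a fresh broadcast for this device."""
    ...


def rotate_stream(device_name: str, db: dict) -> None:
    """One rotation cycle, in the order discussed above: report first,
    then close the old broadcast, then open the new one."""
    link = latest_completed_download_link(device_name)
    report_link_to_db(device_name, link, db)
    end_broadcast(device_name)
    create_broadcast(device_name)
```

A Prefect deployment could then run `rotate_stream` on an 8-hour schedule, one device at a time.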
Another idea is to have a monitor create a new stream whenever a device_name is not in the active stream list, plus an 8-hr schedule to end all active streams.
> 8-hr schedule to end all active streams
This one could also restart the stream to minimize downtime (i.e., loop through the streams one at a time and, for each, end and then restart it), but this might cause issues if the monitor and this schedule both try to restart the same stream.
It could be a single service that handles both the end/restart and the monitoring: while ending and restarting a stream, it would step out of the checking loop, and return to the monitor loop once the restart completes.
There also needs to be a service that auto-downloads the videos and post-processes them; this is high-bandwidth, so it probably shouldn't be mixed with the scripts that ensure uptime. These could be long-running processes on the order of 1+ hrs per video, which GPUs would significantly accelerate. If there are only a few streams, a single service might be able to handle this; if there are 50+ streams, multiple machines would be needed for the processing. That might justify some kind of trigger-based, on-demand mechanism (instead of having a few dedicated machines doing the processing). Even then, there would still need to be a schedule and monitor to see if new videos are available in a playlist (or if new entries are available in MongoDB).
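The "schedule and monitor for new videos" piece could be a simple high-water-mark poll: remember the insertion time of the last entry seen, and dispatch processing for anything newer. In the sketch below a plain list of dicts stands in for the MongoDB collection (the pymongo equivalent of the filter would be `collection.find({"inserted_at": {"$gt": last_seen}})`), and dispatching is a stand-in for launching a processing job:

```python
def find_new_videos(collection: list[dict], last_seen: float) -> list[dict]:
    """Return entries inserted after last_seen.

    With pymongo this would be:
        collection.find({"inserted_at": {"$gt": last_seen}})
    """
    return [doc for doc in collection if doc["inserted_at"] > last_seen]


def poll_once(collection: list[dict], last_seen: float, dispatched: list[str]) -> float:
    """One polling pass: dispatch each new video and advance the high-water mark."""
    new = find_new_videos(collection, last_seen)
    for doc in new:
        # Stand-in for triggering a download/post-processing job
        # (e.g., submitting a Prefect flow run or a queue message).
        dispatched.append(doc["video_id"])
    return max((d["inserted_at"] for d in new), default=last_seen)
```

Because the high-water mark is the only state, the poller can restart cheaply and won't re-dispatch videos it has already seen.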
Sterling Baird (42 minutes ago):
@Marvin how do you handle on-demand spinup of resources based on monitoring some kind of condition? (e.g., new entry added to a MongoDB database)
Marvin (APP, 41 minutes ago):
To handle on-demand resource spinup based on a condition like a new entry in a MongoDB database using Prefect, you can follow these general steps:
- Monitoring and Triggering: Use Prefect's event-driven triggers to monitor conditions. For instance, you can create a custom trigger that listens for specific events such as changes in your MongoDB database. Prefect allows you to define custom events and triggers that can start flows based on these events. More details can be found in the Prefect custom triggers documentation.
- Infrastructure Provisioning: Prefect can automatically provision the necessary infrastructure for running your flows. This is done using Prefect's CLI with the --provision-infra flag, which allows for serverless resource provisioning. This requires proper authentication and setup with your cloud provider, such as AWS. You can find more information in the Prefect infrastructure examples documentation.
- Flow Definition: Define your flow using Prefect's flow decorator, which is part of the core API. Prefect supports imperative programming styles, which should suit your preference.
For more specific implementations, you may need to integrate with Prefect's Python API to define tasks that interact with MongoDB and trigger flows based on the results.
If you have more specific requirements or need further clarification, feel free to ask!
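Stripped of Prefect specifics, the monitoring-and-triggering pattern in the answer above reduces to "watch a condition, and when it fires, launch a flow." A generic pure-Python sketch of that dispatch loop (a real Prefect setup would instead register a custom trigger plus a deployment, and MongoDB change streams could replace polling):

```python
from typing import Callable


def watch_and_trigger(
    check: Callable[[], bool],
    launch: Callable[[], None],
    max_polls: int,
) -> int:
    """Poll `check` up to max_polls times; call `launch` each time it is true.

    Returns how many times the flow was launched. A production version would
    poll on a schedule (or subscribe to events) rather than loop, and `launch`
    would submit a flow run to on-demand infrastructure.
    """
    launches = 0
    for _ in range(max_polls):
        if check():
            launch()
            launches += 1
    return launches
```

The point of the separation is that `check` (the condition, e.g. "new MongoDB entry exists") and `launch` (the resource spinup) evolve independently, which mirrors the trigger/infrastructure split in the answer above.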