proposal: daemon process #769
Conversation
Signed-off-by: Keming <[email protected]>
This is related to the following:
Where will the stdout logs be?
Some ideas:
Manually redirect to files, which we record in the documents. LGTM
Do we need to store the logs in files? I think we can just print to STDOUT and show them in
@gaocegege Since there are multiple processes running at the same time, it's hard to put everything in the same stdout.
## API

```python
runtime.daemon(commands=[
```
Then how do we support services like TensorBoard with the help of this feature?
We need to combine multiple features:
- run a daemon tensorboard process (this proposal)
- expose the port to host
- specify the log dir mount
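The three features listed above could combine into an envd build function like the sketch below. This is an illustration only: the `runtime_daemon`, `expose`, and `mount` names are stand-in stubs for the envd primitives discussed in this thread, and the real API shape may differ.

```python
# Sketch (assumption): stub envd-style primitives to show how the three
# features would combine for TensorBoard. The real envd API may differ.
calls = []

def runtime_daemon(commands):        # stands in for runtime.daemon (this proposal)
    calls.append(("daemon", commands))

def expose(local_port, host_port, svc):  # stands in for the port-expose feature
    calls.append(("expose", local_port, host_port, svc))

def mount(host_path, envd_path):     # stands in for the log-dir mount feature
    calls.append(("mount", host_path, envd_path))

def tensorboard(log_dir="/home/envd/logs"):
    # 1. run TensorBoard as a daemon process
    runtime_daemon(commands=["tensorboard --logdir " + log_dir])
    # 2. expose its port to the host
    expose(local_port=6006, host_port=6006, svc="tensorboard")
    # 3. mount the log dir so training jobs can write into it
    mount(host_path="./logs", envd_path=log_dir)

tensorboard()
```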
Currently, the stdout looks like:
SSHD is also a daemon service. One stdout makes it hard to track the logs. Say a user launches an envd container for a training job with TensorBoard also running; envd logs should only show the stdout of the
## Goals
* able to run multiple daemon processes controlled by `tini`
What is the implementation plan? Any architectural considerations that we should discuss here?
- plan: we can add the commands to the existing `tini` entrypoint (lines 166 to 176 in b9f0af8):

```go
ep := []string{
	"tini",
	"--",
	"bash",
	"-c",
}
template := `set -e
/var/envd/bin/envd-ssh --authorized-keys %s --port %d --shell %s &
%s
wait -n`
```

- I'd like to discuss whether this is a general approach (it needs to work with other features like `mount` and `expose`) to solving issues like feat(lang): Support TensorBoard #527, or whether we should do it another way.
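To make the plan concrete, here is a rough sketch (in Python, purely for illustration; the real code is Go) of how extra daemon commands could be appended into the tini-wrapped entrypoint template quoted in this thread. The `build_entrypoint` helper and its argument names are hypothetical.

```python
# Sketch: append user daemon commands to the tini entrypoint template
# quoted from the envd source in this thread. Names here are hypothetical.
TEMPLATE = """set -e
/var/envd/bin/envd-ssh --authorized-keys {keys} --port {port} --shell {shell} &
{daemons}
wait -n"""

def build_entrypoint(keys, port, shell, daemon_commands):
    # Each daemon is backgrounded with '&' so `wait -n` returns when any one exits.
    daemons = "\n".join(cmd + " &" for cmd in daemon_commands)
    script = TEMPLATE.format(keys=keys, port=port, shell=shell, daemons=daemons)
    return ["tini", "--", "bash", "-c", script]

ep = build_entrypoint("/home/envd/.ssh/authorized_keys", 2222, "bash",
                      ["jupyter-lab", "tensorboard --logdir /logs"])
```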
The stdin/stdout/stderr question still remains: which file descriptors does the supervisor process pass to its daemon subprocesses as fd 0, 1, and 2? We could force them all into one file or split them out.
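One pragmatic answer to the fd question, sketched in Python rather than the Go supervisor (an assumption, not the proposed implementation): the parent opens a per-daemon log file and hands it to the child as fd 1 and 2, so each daemon's output stays isolated from the shared stdout.

```python
import os
import subprocess
import tempfile

# Sketch: give each daemon child its own log file as stdout/stderr,
# instead of sharing the supervisor's stdout.
def spawn_daemon(cmd, log_path):
    log = open(log_path, "ab", buffering=0)  # unbuffered append, binary mode
    # stdin from /dev/null; fd 1 and 2 both point at the log file.
    return subprocess.Popen(
        cmd,
        stdin=subprocess.DEVNULL,
        stdout=log,
        stderr=subprocess.STDOUT,
    )

log_path = os.path.join(tempfile.mkdtemp(), "daemon.log")
p = spawn_daemon(["sh", "-c", "echo hello from daemon"], log_path)
p.wait()
print(open(log_path).read().strip())  # hello from daemon
```

Splitting stdout and stderr into two files would just mean passing a second open file as `stderr` instead of `subprocess.STDOUT`.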
Here is an example to demonstrate how to use it:

```python
def jupyter_lab():
    expose(local_port=8888, host_port=8888, svc="jupyter")
    runtime.daemon(commands=["jupyter-lab"])

def build():
    base(os="ubuntu20.04", language="python")
    install.pip_packages(["numpy", "jupyterlab"])
    jupyter_lab()
```
SGTM
LGTM.
LGTM
LGTM. Is it better to put `expose` under the `runtime` namespace as well?
Agree. BTW,
I am merging this to move forward. But feel free to comment if there is any problem.
Signed-off-by: Keming [email protected]
cc @aseaday @terrytangyuan