fix(run): Ensure 'run' subcommand works with host proxy settings. #1430
Conversation
Reviewer's Guide
This PR updates engine.py so that host proxy settings no longer affect the internal client when the 'run' subcommand is used with '--network none': it injects NO_PROXY values for the loopback addresses and unsets the other proxy variables inside the container. It also enhances the pull logic to pass the debug flag and to surface a detailed notice on failures. A sketch of the injection follows the table below.
| Change | Details | Files |
|---|---|---|
| Introduce _add_proxy_settings_for_run to isolate the container from host proxies | | ramalama/engine.py |
| Enhance add_pull_newer to include debug output on pull failure | | ramalama/engine.py |
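A minimal sketch of the env injection the guide describes, assuming podman (illustrative only, not the PR's actual implementation):

    # Build the container command line so loopback traffic bypasses any proxy:
    # NO_PROXY covers the loopback addresses, and the remaining proxy variables
    # are blanked inside the container.
    exec_args = ["podman", "run", "--network", "none"]
    no_proxy = "localhost,127.0.0.1"
    for var in ("NO_PROXY", "no_proxy"):
        exec_args += ["--env", f"{var}={no_proxy}"]
    for var in ("http_proxy", "https_proxy", "HTTP_PROXY", "HTTPS_PROXY"):
        exec_args += ["--env", f"{var}="]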
Hey @melodyliu1986 - I've reviewed your changes - here's some feedback:
- The proxy-clearing logic currently applies to all run invocations; consider scoping it to only when --network none is specified, so you don't unintentionally strip proxies in other network modes (see the sketch after this list).
- It might be helpful to extract the NO_PROXY/no_proxy merging and environment-setting logic into a standalone utility function for better reuse and easier testing.
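A sketch of the scoping check suggested in the first point (the attribute names are assumptions, not taken from the PR):

    def _should_isolate_proxies(args) -> bool:
        # Only strip proxies for `run` when the user asked for an isolated
        # network; other network modes keep the host proxy settings.
        return (
            getattr(args, "subcommand", None) == "run"
            and getattr(args, "network", None) == "none"
        )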
Here's what I looked at during the review
- 🟢 General issues: all looks good
- 🟢 Security: all looks good
- 🟢 Testing: all looks good
- 🟡 Complexity: 1 issue found
- 🟢 Documentation: all looks good
ramalama/engine.py (Outdated)

            """Helper to add arguments to the execution list."""
            self.exec_args.extend(newargs)

        def _add_proxy_settings_for_run(self):
issue (complexity): Consider refactoring proxy and environment variable handling into helper methods and simplifying error handling to reduce repetition and centralize logic.
    # 1. Factor NO_PROXY parsing into a small helper and an add_env convenience:
    def _get_no_proxy_str(self):
        essentials = {"localhost", "127.0.0.1"}
        parts = {
            p.strip()
            for var in ("NO_PROXY", "no_proxy")
            for p in os.getenv(var, "").split(",")
            if p.strip()
        }
        return ",".join(sorted(parts | essentials))

    def add_env(self, key, value):
        self.add(["--env", f"{key}={value}"])

    # 2. Collapse the big _add_proxy_settings_for_run into a few lines:
    def _add_proxy_settings_for_run(self):
        if getattr(self.args, "subcommand", None) != "run":
            return
        no_proxy = self._get_no_proxy_str()
        if no_proxy:
            for var in ("NO_PROXY", "no_proxy"):
                self.add_env(var, no_proxy)
        for var in ("http_proxy", "https_proxy", "HTTP_PROXY", "HTTPS_PROXY"):
            self.add_env(var, "")

    # 3. Simplify add_pull_newer by letting run_cmd handle debug-only errors:
    def add_pull_newer(self):
        if not self.args.dryrun and self.use_docker and self.args.pull == "newer":
            if not self.args.quiet:
                print(f"Checking for newer image {self.args.image}")
            run_cmd(
                [self.args.engine, "pull", "-q", self.args.image],
                ignore_all=True,
                debug=self.args.debug,
            )
        else:
            self.exec_args += ["--pull", self.args.pull]

    # 4. (If needed) In run_cmd, consolidate debug-only error output:
    def run_cmd(cmd, ignore_all=False, debug=False):
        try:
            subprocess.check_call(cmd, **...)
        except Exception as e:
            if debug:
                perror(f"'{' '.join(cmd)}' failed: {e}")
            if not ignore_all:
                raise

This preserves all functionality, removes repetition, and centralizes both proxy-string logic and debug messaging.
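For illustration, here is the merge behavior the suggested _get_no_proxy_str would produce (a standalone sketch mirroring the method's logic, not code from the PR):

    import os

    # Host has both spellings set, with stray whitespace and duplicates.
    os.environ["NO_PROXY"] = "example.com, 10.0.0.0/8"
    os.environ["no_proxy"] = "localhost"

    essentials = {"localhost", "127.0.0.1"}
    parts = {
        p.strip()
        for var in ("NO_PROXY", "no_proxy")
        for p in os.getenv(var, "").split(",")
        if p.strip()
    }
    print(",".join(sorted(parts | essentials)))
    # -> 10.0.0.0/8,127.0.0.1,example.com,localhost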
ramalama/engine.py (Outdated; same hunk as above, ending at the new _add_proxy_settings_for_run definition)
issue (code-quality): We've found these issues:
- Add guard clause (last-if-guard)
- Use named expression to simplify assignment and conditional (use-named-expression)
Force-pushed: b3487cd → cef86f2
Could we put all this in libexec/ramalama/ramalama-run-core instead? It's cleaner: no if statement is required, since at this point we know we are in the run code for sure.
Would it be better/easier to just use podman run --network=none --http-proxy=false? Although that option does not exist with Docker.
The thing is, if we put it in libexec/ramalama/ramalama-run-core and solve it in Python, it should work with docker/podman/nocontainers.
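For reference, a sketch of the podman-only alternative raised above (the image name is illustrative; podman's --http-proxy=false stops proxy environment variables from being passed into the container, and Docker has no equivalent flag):

    import subprocess

    # Run with an isolated network and without inheriting host proxy variables.
    cmd = [
        "podman", "run", "--network=none", "--http-proxy=false",
        "quay.io/ramalama/ramalama:latest",
    ]
    subprocess.run(cmd, check=True)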
Force-pushed: cef86f2 → f19eb33
I reverted the engine.py change and switched to libexec/ramalama/ramalama-run-core; can you please review it? TY
Since ramalama-run-core runs in a container, are we sure the environment variables from the host are getting passed into the command?
I built a ramalama image with my changes locally. Here is my test:
Using quay.io/ramalama/ramalama:0.8:
Using my locally built image:
I don't think I need to push the ramalama image to quay.io; you have an official CI/CD system that handles building and pushing official images to quay.io/ramalama/, correct?
libexec/ramalama/ramalama-run-core (Outdated)

    @@ -60,6 +60,25 @@ def initialize_args():

    def main(args):
        # --- START OF PROXY FIX ---
Please remove this line.
libexec/ramalama/ramalama-run-core (Outdated)

        for proxy_var in ['http_proxy', 'https_proxy', 'HTTP_PROXY', 'HTTPS_PROXY']:
            if proxy_var in os.environ:
                del os.environ[proxy_var]
        # --- END OF PROXY FIX ---
Please remove this line.
Yes, we usually rebuild on Mondays, so once you remove those two lines I pointed out, I will merge, and this fix should show up in the next release.
When 'ramalama run' is used with '--network none', it was inheriting host proxy environment variables. This caused the internal client to fail when connecting to the internal llama-server on 127.0.0.1, as it tried to route loopback traffic through the unreachable proxy.

This change modifies engine.py to:
- Correctly set NO_PROXY/no_proxy for localhost and 127.0.0.1.
- Explicitly unset http_proxy, https_proxy, HTTP_PROXY, and HTTPS_PROXY variables for the container when the 'run' subcommand is invoked.

This allows the internal client to connect directly to the internal server, resolving the connection error.

Fixes: containers#1414
Signed-off-by: Song Liu <[email protected]>
Force-pushed: f19eb33 → 206d669
Cool, please take a look again, thank you!
LGTM
Thanks @melodyliu1986
Thing is, ramalama-run-core should never use a proxy; it should always be localhost. ramalama-client-core called without ramalama-run-core could potentially use a proxy for a remote server.
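The distinction above could be expressed roughly like this (an illustrative sketch; the helper name is hypothetical, not code from the PR):

    import os
    from urllib.parse import urlparse

    LOOPBACK_HOSTS = {"localhost", "127.0.0.1"}

    def scrub_proxies_if_loopback(server_url: str) -> None:
        # ramalama-run-core always targets a llama-server on loopback, so proxy
        # variables must never apply; a standalone ramalama-client-core may talk
        # to a remote server, where a host proxy can be legitimate.
        if urlparse(server_url).hostname in LOOPBACK_HOSTS:
            for var in ("http_proxy", "https_proxy", "HTTP_PROXY", "HTTPS_PROXY"):
                os.environ.pop(var, None)
            os.environ["NO_PROXY"] = os.environ["no_proxy"] = "localhost,127.0.0.1"

    scrub_proxies_if_loopback("http://127.0.0.1:8080/v1")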
When 'ramalama run' is used with '--network none', it was inheriting host proxy environment variables. This caused the internal client to fail when connecting to the internal llama-server on 127.0.0.1, as it tried to route loopback traffic through the unreachable proxy.
This change modifies engine.py to:
- Correctly set NO_PROXY/no_proxy for localhost and 127.0.0.1.
- Explicitly unset http_proxy, https_proxy, HTTP_PROXY, and HTTPS_PROXY variables for the container when the 'run' subcommand is invoked.
This allows the internal client to connect directly to the internal server, resolving the connection error.
Fixes: #1414
Testing before the change:
Testing after the change:
Summary by Sourcery
Ensure the 'run' subcommand bypasses host proxy settings so the internal client can connect to the local llama-server.
Bug Fixes:
- Prevent host proxy environment variables from breaking loopback connections when 'ramalama run' is used with '--network none', by setting NO_PROXY/no_proxy for localhost and 127.0.0.1 and unsetting the other proxy variables.
Enhancements:
- Pass the debug flag through the pull logic and surface a detailed notice when an image pull fails.