
Conversation

WhyNotHugo
Contributor

@WhyNotHugo WhyNotHugo commented Sep 25, 2022

The existing implementation expects alternating messages from each
source. If either side delivered two consecutive messages, these were
not handled properly and communication ended up out-of-sync.

This new approach runs two threads, each one handling data in one
direction.

I replaced the BUFFER_SIZE because the caret (^) operator means bitwise
XOR in Rust, not exponentiation, so:

    1024 ^ 2 = 1026
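A minimal check of the mix-up (Rust has no `^` power operator; the intended 1 MiB is `1024 * 1024`):

```rust
// `^` is bitwise XOR in Rust: 1024 ^ 2 flips bit 1 of 1024, giving 1026,
// rather than squaring. The intended buffer size is 1024 * 1024 bytes.
const BUFFER_SIZE: usize = 1024 * 1024;
```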

The read timeout has been removed, since we now actually WANT to block
indefinitely if there is no traffic in a given direction.

Fixes: #13
Fixes: #14
Closes: #16
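The per-direction loop can be sketched roughly like this (the `pump` helper and the wiring comment are illustrative, not the PR's actual code):

```rust
use std::io::{Read, Write};

const BUFFER_SIZE: usize = 1024 * 1024; // 1 MiB

// Copy bytes from `src` to `dst` until EOF. Each thread runs one of these,
// so a blocking read in one direction never stalls the other direction.
fn pump(mut src: impl Read, mut dst: impl Write) -> std::io::Result<u64> {
    let mut buf = vec![0u8; BUFFER_SIZE];
    let mut total = 0u64;
    loop {
        let n = src.read(&mut buf)?;
        if n == 0 {
            return Ok(total); // EOF: the peer closed its end
        }
        dst.write_all(&buf[..n])?;
        dst.flush()?;
        total += n as u64;
    }
}

// Wiring sketch (std::thread::scope, Rust 1.63+):
// thread::scope(|s| {
//     s.spawn(|| pump(io::stdin().lock(), &socket));  // browser -> KPXC
//     s.spawn(|| pump(&socket, io::stdout().lock())); // KPXC -> browser
// });
```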

Hugo Osvaldo Barrera added 2 commits September 25, 2022 16:46
@WhyNotHugo
Contributor Author

A side note / remaining caveat (not a new issue, rather an existing one that's not very common).

This is how a native messaging app should communicate:

> On the application side, you use standard input to receive messages and standard output to send them.
> Each message is serialized using JSON, UTF-8 encoded and is preceded with an unsigned 32-bit value containing the message length in native byte order.

KeePassXC doesn't seem to be doing this; its output is not prefixed with the u32. The proxy currently reads data and prefixes it with the length.

Generally, when reading, exactly one message is returned, but if for some reason the message is only partially written (a signal interrupts writing?), then the proxy will assume that chunk is a single message and send it. This bogus message will then fail to parse.

I'm not entirely sure what happens when KPXC calls write() twice in a row and we call read(). I think (hope?) that both buffers will never be returned in a single read() call; otherwise that could also mess up communication.

I think this needs to be fixed in KPXC itself. It'll be a breaking change; however, I see no other obvious way to work around this.
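The framing the spec describes can be sketched as follows (helper names are illustrative, not from this repo):

```rust
use std::io::{self, Read, Write};

// Native-messaging framing: a u32 length in native byte order, then the
// UTF-8 JSON payload. Sketch only; names are hypothetical.
fn write_msg(mut w: impl Write, json: &[u8]) -> io::Result<()> {
    w.write_all(&(json.len() as u32).to_ne_bytes())?;
    w.write_all(json)
}

fn read_msg(mut r: impl Read) -> io::Result<Vec<u8>> {
    let mut len = [0u8; 4];
    r.read_exact(&mut len)?; // EOF here means the peer went away
    let mut buf = vec![0u8; u32::from_ne_bytes(len) as usize];
    r.read_exact(&mut buf)?; // a short read mid-message is a hard error
    Ok(buf)
}
```

With this framing a partially-written message surfaces as a read error instead of a bogus "message" that fails to parse downstream.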

Trying to reconnect to KeePassXC would be great, but the only way to do
that cleanly is to rewrite the whole thing to use async. Otherwise
there's no clean way of "stopping" the other thread while it's doing a
blocking read.
@varjolintu
Owner

> A side note / remaining caveat (not a new issue, rather an existing one that's not very common).
>
> This is how a native messaging app should communicate:
>
> > On the application side, you use standard input to receive messages and standard output to send them.
> > Each message is serialized using JSON, UTF-8 encoded and is preceded with an unsigned 32-bit value containing the message length in native byte order.
>
> KeePassXC doesn't seem to be doing this; its output is not prefixed with the u32. The proxy currently reads data and prefixes it with the length.
>
> Generally, when reading, exactly one message is returned, but if for some reason the message is only partially written (a signal interrupts writing?), then the proxy will assume that chunk is a single message and send it. This bogus message will then fail to parse.
>
> I'm not entirely sure what happens when KPXC calls write() twice in a row and we call read(). I think (hope?) that both buffers will never be returned in a single read() call; otherwise that could also mess up communication.
>
> I think this needs to be fixed in KPXC itself. It'll be a breaking change; however, I see no other obvious way to work around this.

This is exactly how this should work.

If messages are sent or received as chunks, those are definitely not parsed correctly on the KeePassXC side. This is why the buffer is set quite large, just in case.

The prefixed 32-bit values are only used with stdin/stdout.

@Tibladar

Tibladar commented Sep 26, 2022

Dunno if relevant, but thread::scope requires Rust 1.63 which Ubuntu doesn't provide yet.

Ubuntu 22.04: Rust 1.59
Ubuntu 22.10: Rust 1.61

Maybe a binary release on Github and a note in the readme on the required Rust version would be good

@WhyNotHugo
Contributor Author

> This is exactly how this should work.

Right, it's a bug in how KPXC expects the communication to happen. Any signal or similar interruption on either side will result in broken messages. It would be ideal for KPXC's socket to expect the message length before the message, but it's not something that we can fix here.

> If messages are sent or received as chunks, those are definitely not parsed correctly on the KeePassXC side. This is why the buffer is set quite large, just in case.

I just realised that the buffer is now 1 KiB, not 1 MiB. Still up from 1026 bytes. Should I extend it to 1 MiB, or is 1 KiB enough for all cases?

@WhyNotHugo
Contributor Author

If you prefer, I can use crossbeam::scope so that it can be built with older Rust versions (and potentially packaged by distros that ship them, too).

It's your choice, lemme know what you prefer, @varjolintu.

@varjolintu
Owner

> > This is exactly how this should work.
>
> Right, it's a bug in how KPXC expects the communication to happen. Any signal or similar interruption on either side will result in broken messages. It would be ideal for KPXC's socket to expect the message length before the message, but it's not something that we can fix here.
>
> > If messages are sent or received as chunks, those are definitely not parsed correctly on the KeePassXC side. This is why the buffer is set quite large, just in case.
>
> I just realised that the buffer is now 1 KiB, not 1 MiB. Still up from 1026 bytes. Should I extend it to 1 MiB, or is 1 KiB enough for all cases?

It should be 1 MiB.

> It's your choice, lemme know what you prefer, @varjolintu.

I'd probably go for a newer Rust version. This proxy is just a side project, so I'm not even expecting it to support LTS releases, etc.

@varjolintu
Owner

varjolintu commented Sep 26, 2022

> Right, it's a bug in how KPXC expects the communication to happen. Any signal or similar interruption on either side will result in broken messages. It would be ideal for KPXC's socket to expect the message length before the message, but it's not something that we can fix here.

What the Qt proxy does is just read the whole message from the Unix socket, check its length, and write the length + message to stdout. Do you think some extra checks are still needed?

@WhyNotHugo
Contributor Author

🤦 1024 * 1024 = 1 MiB. Haven't had coffee this Monday morning, sorry.

@WhyNotHugo
Contributor Author

> What the Qt proxy does is just read the whole message from the Unix socket, check its length, and write the length + message to stdout.

It would also have this issue; if a signal interrupts KPXC mid-write, the single message would be written as two chunks and two bogus lengths would be prepended (one to each).

It's unlikely and racy, but possible.

@varjolintu varjolintu self-requested a review September 27, 2022 10:00
@WhyNotHugo
Contributor Author

It seems that this does not exit properly. After having my laptop run for some hours (and KPXC having been locked at some point), I found two instances of the proxy running, one of them with very high CPU usage.

Not quite sure what's going on.

My rudimentary logging mechanism shows just this:

    thread '<unnamed>' panicked at 'called `Result::unwrap()` on an `Err` value: Kind(BrokenPipe)', src/main.rs:66:61
    note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

I don't think that's related though; that's probably just when KPXC got closed.

I'm logging using this wrapper around the proxy:

    tee -a /tmp/proxy.in | /usr/bin/keepassxc-proxy 2> /tmp/proxy.err | tee /tmp/proxy.out
    echo "proxy exited $?" >> /tmp/proxy.code

@varjolintu
Owner

@WhyNotHugo Is it possible that this happens because the read socket is blocking?

@WhyNotHugo
Contributor Author

I've been using this version since posting this PR. It's been working pretty well, #13 and #14 are gone, and general experience has been a lot better. 👌

@WhyNotHugo
Contributor Author

WhyNotHugo commented Oct 1, 2022

> Is it possible that this happens because the read socket is blocking?

No, a blocking read means that the reading thread pauses completely if there's nothing to read. It is un-paused by the kernel as soon as data is available.

@WhyNotHugo
Contributor Author

I managed to repro the 100% CPU usage scenario.

It happens when Firefox exits; one thread crashes, but the other one loops at 100% CPU. I need to change the flow so that the whole program exits when one thread crashes (I originally misunderstood that thread::scope did this, but it doesn't).
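The misunderstanding is easy to demonstrate: `thread::scope` joins every scoped thread and only re-raises a child's panic afterwards, so a panicking thread does not tear down its sibling. A small sketch (names here are illustrative):

```rust
use std::panic;
use std::sync::atomic::{AtomicBool, Ordering};
use std::thread;

static SIBLING_FINISHED: AtomicBool = AtomicBool::new(false);

// One scoped thread panics (e.g. Firefox closed stdin); the other still
// runs to completion, because scope() joins everything before returning
// and only then re-raises the child's panic.
fn demo() -> bool {
    let result = panic::catch_unwind(|| {
        thread::scope(|s| {
            s.spawn(|| panic!("broken pipe"));
            s.spawn(|| SIBLING_FINISHED.store(true, Ordering::SeqCst));
        });
    });
    result.is_err() && SIBLING_FINISHED.load(Ordering::SeqCst)
}
```

In the real proxy the "sibling" is a blocking read loop rather than a quick store, which is why it kept spinning instead of exiting.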

@varjolintu
Owner

> I managed to repro the 100% CPU usage scenario.
>
> It happens when Firefox exits; one thread crashes, but the other one loops at 100% CPU. I need to change the flow so that the whole program exits when one thread crashes (I originally misunderstood that thread::scope did this, but it doesn't).

Good catch! Thanks for all the work btw :) I appreciate it.

@WhyNotHugo
Contributor Author

WhyNotHugo commented Oct 5, 2022

When Firefox exits, we need to exit, but if keepassxc exits, we can try and stay alive and wait for it to re-appear.

If Firefox exits, reconnection is impossible, so it might make sense to just panic exit in that situation.

So the main thread should be the one blocking on reads from firefox/stdin, and the other thread, which is never joined, should never exit; it just waits for the socket to re-appear in any valid location.
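The reconnect side could look roughly like this (a sketch; the socket path handling and retry interval are assumptions, not this repo's code):

```rust
use std::os::unix::net::UnixStream;
use std::thread;
use std::time::Duration;

// Keep retrying the KeePassXC socket instead of exiting when it goes away;
// stdin (Firefox) remains the only peer whose EOF ends the whole process.
fn connect_with_retry(path: &str) -> UnixStream {
    loop {
        match UnixStream::connect(path) {
            Ok(stream) => return stream,
            Err(_) => thread::sleep(Duration::from_millis(500)),
        }
    }
}
```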

@WhyNotHugo
Contributor Author

WhyNotHugo commented Oct 11, 2022 via email

@WhyNotHugo
Contributor Author

Latest push seems to have been stable for a few days now; the CPU spike issues are gone.

@varjolintu
Owner

> Latest push seems to have been stable for a few days now; the CPU spike issues are gone.

Good to know. I'll test this soon.

@varjolintu
Owner

Sorry for the delay. Just tested it for a while and it works perfectly.

@varjolintu varjolintu merged commit 57d99bf into varjolintu:master Nov 4, 2022
Development

Successfully merging this pull request may close these issues.

- KeePassXC's broadcast messages are not supported
- Disconnects when multiple entries are available [Firefox]