# librdkafka v2.9.0

librdkafka v2.9.0 is a feature release:

 * Identify brokers only by broker id (#4557, @mfleming)
 * Remove unavailable brokers and their thread (#4557, @mfleming)
 * Fix for librdkafka yielding before timeouts had been reached (#)
 * Removed a 500ms latency when a consumer partition switches to a different
   leader (#)
 * The mock cluster implementation removes brokers from the Metadata response
   when they're not available, which better simulates the actual behavior of
   a cluster that is using KRaft (#); a usage sketch follows this list.
 * Doesn't remove topics from the cache on temporary Metadata errors, only on
   metadata cache expiry (#).
 * Doesn't mark the topic as unknown if it had been marked as existent earlier
   and `topic.metadata.propagation.max.ms` hasn't passed yet (#).
 * Doesn't update partition leaders if the topic in the metadata
   response has errors (#).
 * Only topic authorization errors in a metadata response are considered
   permanent and are returned to the user (#).
 * The function `rd_kafka_offsets_for_times` refreshes leader information
   if the error requires it, allowing it to succeed on
   subsequent manual retries (#).
 * Deprecated the `api.version.request`, `api.version.fallback.ms` and
   `broker.version.fallback` configuration properties (#); a configuration
   sketch follows this list.
 * When the consumer is closed before destroying the client, the operations
   queue isn't purged anymore, as it contains operations
   unrelated to the consumer group (#).
 * When making multiple changes to the consumer subscription in a short time,
   no unknown topic error is returned for topics that are in the new
   subscription but weren't in the previous one (#).
 * Fix for the case where a metadata refresh enqueued on an unreachable broker
   prevents refreshing the controller or the coordinator until that broker
   becomes reachable again (#).
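
The mock cluster change above can be exercised through the public mock API
declared in `rdkafka_mock.h`. The sketch below is illustrative only: it assumes
an existing client handle `rk`, and the broker count, broker id and helper name
`mock_broker_down_example` are made up for the example.

```c
#include <librdkafka/rdkafka.h>
#include <librdkafka/rdkafka_mock.h>

/* Illustrative sketch: create a 3-broker mock cluster, take one broker down
 * and observe that, as of this release, the downed broker is also omitted
 * from the mock cluster's Metadata responses, as a KRaft cluster would do. */
static void mock_broker_down_example(rd_kafka_t *rk) {
        rd_kafka_mock_cluster_t *mcluster = rd_kafka_mock_cluster_new(rk, 3);

        /* Broker 2 becomes unavailable: subsequent Metadata responses from
         * the mock cluster no longer list it. */
        rd_kafka_mock_broker_set_down(mcluster, 2);

        /* ... point a client at rd_kafka_mock_cluster_bootstraps(mcluster)
         * and request metadata here ... */

        /* Bring the broker back; it is listed in Metadata responses again. */
        rd_kafka_mock_broker_set_up(mcluster, 2);

        rd_kafka_mock_cluster_destroy(mcluster);
}
```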
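
Because the ApiVersion-related properties listed above are deprecated, a client
configuration simply leaves them unset. A minimal sketch, with an illustrative
broker address and group id and most error handling trimmed:

```c
#include <librdkafka/rdkafka.h>

/* Sketch: create a consumer without setting the deprecated
 * `api.version.request`, `api.version.fallback.ms` or
 * `broker.version.fallback` properties. */
static rd_kafka_t *create_consumer(void) {
        char errstr[512];
        rd_kafka_conf_t *conf = rd_kafka_conf_new();

        /* Only set what is actually needed. */
        if (rd_kafka_conf_set(conf, "bootstrap.servers", "localhost:9092",
                              errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK ||
            rd_kafka_conf_set(conf, "group.id", "example-group",
                              errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK) {
                rd_kafka_conf_destroy(conf);
                return NULL;
        }

        /* rd_kafka_new() takes ownership of conf on success. */
        return rd_kafka_new(RD_KAFKA_CONSUMER, conf, errstr, sizeof(errstr));
}
```
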
## Fixes

   temporarily or permanently so we always remove it and it'll be added back when
   it becomes available again.
   Happens since 1.x (#4557, @mfleming).
 * Issues: #
   librdkafka code using `cnd_timedwait` was yielding before a timeout occurred,
   without the condition being fulfilled, because of spurious wake-ups.
   Solved by verifying with a monotonic clock that the expected point in time
   was reached and calling the function again if needed; a sketch of this
   pattern follows this list.
   Happens since 1.x (#).
 * Issues: #
   Doesn't remove topics from the cache on temporary Metadata errors, only on
   metadata cache expiry. This allows the client to continue working
   in case of temporary problems with the Kafka metadata plane.
   Happens since 1.x (#).
 * Issues: #
   Doesn't mark the topic as unknown if it had been marked as existent earlier
   and `topic.metadata.propagation.max.ms` hasn't passed yet. This achieves the
   effect expected from this property even if a different broker had
   previously reported the topic as existent.
   Happens since 1.x (#).
 * Issues: #
   Doesn't update partition leaders if the topic in the metadata
   response has errors. This is in line with what the Java client does and
   avoids segmentation faults for unknown partitions.
   Happens since 1.x (#).
 * Issues: #
   Only topic authorization errors in a metadata response are considered
   permanent and are returned to the user. This is in line with what the Java
   client does and avoids returning to the user an error that wasn't meant to
   be permanent.
   Happens since 1.x (#).
 * Issues: #
   Fix for the case where a metadata refresh enqueued on an unreachable broker
   prevents refreshing the controller or the coordinator until that broker
   becomes reachable again. Given that the request continues to be retried on
   that broker, the counter for refreshing complete broker metadata doesn't
   reach zero and prevents the client from obtaining the new controller or the
   group or transactional coordinator.
   This causes a series of debug messages like
   "Skipping metadata request: ... full request already in-transit", until
   the broker the request is enqueued on is up again.
   Solved by not retrying these kinds of metadata requests.
   Happens since 1.x (#).

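The spurious wake-up fix above follows a common pattern: re-check a monotonic
deadline after every wake-up and wait again if it hasn't been reached yet. The
sketch below only illustrates that pattern with C11 threads and a POSIX
monotonic clock; it is not the actual librdkafka code and the helper names are
made up.

```c
#include <stdbool.h>
#include <threads.h>
#include <time.h>

/* Milliseconds from a monotonic clock, immune to wall-clock adjustments. */
static long long monotonic_ms(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (long long)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
}

/* Wait up to timeout_ms for *flag to become true. The caller must hold mtx.
 * Returns true if the condition was fulfilled, false on timeout. */
static bool wait_flag(cnd_t *cnd, mtx_t *mtx, const bool *flag, int timeout_ms) {
        long long deadline = monotonic_ms() + timeout_ms;

        while (!*flag) {
                long long remaining = deadline - monotonic_ms();
                if (remaining <= 0)
                        return false; /* Deadline reached on the monotonic clock */

                /* cnd_timedwait() takes an absolute TIME_UTC timespec and may
                 * wake up spuriously: the monotonic deadline check above makes
                 * sure we wait again instead of yielding early. */
                struct timespec abstime;
                timespec_get(&abstime, TIME_UTC);
                abstime.tv_sec += remaining / 1000;
                abstime.tv_nsec += (remaining % 1000) * 1000000;
                if (abstime.tv_nsec >= 1000000000) {
                        abstime.tv_sec++;
                        abstime.tv_nsec -= 1000000000;
                }
                cnd_timedwait(cnd, mtx, &abstime);
        }
        return true;
}
```
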
### Consumer fixes

 * Issues: #
   When switching to a different leader, a consumer could wait 500ms
   (`fetch.error.backoff.ms`) before starting to fetch again, because the fetch
   backoff wasn't reset when moving to the new broker.
   Solved by resetting it, given there's no need to back off
   the first fetch on a different node. This way faster leader switches are
   possible.
   Happens since 1.x (#).
 * Issues: #
   The function `rd_kafka_offsets_for_times` refreshes leader information
   if the error requires it, allowing it to succeed on
   subsequent manual retries; a usage sketch follows this list. Similar to the
   fix done in 2.3.0 in `rd_kafka_query_watermark_offsets`. Additionally, the
   partition's current leader epoch is taken from the metadata cache instead
   of from the passed partitions.
   Happens since 1.x (#).
 * Issues: #
   When the consumer is closed before destroying the client, the operations
   queue isn't purged anymore, as it contains operations
   unrelated to the consumer group.
   Happens since 1.x (#).
 * Issues: #
   When making multiple changes to the consumer subscription in a short time,
   no unknown topic error is returned for topics that are in the new
   subscription but weren't in the previous one. The error was caused by the
   metadata request corresponding to the previous subscription.
   Happens since 1.x (#).
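
The `rd_kafka_offsets_for_times` change means a plain manual retry loop can now
succeed once the refreshed leader information arrives. The sketch below is an
illustrative usage example, not library code: the retry count, timeout and the
helper name `offset_for_time` are arbitrary choices.

```c
#include <stdio.h>
#include <librdkafka/rdkafka.h>

/* Resolve the offset for a given timestamp (ms), retrying a few times. */
static rd_kafka_resp_err_t
offset_for_time(rd_kafka_t *rk, const char *topic, int32_t partition,
                int64_t timestamp_ms, int64_t *offsetp) {
        rd_kafka_resp_err_t err = RD_KAFKA_RESP_ERR_NO_ERROR;
        int attempt;

        for (attempt = 0; attempt < 3; attempt++) {
                rd_kafka_topic_partition_list_t *offsets =
                    rd_kafka_topic_partition_list_new(1);

                /* On input .offset carries the timestamp to look up,
                 * on output it carries the resolved offset. */
                rd_kafka_topic_partition_list_add(offsets, topic, partition)
                    ->offset = timestamp_ms;

                err = rd_kafka_offsets_for_times(rk, offsets, 10 * 1000);
                if (!err)
                        err = offsets->elems[0].err; /* per-partition error */
                if (!err)
                        *offsetp = offsets->elems[0].offset;

                rd_kafka_topic_partition_list_destroy(offsets);

                if (!err)
                        return RD_KAFKA_RESP_ERR_NO_ERROR;

                fprintf(stderr, "offsets_for_times attempt %d failed: %s\n",
                        attempt + 1, rd_kafka_err2str(err));
        }
        return err;
}
```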