Description
There's a desire for the extension-side message consumer to be able to test whether previously produced messages in a topic can be consumed with a revised / alternative schema.
To that end, it appears the least-churn path would be for the extension side to specify an additional consume-request datapoint, the specific schema id to use, either as a request header (pro: can use the existing OpenAPI document; con: is more subtle) or as an optional request body parameter (pro: is directly documented; con: sidecar and extension then need to carry a patched version of whatever OpenAPI specification describes the consume route, regenerate clients and request models, ...).
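A minimal sketch of the header-based variant, assuming a JAX-RS-style resource; the header name `X-Consume-Schema-Id`, the types, and the signatures here are illustrative assumptions, not the project's actual API:

```java
import jakarta.ws.rs.HeaderParam;
import jakarta.ws.rs.POST;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.core.Response;
import java.util.Optional;

@Path("/message-viewer")
public class ConsumeWithSchemaIdSketch {

  @POST
  public Response messageViewer(
      @HeaderParam("X-Consume-Schema-Id") Integer schemaIdOverride, // absent header -> null
      String requestBody) {
    // Thread the optional override down into the context used by the deserializer.
    Object context = createMessageViewerContext(requestBody, Optional.ofNullable(schemaIdOverride));
    return Response.ok(context).build();
  }

  Object createMessageViewerContext(String requestBody, Optional<Integer> schemaIdOverride) {
    // ... build MessageViewerContext, carrying schemaIdOverride as an optional value ...
    return schemaIdOverride; // placeholder so the sketch compiles
  }
}
```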
Then:
- Update `KafkaConsumeResource::messageViewer()` to handle the header or optional request body parameter and pass it along into `createMessageViewerContext()`.
- Update / make an `Optional<>` overload of `KafkaConsumeResource::createMessageViewerContext()` handling the overridden `schemaId`. Make `MessageViewerContext` carry it as an optional value.
- Then within `RecordDeserializer::serialize()`, spice up the determination of `int schemaId` to pivot off of the presence of a provided `schemaId` instead of always determining it from the record's preamble / magic bytes (roughly as sketched below).
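A rough sketch of that pivot, assuming the record value follows the Confluent wire format (a `0x0` magic byte followed by a 4-byte big-endian schema id); the class and method names are illustrative, not the actual `RecordDeserializer` code:

```java
import java.nio.ByteBuffer;
import java.util.Optional;

final class SchemaIdResolutionSketch {

  /** Prefer an explicit override; otherwise fall back to the id embedded in the record preamble. */
  static int resolveSchemaId(byte[] value, Optional<Integer> overrideSchemaId) {
    if (overrideSchemaId.isPresent()) {
      return overrideSchemaId.get();
    }
    ByteBuffer buffer = ByteBuffer.wrap(value);
    byte magicByte = buffer.get();
    if (magicByte != 0x0) {
      throw new IllegalArgumentException("Unexpected magic byte: " + magicByte);
    }
    return buffer.getInt(); // the writer schema id registered in Schema Registry
  }
}
```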
That said, this seems incomplete. The fetched schema (`var parsedSchema`) is not passed into the actual deserialize strategy (`handleAvro()`, ...); instead those handlers drive the deserialization using the whole bytes of the message, and the deserializer itself implicitly re-fetches the schema from the SR client based on the magic bytes?
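If that is how it works, one possible way to make an override take effect without restructuring the strategy handlers would be to rewrite the five-byte preamble with the override id before the bytes reach `handleAvro()` and friends, so the SR-backed deserializer fetches the alternative schema itself. This is purely an assumed sketch of a workaround, not existing code:

```java
import java.nio.ByteBuffer;

final class WireFormatSketch {
  static final byte MAGIC_BYTE = 0x0;

  /** Returns a copy of the payload whose embedded schema id is replaced by overrideSchemaId. */
  static byte[] withSchemaId(byte[] payload, int overrideSchemaId) {
    if (payload == null || payload.length < 5 || payload[0] != MAGIC_BYTE) {
      throw new IllegalArgumentException("Not a schema-registry framed payload");
    }
    byte[] copy = payload.clone();
    ByteBuffer.wrap(copy, 1, 4).putInt(overrideSchemaId); // overwrite the 4-byte schema id
    return copy;
  }
}
```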
This could then be tested with some curl invocations. If promising, then consider the UI-side issues of how the user should gesture to drive a consume with an arbitrary schema id within the same cluster's schema registry. When that is solved, also consider wash/rinse/repeating this, expanding out to being able to specify a schema from an alternate connection/environment/schema registry, and then supporting the larger desire: "If I revise the schema and have the revised one in my local docker schema registry, can a consumer then use it to consume those records from the real topic over in the real kafka cluster?"