Supporting message consumption using alternative schemas #318

@jlrobins

Description

There's a desire for the extension-side message consumer to be able to test whether previously produced messages in a topic can be consumed with a revised / alternative schema.

To that end, the least-churn path appears to be for the extension side to pass one additional datapoint with the consume request: the specific schema id to use. That could travel either as a request header (pro: the existing OpenAPI document can be used as-is; con: more subtle) or as an optional request body parameter (pro: directly documented; con: the sidecar and extension then need to carry a patched version of whatever OpenAPI specification describes the consume route, regenerate clients and request models, ...).
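For concreteness, a sketch of the body-parameter option; the model name and field here are assumptions, not the actual generated OpenAPI model:

```java
import com.fasterxml.jackson.annotation.JsonProperty;

// Hypothetical consume request model (name assumed). Adding the field here
// means the consume route's OpenAPI spec must be patched to match, and the
// sidecar + extension clients regenerated.
public class SimpleConsumeRequest {

  // ... existing consume options elided ...

  // Optional override: when present, deserialize records with this schema id
  // instead of the id embedded in each record's preamble.
  @JsonProperty("schema_id")
  private Integer schemaId; // null => existing magic-byte behavior

  public Integer getSchemaId() { return schemaId; }
  public void setSchemaId(Integer schemaId) { this.schemaId = schemaId; }
}
```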

Then:

  • Update KafkaConsumeResource::messageViewer() to handle the optional header or request-body parameter and pass it along into createMessageViewerContext().
  • Add an Optional<> overload of KafkaConsumeResource::createMessageViewerContext() handling the overridden schemaId, and make MessageViewerContext carry it as an optional value.
  • Within RecordDeserializer::serialize(), pivot the determination of int schemaId off the presence of a provided schemaId instead of always deriving it from the record's preamble / magic bytes (see the sketch after this list).
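
A minimal compilable sketch of those three steps, assuming the header option. Every signature here is an approximation of the classes named above, and the header name is made up:

```java
import java.util.Optional;

// Step 2: MessageViewerContext carries the override as an optional value.
// (Existing fields elided; a record keeps the sketch short.)
record MessageViewerContext(Optional<Integer> schemaIdOverride) {}

class KafkaConsumeResource {

  // Step 1: messageViewer() accepts the optional parameter -- e.g. via
  // @HeaderParam("X-Message-Viewer-Schema-Id") in a JAX-RS resource -- and
  // passes it along into createMessageViewerContext().
  MessageViewerContext messageViewer(Integer schemaIdHeader) {
    return createMessageViewerContext(Optional.ofNullable(schemaIdHeader));
  }

  // Existing no-override path delegates to the new Optional<> overload.
  MessageViewerContext createMessageViewerContext() {
    return createMessageViewerContext(Optional.empty());
  }

  MessageViewerContext createMessageViewerContext(Optional<Integer> schemaIdOverride) {
    return new MessageViewerContext(schemaIdOverride);
  }
}

class RecordDeserializer {

  // Step 3: pivot off the presence of a provided schemaId instead of always
  // reading the record's preamble / magic bytes.
  int resolveSchemaId(byte[] recordBytes, MessageViewerContext context) {
    return context.schemaIdOverride()
        .orElseGet(() -> schemaIdFromPreamble(recordBytes));
  }

  int schemaIdFromPreamble(byte[] recordBytes) {
    // Magic byte + 4-byte schema id; see the wire-format sketch below.
    throw new UnsupportedOperationException("sketched separately below");
  }
}
```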

That said, this seems incomplete: the fetched schema (var parsedSchema) is not passed into the actual deserialization strategies (handleAvro(), ...). Those strategies drive deserialization from the whole bytes of the message, so the deserializer itself implicitly re-fetches the schema from the SR client based on the magic bytes?
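For reference, the preamble in question is the Confluent wire format: a 0x0 magic byte followed by a big-endian 4-byte schema id, then the payload. A sketch of the id extraction the deserializer effectively performs (the class and method are illustrative, not the sidecar's):

```java
import java.nio.ByteBuffer;

final class WireFormat {

  // Confluent wire format: [0x0 magic][4-byte big-endian schema id][payload].
  // Because handleAvro() and friends receive the whole message bytes, they can
  // re-read this id and re-fetch the schema from the SR client on their own --
  // which is why an override has to reach this point, or the strategies must
  // be changed to accept a pre-fetched parsedSchema.
  static int schemaIdFromPreamble(byte[] messageBytes) {
    ByteBuffer buffer = ByteBuffer.wrap(messageBytes);
    byte magic = buffer.get();
    if (magic != 0x0) {
      throw new IllegalArgumentException("Unknown magic byte: " + magic);
    }
    return buffer.getInt(); // ByteBuffer defaults to big-endian
  }
}
```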

Could then be tested with some curl invocations (sketched below). If promising, consider the UI-side question of how the user should gesture to drive consumption with an arbitrary schema id within the same cluster's schema registry. Once that is solved, wash/rinse/repeat: expand to specifying a schema from an alternate connection/environment/schema registry, then support the larger desire of "If I revise the schema and have the revised one in my local docker schema registry, can a consumer then use it to consume those records from the real topic over in the real kafka cluster?"
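For example, something like the following; the route, port, header name, and body shape are all placeholders, not the sidecar's actual API:

```bash
# Baseline: consume with each record's embedded schema id (current behavior).
curl -s -X POST "http://localhost:8080/<consume-route>" \
  -H "Content-Type: application/json" \
  -d '{"max_poll_records": 10}'

# Same request, but deserialize with schema id 42 via the hypothetical header.
curl -s -X POST "http://localhost:8080/<consume-route>" \
  -H "Content-Type: application/json" \
  -H "X-Message-Viewer-Schema-Id: 42" \
  -d '{"max_poll_records": 10}'
```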
