Conversation

@LUK3ARK (Contributor) commented Jun 21, 2024

Feature or Problem

  • Updated to the latest version of NATS.
  • Updated to the latest version of Wasmtime.
  • Updated to the latest version of wRPC.
  • Updated the outgoing HTTP function to accept a request-options parameter.
  • Updated the bus call function to work with the new wRPC multiplexed streams and session error handling.
  • Added get_func_paths to the bus to share path indexing from the host to the runtime for wRPC.
  • Created an internal IncomingInputStream type for compatibility, since wRPC no longer exports it (see the sketch after this list).
  • Added the trappable imports setting, which is no longer the default in the new version (also covered in the sketch below).
  • Added a paths variable to components to avoid carrying the Resolve on objects intended to be cheaply cloneable.
  • Updated how functions are managed across instances.
  • Updated to use the wRPC transport NATS client. A client builder was added, but it may be better to revert to a thinner wrapper or remove it entirely.
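
A minimal sketch of two of the internal changes above (the IncomingInputStream compatibility type and the trappable imports setting). The stream's element type, the "example" world name, and the assumed wit/ directory are all placeholders here, not the actual wasmCloud definitions:

```rust
use anyhow::Result;
use bytes::Bytes;
use futures::Stream;

// Compatibility alias for the stream type wRPC no longer exports; whether
// the item type is Bytes or Vec<u8> is an assumption in this sketch.
pub type IncomingInputStream = Box<dyn Stream<Item = Result<Bytes>> + Send + Unpin>;

// Trappable imports are no longer the default in recent Wasmtime, so the
// bindgen macro must opt in explicitly; "example" is a placeholder world
// assumed to live in a local wit/ directory.
wasmtime::component::bindgen!({
    world: "example",
    async: true,
    trappable_imports: true,
});
```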

Related Issues

  1. Added a TODO note to double-check the WASI mapping - I think these started raising errors now that unmapped bindings generate errors...

  2. Temporarily commented out some trait implementations for capability providers to allow compilation. These need to be reimplemented before the end of this pull request to support first-party interfaces in wasmCloud @ricochet. They will need to be implemented against the newest wRPC API, so they will likely change in form anyway.
    To resolve this, we need to determine the best way to map the different invocation types. Options include:

    • Finding a way to fulfill these requirements now that wRPC no longer generates the serving functions directly.
    • Continuing to use the wrpc_interface_http, wrpc_interface_blobstore, etc., crate dependencies in parallel.
  3. Update the provider SDK after the wRPC implementation has been signed off.

  4. Implement resources on top of the new wRPC integration; this can move forward once @rvolosatovs finalizes native resource support in wRPC.

Consumer Impact

Make sure we have support for wRPC and the new bindings for providers, or port them to the new style. This may include some breaking changes, as @brooksmtownsend has noted.

Other improvements

  • Reassess the client builder for the wRPC transport NATS client to determine whether a thinner wrapper, or removing it entirely, is the better option (see the sketch below).
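
For comparison, the "thinner wrapper" option could look roughly like this sketch; WrpcNatsClient and its methods are hypothetical names, and the point is only that callers construct the transport client themselves instead of going through a builder:

```rust
use wrpc_transport_nats::Client as TransportClient;

/// Hypothetical thin wrapper over the wRPC NATS transport client.
pub struct WrpcNatsClient {
    inner: TransportClient,
}

impl WrpcNatsClient {
    /// Callers build the transport client themselves; the wrapper exists
    /// only to hang wasmCloud-specific conveniences off of it.
    pub fn from_transport(inner: TransportClient) -> Self {
        Self { inner }
    }

    /// Borrow the underlying transport client for direct wRPC calls.
    pub fn transport(&self) -> &TransportClient {
        &self.inner
    }
}
```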

@LUK3ARK requested review from a team as code owners on June 21, 2024 at 03:16
@LUK3ARK (Contributor, Author) commented Jun 29, 2024

Second go-round; big thanks to @rvolosatovs for the help:

  • There is a lot of friction in moving over, so I took a different approach and kept both the previous wRPC client (now referred to as LegacyClient) and the new WrpcClient. The provider SDK still relies on the legacy client, and any messages that need to be handled directly for the http and wasmcloud messaging custom use cases will be handled by the legacy client, ensuring the unresourced API remains in use (see the sketch after this list).

  • I have removed the call function from the bus, as it existed only to control middleware routing, which is now completely encapsulated by wRPC. All custom components fall back to the new wRPC client, which supports resources and streaming more effectively. The only invocations that should be delegated to the runtime are the unresourced definitions for blobstore, keyvalue, etc.; everything else is bound by wRPC directly.

  • We might need to revisit the provider-sdk for any resources exported by providers.

  • To reiterate: any instances that need to be explicitly served by the legacy client for any component will do so.

  • I had to move some of the logic around, and I don't like that the instantiate function now requires a lattice string. It works, though, and the config structs have to be remade for these processes anyway.

  • Because we now store a lattice property on the handler for each component, can someone confirm the expected behaviour of copy_for_new? We could even change lattice to an Option if need be :)
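
A minimal sketch of the dual-client arrangement described in this comment; LegacyClient and WrpcClient stand in for the PR's two client types, and the instance strings in the match are illustrative assumptions, not the actual routing table:

```rust
// Placeholder types standing in for this PR's two clients.
pub struct LegacyClient;
pub struct WrpcClient;

/// Per-component handler. The lattice field is what copy_for_new would
/// presumably carry over; it could become Option<String> if a handler
/// may exist outside a lattice.
pub struct Handler {
    legacy: LegacyClient,
    wrpc: WrpcClient,
    lattice: String,
}

pub enum ClientRef<'a> {
    Legacy(&'a LegacyClient),
    Wrpc(&'a WrpcClient),
}

impl Handler {
    /// Route the unresourced builtin interfaces to the legacy client and
    /// everything else to the new wRPC client.
    pub fn client_for(&self, instance: &str) -> ClientRef<'_> {
        match instance {
            "wasi:http/incoming-handler" | "wasmcloud:messaging/handler" => {
                ClientRef::Legacy(&self.legacy)
            }
            _ => ClientRef::Wrpc(&self.wrpc),
        }
    }

    /// The lattice this handler is scoped to.
    pub fn lattice(&self) -> &str {
        &self.lattice
    }
}
```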

@rvolosatovs (Member) commented:

I believe this is now addressed given that #2459 is merged?

@brooksmtownsend (Member) commented:

Closing because of #2459, but @LUK3ARK please re-open if there's a piece that hasn't been addressed!
