www/apps/book/app/learn/best-practices/third-party-sync/page.mdx
[Workflows](../../fundamentals/workflows/page.mdx) are special functions designed for long-running, asynchronous tasks. They provide features like [compensation](../../fundamentals/workflows/compensation-function/page.mdx), [retries](../../fundamentals/workflows/retry-failed-steps/page.mdx), and [async execution](../../fundamentals/workflows/long-running-workflow/page.mdx) that are essential for reliable data syncing.
When defining your syncing logic, such as pushing product data to a third-party service or pulling inventory data into Medusa, you should define a workflow that encapsulates this logic.
35
35
36
36
Medusa also exposes [built-in workflows](!resources!/medusa-workflows-reference) for common commerce operations, like creating or updating products, that you can leverage in your syncing logic.
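The compensation feature mentioned above is the one most worth internalizing before writing sync logic. The following is not Medusa's workflows SDK, just a minimal self-contained sketch of the idea: each completed step registers an undo action, and when a later step fails, the completed steps are rolled back in reverse order.

```typescript
// Illustration of the compensation pattern (hypothetical names, not the
// Medusa SDK): each step does work and exposes an undo action.
type Step = {
  name: string
  invoke: () => unknown
  compensate: (result: unknown) => void
}

function runWithCompensation(steps: Step[]): void {
  const completed: { step: Step; result: unknown }[] = []
  for (const step of steps) {
    try {
      completed.push({ step, result: step.invoke() })
    } catch (err) {
      // Undo completed steps in reverse order, then surface the error.
      for (const { step: done, result } of completed.reverse()) {
        done.compensate(result)
      }
      throw err
    }
  }
}
```

Medusa's real workflows add retries, async execution, and persistence on top of this basic shape, which is why the docs recommend wrapping sync logic in a workflow rather than hand-rolling it.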
</Note>
In the scheduled job or subscriber, you retrieve the data to be synced from the third-party service or from Medusa itself. Then, you execute the workflow, passing it the data to be synced.
For example, the following scheduled job fetches products from a third-party service and syncs them to Medusa using a workflow:
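At its core, such a job pages through the service's catalog and hands each page to the sync workflow. Below is a minimal, self-contained sketch of that loop; the `Page` shape is hypothetical, and both `fetchPage` and `sync` are injected (in a real scheduled job, `sync` would execute your workflow and `fetchPage` would call the third-party service).

```typescript
type Product = { id: string; title: string }
type Page = { products: Product[]; nextCursor?: string }

// Pages through a third-party product listing, invoking `sync` per page.
// Dependencies are injected so the loop can run without a live service.
async function syncAllProducts(
  fetchPage: (cursor?: string) => Promise<Page>,
  sync: (products: Product[]) => Promise<void>
): Promise<number> {
  let cursor: string | undefined
  let total = 0
  do {
    const page = await fetchPage(cursor)
    if (page.products.length > 0) {
      await sync(page.products) // e.g. execute the workflow with this batch
    }
    total += page.products.length
    cursor = page.nextCursor
  } while (cursor)
  return total
}
```

Syncing page by page, rather than accumulating everything first, keeps memory bounded and lets each workflow execution stay small.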
First, install the `stream-json` library in your Medusa project:
```bash
npm install stream-json @types/stream-json
```
Then, use it in your scheduled job or subscriber to stream and parse JSON data from the third-party service:
export const streamDataHighlights = [
["19", "nodeStream", "Create a Node.js Readable stream from the response body"],
In the above snippet, you catch stream errors and check for specific error codes.
### Retrieve Only Necessary Fields
A common performance pitfall when syncing data is retrieving more fields than necessary from third-party services or Medusa's [Query](../../fundamentals/module-links/query/page.mdx). This leads to increased data size, slower performance, and higher memory usage.
When retrieving data from third-party services or with Medusa's Query, only request the necessary fields. Then, to efficiently group existing data for updates, use a [Map](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Map) for quick lookups.
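The lookup pattern is simple to sketch: index the existing records by their external identifier once, then resolve each incoming record in constant time instead of scanning an array per item. The record shapes and field names below are illustrative.

```typescript
type ExistingProduct = { id: string; external_id: string; title: string }
type IncomingProduct = { external_id: string; title: string }

// Build the index once (O(n)) instead of an array.find per item (O(n * m)).
function planUpdates(
  existing: ExistingProduct[],
  incoming: IncomingProduct[]
): { toCreate: IncomingProduct[]; toUpdate: ExistingProduct[] } {
  const byExternalId = new Map(
    existing.map((p) => [p.external_id, p] as [string, ExistingProduct])
  )
  const toCreate: IncomingProduct[] = []
  const toUpdate: ExistingProduct[] = []
  for (const item of incoming) {
    const match = byExternalId.get(item.external_id)
    if (match) {
      toUpdate.push({ ...match, title: item.title })
    } else {
      toCreate.push(item)
    }
  }
  return { toCreate, toUpdate }
}
```

Because both the existing and incoming records carry only the fields the sync actually needs, the index stays small even for large catalogs.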
For example, don't retrieve all product fields like this:
In the above snippet, you define two async generators:
1. `streamProductsFromApi`: Yields individual products from the third-party service one at a time.
2. `batchProducts`: Takes an async generator of products and yields them in batches of a specified size.
Then, in your scheduled job, you consume these generators using [for await...of](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/for-await...of) loops to process product batches incrementally.
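The batching generator described above can be sketched in a self-contained form. The product source is faked here; in the real job it would wrap the streamed API response.

```typescript
type Product = { id: string }

// Collects items from an async source and yields them in fixed-size batches.
async function* batchProducts(
  source: AsyncGenerator<Product>,
  batchSize: number
): AsyncGenerator<Product[]> {
  let batch: Product[] = []
  for await (const product of source) {
    batch.push(product)
    if (batch.length === batchSize) {
      yield batch
      batch = []
    }
  }
  if (batch.length > 0) yield batch // flush the final partial batch
}

// Stand-in for streamProductsFromApi: yields products one at a time.
async function* fakeProductStream(count: number): AsyncGenerator<Product> {
  for (let i = 1; i <= count; i++) {
    yield { id: `prod_${i}` }
  }
}
```

Only one batch is materialized at a time, so memory usage is bounded by `batchSize` regardless of how many products the service returns.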
This approach keeps memory usage controlled and prevents the system from being overwhelmed.
Errors can occur during data syncing due to transient network issues, rate limiting, or temporary unavailability of third-party services. To improve reliability, implement retry logic with exponential backoff for transient errors.
For example, implement a custom function that fetches data with retry logic, then use it to fetch data from the third-party service:
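A minimal sketch of such a helper is shown below. The constant names and the jitter-free doubling formula are one common choice, not the only one; production code often adds random jitter and gives up early on non-transient errors.

```typescript
const MAX_RETRIES = 3
const BASE_DELAY_MS = 500

// Exponential backoff: 500ms, 1s, 2s for attempts 0, 1, 2.
function backoffDelay(attempt: number): number {
  return BASE_DELAY_MS * 2 ** attempt
}

const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms))

// Runs `fn`, retrying on failure with exponentially growing waits.
async function fetchWithRetry<T>(fn: () => Promise<T>): Promise<T> {
  let lastError: unknown
  for (let attempt = 0; attempt <= MAX_RETRIES; attempt++) {
    try {
      return await fn()
    } catch (err) {
      lastError = err
      if (attempt < MAX_RETRIES) await sleep(backoffDelay(attempt))
    }
  }
  throw lastError
}
```

Wrapping each third-party request in `fetchWithRetry` lets transient network errors or rate-limit responses resolve themselves without failing the whole sync run.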
export const retryHighlights = [
["1", "MAX_RETRIES", "Maximum number of retry attempts"],