### 2. Compose Business Models
Based on our business logic, create domain-specific data structures through selective schemas and relationship DataLoaders.
We need to extend `tasks`, `assignee`, and `reporter` on `Story`, and extend `user` on `Task`.
Which fields to extend is dynamic and driven by business requirements, but the relationships and loaders are constrained by the definitions from step 1.
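The composition described above can be sketched with plain dataclasses (a stdlib stand-in; in pydantic-resolve these would be pydantic models whose relationship fields are filled by `resolve_` methods backed by the DataLoaders from step 1 — only the field names `tasks`, `assignee`, `reporter`, and `user` come from the text, the rest is illustrative):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class BaseStory:          # stable entity schema from the persistence layer
    id: int
    name: str

@dataclass
class BaseTask:
    id: int
    story_id: int
    name: str

# Business models: extend the base entities with relationship fields.
# Story gains tasks / assignee / reporter, Task gains user.
@dataclass
class Task(BaseTask):
    user: Optional[str] = None

@dataclass
class Story(BaseStory):
    tasks: list[Task] = field(default_factory=list)
    assignee: Optional[str] = None
    reporter: Optional[str] = None
```

The base schemas stay untouched and reusable; each use case composes its own extension on top of them.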
> Once this combination is stable, you can consider optimizing with specialized queries that replace the DataLoader for better performance, e.g. an ORM's join relationship.
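As a sketch of that optimization, a single JOIN can fetch a story together with its tasks in one round trip instead of per-level DataLoader batches. This uses an in-memory `sqlite3` database with hypothetical table and column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE story (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE task (id INTEGER PRIMARY KEY, story_id INTEGER, name TEXT);
    INSERT INTO story VALUES (1, 'story-1');
    INSERT INTO task VALUES (1, 1, 'task-a'), (2, 1, 'task-b');
""")

# One query instead of: load stories, then batch-load tasks per story.
rows = conn.execute("""
    SELECT s.id, s.name, t.name
    FROM story s LEFT JOIN task t ON t.story_id = s.id
    ORDER BY s.id, t.id
""").fetchall()

# Regroup the flat rows back into the nested Story -> tasks shape.
stories: dict[int, dict] = {}
for sid, sname, tname in rows:
    story = stories.setdefault(sid, {"id": sid, "name": sname, "tasks": []})
    if tname is not None:
        story["tasks"].append(tname)
```

The trade-off: the JOIN is faster but couples the query to one specific composition, which is why it is best done only after the business model has stabilized.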
### 3. Implement View-Layer Transformations
Data from the persistence layer rarely meets every requirement; we usually need some extra computed fields, or to adjust the data structure.
A post method can read fields from ancestors, collect fields from descendants, or modify the data fetched by a resolve method. Post methods are executed after all resolve methods have finished, so they can be used to compute extra fields.
```python
class Story(BaseStory):
    tasks: list[Task] = []
    # ... (rest of the example elided in this excerpt)
```
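The two-phase order described above (resolve first, then post) can be illustrated with a minimal stdlib sketch; the class and the `task-…` values are hypothetical, and the real library drives both phases itself during `Resolver().resolve()`:

```python
class Story:
    def __init__(self, id: int):
        self.id = id
        self.tasks: list[str] = []   # filled during the resolve phase
        self.task_count: int = 0     # filled during the post phase

    def resolve_tasks(self) -> list[str]:
        # stand-in for a DataLoader call fetching child tasks
        return [f"task-{self.id}-{i}" for i in range(3)]

    def post_task_count(self) -> int:
        # safe to aggregate: runs only after resolve_tasks has populated self.tasks
        return len(self.tasks)

story = Story(1)
story.tasks = story.resolve_tasks()          # phase 1: resolve (data fetching)
story.task_count = story.post_task_count()   # phase 2: post (derived fields)
```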
### Case 3: Propagate ancestor data through ancestor_context
`__pydantic_resolve_expose__` can expose specific fields from the current node to its descendants.
Alias names must be globally unique within the root node.
Descendant nodes can then read the exposed value with `ancestor_context[alias_name]`.
```python
from pydantic_resolve import Loader

class Story(BaseStory):
    # exposes Story.name to descendants under the alias 'story_name'
    # (the alias, field, and loader names here are illustrative; parts
    # of the original example are elided in this excerpt)
    __pydantic_resolve_expose__ = {'name': 'story_name'}

    def resolve_report_to_user(self, loader=Loader(UserLoader)):
        return loader.load(self.report_to)
```
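Conceptually, expose works like a context dict carried down the tree during traversal: each node listed in `__pydantic_resolve_expose__` publishes a field under its alias, and every descendant (but not the node itself) can read it. A stdlib sketch of that idea, not the library's internals, with hypothetical `Node`/`traverse` names:

```python
def traverse(node, ancestor_context=None):
    # what this node is allowed to read from its ancestors
    node.ancestor_context = dict(ancestor_context or {})
    # add this node's exposed fields for its descendants only
    ctx = dict(node.ancestor_context)
    for field_name, alias in getattr(node, "expose", {}).items():
        ctx[alias] = getattr(node, field_name)
    for child in getattr(node, "children", []):
        traverse(child, ctx)

class Node:
    def __init__(self, name, children=(), expose=None):
        self.name = name
        self.children = list(children)
        self.expose = expose or {}

task = Node("task-a")
story = Node("story-1", children=[task], expose={"name": "story_name"})
traverse(story)
```

This also shows why aliases must be globally unique: all exposed values share one flat dict on the way down, so a duplicate alias from a deeper node would silently shadow an ancestor's value.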
### 4. Execute `Resolver().resolve()`
```python
from pydantic_resolve import Resolver

stories = [Story(**s) for s in await query_stories()]
data = await Resolver().resolve(stories)
```
`query_stories()` returns a list of `BaseStory`. After we transform it into `Story`, the resolve and post fields are initialized with their default values; once `Resolver().resolve()` finishes, all these fields have been resolved and post-processed into what we expect.