pydantic-resolve is a general-purpose data composition tool that supports multi-level data fetching, node-level post-processing, and cross-node data transmission.
It organizes and manages data in a declarative way, greatly improving code readability and maintainability.
In the example, you inherit BaseStory and BaseTask to reuse and extend required fields, add tasks to BaseStory, and add a user field to each task.
```python
from typing import Optional

from pydantic_resolve import Resolver, LoaderDepend
from biz_models import BaseTask, BaseStory, BaseUser
from biz_services import UserLoader, StoryTaskLoader

# Task inherits from BaseTask so it can be initialized from it, then fetch the user.
class Task(BaseTask):
    user: Optional[BaseUser] = None
    def resolve_user(self, loader=LoaderDepend(UserLoader)):
        return loader.load(self.assignee_id)

class Story(BaseStory):
    tasks: list[Task] = []
    def resolve_tasks(self, loader=LoaderDepend(StoryTaskLoader)):
        return loader.load(self.id)

stories = [Story(**s) for s in await query_stories()]
data = await Resolver().resolve(stories)
```
Given initial BaseStory data:
```json
[
  { "id": 1, "name": "story - 1" },
  { "id": 2, "name": "story - 2" }
]
```
pydantic-resolve can expand it into the complex structure you declare:
```json
[
  {
    "id": 1,
    "name": "story - 1",
    "tasks": [
      {
        "id": 1,
        "name": "design",
        "user": {
          "id": 1,
          "name": "tangkikodo"
        }
      }
    ]
  },
  {
    "id": 2,
    "name": "story - 2",
    "tasks": [
      {
        "id": 2,
        "name": "add ut",
        "user": {
          "id": 2,
          "name": "john"
        }
      }
    ]
  }
]
```
If you have GraphQL experience, this article provides a comprehensive discussion and comparison: [Resolver Pattern: A Better Alternative to GraphQL in BFF](https://github.com/allmonday/resolver-vs-graphql/blob/master/README-en.md)
Unlike ORM or GraphQL data-fetching solutions, pydantic-resolve's post-processing capability offers a dedicated stage for building business data, avoiding repetitive loops and temporary variables in business code, simplifying logic, and improving maintainability.
## Installation
```
pip install pydantic-resolve
```
From v1.11.0, pydantic-resolve supports both pydantic v1 and v2.
- **Composition-oriented development pattern**: [https://github.com/allmonday/composition-oriented-development-pattern](https://github.com/allmonday/composition-oriented-development-pattern)
## Three Steps to Build Complex Data
Using Story and Task from Agile as an example:
### 1. Define Domain Models
Establish entity relationships as the base data model (for persistence layer; these relationships are stable and rarely change).
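As a sketch, the base models for this step might look like the following plain pydantic classes (the exact field names, such as `assignee_id` and `estimate`, are assumptions chosen to match the examples in this README):

```python
from typing import Optional
from pydantic import BaseModel

# Base models mirror the persistence-layer entities; they stay small and stable.
class BaseUser(BaseModel):
    id: int
    name: str

class BaseTask(BaseModel):
    id: int
    story_id: int                      # each task belongs to one story
    assignee_id: Optional[int] = None  # resolved to a user later
    estimate: int = 0
    name: str

class BaseStory(BaseModel):
    id: int
    name: str
```

Keeping these classes free of business-specific fields is what makes them reusable across many compositions later.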
```python
from aiodataloader import DataLoader
from pydantic_resolve import build_object

class UserLoader(DataLoader):
    async def batch_load_fn(self, keys):
        # batch_get_users_by_ids stands for any bulk query against your source
        users = await batch_get_users_by_ids(keys)
        return build_object(users, keys, lambda x: x.id)
```
DataLoader implementations support various data sources, from database queries to microservice RPC calls.
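Whatever the source, the batching contract is the same: receive all keys requested at one level, fetch them in a single round trip, and return results aligned to key order. A toy, library-free sketch of that contract (all names here are illustrative, not the library's API):

```python
import asyncio

# In-memory stand-in for any data source (DB table, RPC endpoint, ...).
FAKE_USERS = {1: {"id": 1, "name": "tangkikodo"}, 2: {"id": 2, "name": "john"}}

async def batch_get_users_by_ids(ids):
    # One query for the whole batch instead of one per task: no N+1.
    return [FAKE_USERS[i] for i in ids if i in FAKE_USERS]

def align_to_keys(items, keys, key_fn):
    # Map results back to the requested key order, None for missing keys.
    lookup = {key_fn(item): item for item in items}
    return [lookup.get(k) for k in keys]

users = asyncio.run(batch_get_users_by_ids([2, 1]))
print(align_to_keys(users, [2, 1], lambda u: u["id"]))
```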
### 2. Compose Models for Business Needs
For example, you may need to build business models such as Story (with tasks, assignee, and reporter) and Task (with user).
You can inherit base models and extend fields as needed. This composition is flexible and can change per use case, but the relationships it uses are constrained by the entity definitions from step 1.
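A minimal sketch of such a composition using plain pydantic (the base classes are redefined minimally here so the snippet is self-contained; in real code the `tasks` and `user` fields are filled by resolve methods with DataLoaders rather than passed in):

```python
from typing import List, Optional
from pydantic import BaseModel

# Minimal stand-ins for the step-1 base models.
class BaseUser(BaseModel):
    id: int
    name: str

class BaseTask(BaseModel):
    id: int
    name: str

class BaseStory(BaseModel):
    id: int
    name: str

# Composition: inherit a base model and declare only the related nodes you need.
class Task(BaseTask):
    user: Optional[BaseUser] = None

class Story(BaseStory):
    tasks: List[Task] = []

story = Story(id=1, name="story - 1", tasks=[Task(id=1, name="design")])
print(story.tasks[0].name)  # design
```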
> Once the stability and necessity of the business model are validated, you can later replace DataLoader with specialized queries for performance, such as ORM relationships with joins.
### 3. Implement View Layer Transformation
In real business scenarios, data from the persistence layer often needs extra computed fields, such as totals or filters.
pydantic-resolve's post-processing capability is ideal for these scenarios.
The `post_field` method allows data to be passed across nodes and modified after fetching.
A post method is triggered after the current node's resolve methods and all of its descendants' resolve and post methods have executed, so every field is ready for post-processing, such as calculating the total estimate of all tasks.
```python
class Story(BaseStory):
    tasks: list[Task] = []
    def resolve_tasks(self, loader=LoaderDepend(StoryTaskLoader)):
        return loader.load(self.id)

    # runs only after every task has been resolved
    total_estimate: int = 0
    def post_total_estimate(self):
        return sum(task.estimate for task in self.tasks)
```
### Pattern 3: Access Ancestor Node Data
Use `__pydantic_resolve_expose__` to expose fields from the current object to all descendants, which can access them via `ancestor_context['alias_name']`.
```python
from typing import Optional
from pydantic_resolve import Resolver, LoaderDepend

class Task(BaseTask):
    user: Optional[BaseUser] = None
    def resolve_user(self, loader=LoaderDepend(UserLoader)):
        return loader.load(self.assignee_id)

    # ---------- Post-processing ------------
    def post_name(self, ancestor_context):  # access story.name from parent context
        return f"{ancestor_context['story_name']} - {self.name}"

class Story(BaseStory):
    __pydantic_resolve_expose__ = {'name': 'story_name'}  # expose to descendants
    tasks: list[Task] = []
    def resolve_tasks(self, loader=LoaderDepend(StoryTaskLoader)):
        return loader.load(self.id)

stories = [Story(**s) for s in await query_stories()]
data = await Resolver().resolve(stories)
```
`query_stories()` returns a list of BaseStory-shaped records, which are converted into Story objects. A Resolver instance then traverses them, fills in all descendant nodes, and applies the post-processing automatically.
## Technical Architecture
pydantic-resolve maintains consistency with the entity relationship model, reducing data composition complexity and enhancing maintainability. Using ER-based modeling can improve development efficiency by 3-5x and reduce code by over 50%.
pydantic-resolve provides `resolve` and `post` method hooks for pydantic and dataclass objects:
- `resolve`: handles data fetching
- `post`: performs post-processing transformations
It implements a recursive resolution process: each node runs its resolve methods, and once all of its descendants have finished, its post methods and post_default_handler run. Only then is the node considered complete from its parent's perspective.

For example, in a Sprint, Story, and Task hierarchy:
Sprint's resolve_stories is executed first, then Story's resolve_tasks; Task, as a leaf node, finishes immediately. Story's post_task_time and post_done_task then run, ending Story's traversal, after which Sprint's post_task_time and post_total_done_task_time are triggered.
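That ordering can be illustrated with a toy recursion (not the library's actual implementation): resolve hooks run top-down, and post hooks run bottom-up once every descendant is finished.

```python
log = []

def traverse(kind, children):
    if children:
        log.append(f"{kind}.resolve")  # fetch this node's children
    for child_kind, grandchildren in children:
        traverse(child_kind, grandchildren)
    log.append(f"{kind}.post")  # all descendants are complete here

traverse("Sprint", [("Story", [("Task", [])])])
print(log)
# ['Sprint.resolve', 'Story.resolve', 'Task.post', 'Story.post', 'Sprint.post']
```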
When the post method is triggered, all related descendant nodes are already processed, so refactoring resolve methods does not affect post logic (e.g., removing resolve methods and providing related data directly at the parent node, such as ORM relationships or fetching complete tree data from NoSQL).
This achieves complete decoupling of resolve and post responsibilities. For example, when handling data from GraphQL, since related data is ready, you can skip resolve methods and use post methods for various post-processing needs.