
Commit 3d494aa

committed
update doc (wip)
1 parent b89a270 commit 3d494aa

File tree

3 files changed

+114
-140
lines changed


README.md

Lines changed: 1 addition & 1 deletion
Original file line number · Diff line number · Diff line change
@@ -209,7 +209,7 @@ class Story(BaseModel):
209209
210210
### 3. Implement View-Layer Transformations
211211

212-
Dataset from data-persistent layer can not meet all the requirement, we always need some extra computed fields or adjust the data structure.
212+
The dataset from the persistence layer cannot meet every requirement; we often need extra computed fields or adjustments to the data structure.
213213

214214
The post method can read fields from ancestor nodes, collect fields from descendants, or modify the data fetched by the resolve method.
215215

docs/introduction.md

Lines changed: 111 additions & 137 deletions
@@ -1,50 +1,101 @@
1-
[![pypi](https://img.shields.io/pypi/v/pydantic-resolve.svg)](https://pypi.python.org/pypi/pydantic-resolve)
2-
[![PyPI Downloads](https://static.pepy.tech/badge/pydantic-resolve/month)](https://pepy.tech/projects/pydantic-resolve)
3-
![Python Versions](https://img.shields.io/pypi/pyversions/pydantic-resolve)
4-
[![CI](https://github.com/allmonday/pydantic_resolve/actions/workflows/ci.yml/badge.svg)](https://github.com/allmonday/pydantic_resolve/actions/workflows/ci.yml)
1+
pydantic-resolve is a general-purpose data composition tool that supports multi-level data fetching, node-level post-processing, and cross-node data transmission.
52

6-
pydantic-resolve is a sophisticated framework for composing complex data structures with an intuitive, resolver-based architecture that eliminates the N+1 query problem.
3+
It organizes and manages data in a declarative way, greatly improving code readability and maintainability.
4+
5+
In the example below, Story and Task inherit from BaseStory and BaseTask to reuse and extend the required fields: a tasks field is added to Story, and a user field is added to each Task.
76

87
```python
8+
from typing import Optional
from pydantic_resolve import Resolver, Loader
9+
from biz_models import BaseTask, BaseStory, BaseUser
10+
from biz_services import UserLoader, StoryTaskLoader
11+
912
class Task(BaseTask):
1013
user: Optional[BaseUser] = None
11-
def resolve_user(self, loader=LoaderDepend(UserLoader)):
14+
def resolve_user(self, loader=Loader(UserLoader)):
1215
return loader.load(self.assignee_id) if self.assignee_id else None
16+
17+
class Story(BaseStory):
18+
tasks: list[Task] = []
19+
def resolve_tasks(self, loader=Loader(StoryTaskLoader)):
20+
# this loader returns BaseTask,
21+
# Task inherits from BaseTask, so it can be initialized from the loaded data and then fetch its user.
22+
return loader.load(self.id)
23+
24+
stories = [Story(**s) for s in await query_stories()]
25+
data = await Resolver().resolve(stories)
1326
```
1427

15-
If you have experience with GraphQL, this article provides comprehensive insights: [Resolver Pattern: A Better Alternative to GraphQL in BFF.](https://github.com/allmonday/resolver-vs-graphql/blob/master/README-en.md)
28+
Given initial BaseStory data:
1629

17-
The framework enables progressive data enrichment through incremental field resolution, allowing seamless API evolution from flat to hierarchical data structures.
30+
```json
31+
[
32+
{ "id": 1, "name": "story - 1" },
33+
{ "id": 2, "name": "story - 2" }
34+
]
35+
```
1836

19-
Extend your data models by implementing `resolve_field` methods for data fetching and `post_field` methods for transformations, enabling node creation, in-place modifications, or cross-node data aggregation.
37+
pydantic-resolve can expand it into the complex structure you declare:
38+
39+
```json
40+
[
41+
{
42+
"id": 1,
43+
"name": "story - 1",
44+
"tasks": [
45+
{
46+
"id": 1,
47+
"name": "design",
48+
"user": {
49+
"id": 1,
50+
"name": "tangkikodo"
51+
}
52+
}
53+
]
54+
},
55+
{
56+
"id": 2,
57+
"name": "story - 2",
58+
"tasks": [
59+
{
60+
"id": 2,
61+
"name": "add ut",
62+
"user": {
63+
"id": 2,
64+
"name": "john"
65+
}
66+
}
67+
]
68+
}
69+
]
70+
```
2071

21-
Seamlessly integrates with modern Python web frameworks including FastAPI, Litestar, and Django-ninja.
72+
If you have GraphQL experience, this article provides a comprehensive discussion and comparison: [Resolver Pattern: A Better Alternative to GraphQL in BFF](https://github.com/allmonday/resolver-vs-graphql/blob/master/README-en.md)
2273

23-
> dataclass support is also available
74+
Unlike ORM or GraphQL data-fetching solutions, pydantic-resolve's post-processing capability offers a powerful way to build business data: it avoids repetitive loops and temporary variables in business code, simplifying logic and improving maintainability.
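As a rough plain-Python illustration of the loops this replaces (the data and names here are hypothetical, not pydantic-resolve API): computing a per-story total imperatively needs nested loops and temporaries, while the hook style lets each node declare how to derive its own field.

```python
# Hypothetical nested data, as it might arrive from the persistence layer.
stories = [
    {"name": "story - 1", "tasks": [{"estimate": 3}, {"estimate": 5}]},
    {"name": "story - 2", "tasks": [{"estimate": 2}]},
]

# Imperative style: loops and temporary variables pile up in business code.
for story in stories:
    total = 0
    for task in story["tasks"]:
        total += task["estimate"]
    story["total_estimate"] = total

# Hook style (the idea a post method generalizes): each node declares how
# to derive its own field, and a generic walker applies it per node.
def post_total_estimate(story):
    return sum(t["estimate"] for t in story["tasks"])

hook_totals = [post_total_estimate(s) for s in stories]
```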
2475

2576
## Installation
2677

2778
```
2879
pip install pydantic-resolve
2980
```
3081

31-
Starting from pydantic-resolve v1.11.0, both pydantic v1 and v2 are supported.
82+
From v1.11.0, pydantic-resolve supports both pydantic v1 and v2.
3283

3384
## Documentation
3485

35-
- **Documentation**: [https://allmonday.github.io/pydantic-resolve/v2/introduction/](https://allmonday.github.io/pydantic-resolve/v2/introduction/)
36-
- **Demo Repository**: [https://github.com/allmonday/pydantic-resolve-demo](https://github.com/allmonday/pydantic-resolve-demo)
37-
- **Composition-Oriented Pattern**: [https://github.com/allmonday/composition-oriented-development-pattern](https://github.com/allmonday/composition-oriented-development-pattern)
86+
- **Docs**: [https://allmonday.github.io/pydantic-resolve/v2/introduction/](https://allmonday.github.io/pydantic-resolve/v2/introduction/)
87+
- **Demo repository**: [https://github.com/allmonday/pydantic-resolve-demo](https://github.com/allmonday/pydantic-resolve-demo)
88+
- **Composition-oriented development pattern**: [https://github.com/allmonday/composition-oriented-development-pattern](https://github.com/allmonday/composition-oriented-development-pattern)
3889

39-
## Architecture Overview
90+
## Three Steps to Build Complex Data
4091

41-
Building complex data structures requires only 3 systematic steps:
92+
Using Story and Task from Agile as an example:
4293

4394
### 1. Define Domain Models
4495

45-
Establish entity relationships as foundational data models (stable, serves as architectural blueprint)
96+
Establish entity relationships as the base data model (for persistence layer; these relationships are stable and rarely change).
4697

47-
<img width="639" alt="image" src="https://github.com/user-attachments/assets/2656f72e-1af5-467a-96f9-cab95760b720" />
98+
<img width="630px" alt="image" src="https://github.com/user-attachments/assets/2656f72e-1af5-467a-96f9-cab95760b720" />
4899

49100
```python
50101
from pydantic import BaseModel
@@ -84,11 +135,15 @@ class UserLoader(DataLoader):
84135
return build_object(users, keys, lambda x: x.id)
85136
```
86137

87-
DataLoader implementations support flexible data sources, from database queries to microservice RPC calls.
138+
DataLoader implementations support various data sources, from database queries to microservice RPC calls.
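The batching idea behind DataLoader can be sketched with the standard library alone (a conceptual sketch, not pydantic-resolve's actual implementation): `load()` calls issued in the same event-loop tick are collected and answered by one batch query, which is what removes the N+1 pattern.

```python
import asyncio

class BatchLoader:
    """Minimal DataLoader-style batcher: load() calls made in the same
    event-loop tick are grouped into a single batch_fn invocation."""
    def __init__(self, batch_fn):
        self.batch_fn = batch_fn  # async: list of keys -> list of values
        self.queue = []           # (key, future) pairs awaiting dispatch
        self.scheduled = False

    def load(self, key):
        loop = asyncio.get_running_loop()
        fut = loop.create_future()
        self.queue.append((key, fut))
        if not self.scheduled:    # schedule exactly one dispatch for this tick
            self.scheduled = True
            loop.call_soon(lambda: asyncio.ensure_future(self._dispatch()))
        return fut

    async def _dispatch(self):
        pending, self.queue = self.queue, []
        self.scheduled = False
        values = await self.batch_fn([key for key, _ in pending])
        for (_, fut), value in zip(pending, values):
            fut.set_result(value)

batch_calls = []  # record how many backend queries actually happen

async def batch_get_users(ids):  # hypothetical batch data source
    batch_calls.append(list(ids))
    return [{"id": i, "name": f"user-{i}"} for i in ids]

async def main():
    loader = BatchLoader(batch_get_users)
    # three sibling load() calls are answered by a single backend query
    return await asyncio.gather(loader.load(1), loader.load(2), loader.load(3))

users = asyncio.run(main())
```

Real implementations also cache per key and handle errors, but the collect-then-dispatch cycle above is the core mechanism.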
88139

89-
### 2. Compose Business Models
140+
### 2. Compose Models for Business Needs
90141

91-
Create domain-specific data structures through selective composition and relationship mapping (stable, reusable across use cases)
142+
For example, you may need to build business models such as Story (with tasks, assignee, and reporter) and Task (with user).
143+
144+
You can inherit from the base models and extend fields as needed. This composition is flexible and can be modified dynamically, but the dependencies are constrained by the relationships defined in step 1.
145+
146+
You can treat it as a subset of the ER model.
92147

93148
<img width="709" alt="image" src="https://github.com/user-attachments/assets/ffc74e60-0670-475c-85ab-cb0d03460813" />
94149

@@ -114,7 +169,7 @@ class Story(BaseStory):
114169
return loader.load(self.report_to) if self.report_to else None
115170
```
116171

117-
Utilize `ensure_subset` decorator for field validation and consistency enforcement:
172+
Use the `ensure_subset` decorator for field validation and consistency enforcement:
118173

119174
```python
120175
@ensure_subset(BaseStory)
@@ -126,26 +181,29 @@ class Story(BaseModel):
126181
tasks: list[BaseTask] = []
127182
def resolve_tasks(self, loader=LoaderDepend(StoryTaskLoader)):
128183
return loader.load(self.id)
129-
130184
```
131185

132-
> Once business models are validated, consider optimizing with specialized queries to replace DataLoader for enhanced performance.
186+
> Once the stability and necessity of the business model are validated, you can later replace DataLoader with specialized queries for performance, such as ORM relationships with joins.
187+
188+
### 3. Implement View Layer Transformation
189+
190+
In real business scenarios, data from the persistence layer often needs extra computed fields, such as totals or filters.
133191

134-
### 3. Implement View-Layer Transformations
192+
pydantic-resolve's post-processing capability is ideal for these scenarios.
135193

136-
Apply presentation-specific modifications and data aggregations (flexible, context-dependent)
194+
The `post_field` method allows data to be passed across nodes and modified after fetching.
137195

138-
Leverage post_field methods for ancestor data access, node transfers, and in-place transformations.
196+
#### Pattern 1: Collect Objects Across Layers
139197

140-
#### Pattern 1: Aggregate Related Entities
198+
<img width="630px" alt="image" src="https://github.com/user-attachments/assets/2e3b1345-9e5e-489b-a81d-dc220b9d6334" />
141199

142-
<img width="701" alt="image" src="https://github.com/user-attachments/assets/2e3b1345-9e5e-489b-a81d-dc220b9d6334" />
200+
Use `__pydantic_resolve_collect__` to send fields from the current object up to ancestor nodes that declare a collector.
143201

144202
```python
145203
from pydantic_resolve import LoaderDepend, Collector
146204

147205
class Task(BaseTask):
148-
__pydantic_resolve_collect__ = {'user': 'related_users'} # Propagate user to collector: 'related_users'
206+
__pydantic_resolve_collect__ = {'user': 'related_users'} # send user to 'related_users' collector
149207

150208
user: Optional[BaseUser] = None
151209
def resolve_user(self, loader=LoaderDepend(UserLoader)):
@@ -170,10 +228,12 @@ class Story(BaseStory):
170228
return collector.values()
171229
```
172230

173-
#### Pattern 2: Compute Derived Metrics
231+
#### Pattern 2: Compute New Fields
174232

175233
<img width="687" alt="image" src="https://github.com/user-attachments/assets/fd5897d6-1c6a-49ec-aab0-495070054b83" />
176234

235+
A post method is triggered after the current node's resolve methods and all resolve and post methods of its descendant nodes have executed, so every field is ready for post-processing, such as calculating the total estimate of all tasks.
236+
177237
```python
178238
class Story(BaseStory):
179239
tasks: list[Task] = []
@@ -194,7 +254,9 @@ class Story(BaseStory):
194254
return sum(task.estimate for task in self.tasks)
195255
```
196256

197-
### Pattern 3: Propagate Ancestor Context
257+
### Pattern 3: Access Ancestor Node Data
258+
259+
Use `__pydantic_resolve_expose__` to expose fields from the current object to all descendants, which can access them via `ancestor_context['alias_name']`.
198260

199261
```python
200262
from pydantic_resolve import LoaderDepend
@@ -205,7 +267,7 @@ class Task(BaseTask):
205267
return loader.load(self.assignee_id)
206268

207269
# ---------- Post-processing ------------
208-
def post_name(self, ancestor_context): # Access story.name from parent context
270+
def post_name(self, ancestor_context): # access story.name from parent context
209271
return f"{ancestor_context['story_name']} - {self.name}"
210272

211273
class Story(BaseStory):
@@ -224,126 +286,38 @@ class Story(BaseStory):
224286
return loader.load(self.report_to)
225287
```
226288

227-
### 4. Execute Resolution Pipeline
289+
### 4. Run the Resolver
228290

229291
```python
230292
from pydantic_resolve import Resolver
231293

232-
stories: List[Story] = await query_stories()
233-
await Resolver().resolve(stories)
294+
stories = [Story(**s) for s in await query_stories()]
295+
data = await Resolver().resolve(stories)
234296
```
235297

236-
Resolution complete!
298+
The `query_stories()` method returns a list of BaseStory-shaped records, which are converted into Story objects. A Resolver instance then traverses them, automatically fetching all descendant nodes and applying post-processing.
237299

238300
## Technical Architecture
239301

240-
The framework significantly reduces complexity in data composition by maintaining alignment with entity-relationship models, resulting in enhanced maintainability.
241-
242-
> Utilizing an ER-oriented modeling approach delivers 3-5x development efficiency gains and 50%+ code reduction.
243-
244-
Leveraging pydantic's capabilities, it enables GraphQL-like hierarchical data structures while providing flexible business logic integration during data resolution.
245-
246-
Seamlessly integrates with FastAPI to construct frontend-optimized data structures and generate TypeScript SDKs for type-safe client integration.
302+
pydantic-resolve maintains consistency with the entity relationship model, reducing data composition complexity and enhancing maintainability. Using ER-based modeling can improve development efficiency by 3-5x and reduce code by over 50%.
247303

248-
The core architecture provides `resolve` and `post` method hooks for pydantic and dataclass objects:
304+
pydantic-resolve provides `resolve` and `post` method hooks for pydantic and dataclass objects:
249305

250-
- `resolve`: Handles data fetching operations
251-
- `post`: Executes post-processing transformations
306+
- `resolve`: handles data fetching
307+
- `post`: performs post-processing transformations
252308

253-
This implements a recursive resolution pipeline that completes when all descendant nodes are processed.
309+
It implements a recursive resolution process: each node executes its resolve, post, and post_default_handler methods exactly once, and a parent node finishes only after all of its descendant nodes have completed.
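The order can be sketched as a plain recursive walk (conceptual only; the real engine is async and hook-driven): resolve fires on the way down, post fires on the way up once a node's subtree is complete.

```python
order = []

def traverse(name, children):
    order.append(f"resolve:{name}")   # data fetching, on the way down
    for child_name, grandchildren in children:
        traverse(child_name, grandchildren)
    order.append(f"post:{name}")      # post-processing, after the subtree is done

# a hypothetical Sprint -> Story -> Task tree
traverse("sprint", [("story", [("task", [])])])
```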
254310

255311
![](images/life-cycle.png)
256312

257-
Consider the Sprint, Story, and Task relationship hierarchy:
313+
For example, in a Sprint, Story, and Task hierarchy:
258314

259-
![](images/real-sample.png)
260-
261-
Upon object instantiation with defined methods, pydantic-resolve traverses the data graph, executes resolution methods, and produces the complete data structure.
262-
263-
DataLoader integration eliminates N+1 query problems inherent in multi-level data fetching, optimizing performance characteristics.
264-
265-
DataLoader architecture enables modular class composition and reusability across different contexts.
266-
267-
Additionally, the framework provides expose and collector mechanisms for sophisticated cross-layer data processing patterns.
268-
269-
## Testing and Coverage
270-
271-
```shell
272-
tox
273-
```
274-
275-
```shell
276-
tox -e coverage
277-
python -m http.server
278-
```
315+
Sprint's resolve_stories runs first, followed by Story's resolve_tasks. Task, as a leaf node, finishes immediately; Story's post_task_time and post_done_task then execute, ending Story's traversal. Finally, Sprint's post_task_time and post_total_done_task_time are triggered.
279316

280-
Current test coverage: 97%
317+
When a post method is triggered, all related descendant nodes have already been processed, so refactoring resolve methods does not affect post logic; for example, you can remove resolve methods and provide the related data directly at the parent node, via ORM relationships or by fetching a complete tree from a NoSQL store.
281318

282-
## Benchmark
319+
This achieves complete decoupling of resolve and post responsibilities. For example, when handling data from GraphQL, since related data is ready, you can skip resolve methods and use post methods for various post-processing needs.
283320

284-
`ab -c 50 -n 1000` based on FastAPI.
285-
286-
strawberry-graphql
287-
288-
```
289-
Server Software: uvicorn
290-
Server Hostname: localhost
291-
Server Port: 8000
292-
293-
Document Path: /graphql
294-
Document Length: 5303 bytes
295-
296-
Concurrency Level: 50
297-
Time taken for tests: 3.630 seconds
298-
Complete requests: 1000
299-
Failed requests: 0
300-
Total transferred: 5430000 bytes
301-
Total body sent: 395000
302-
HTML transferred: 5303000 bytes
303-
Requests per second: 275.49 [#/sec] (mean)
304-
Time per request: 181.498 [ms] (mean)
305-
Time per request: 3.630 [ms] (mean, across all concurrent requests)
306-
Transfer rate: 1460.82 [Kbytes/sec] received
307-
106.27 kb/s sent
308-
1567.09 kb/s total
309-
310-
Connection Times (ms)
311-
min mean[+/-sd] median max
312-
Connect: 0 0 0.2 0 1
313-
Processing: 31 178 14.3 178 272
314-
Waiting: 30 176 14.3 176 270
315-
Total: 31 178 14.4 179 273
316-
```
317-
318-
pydantic-resolve
319-
320-
```
321-
Server Software: uvicorn
322-
Server Hostname: localhost
323-
Server Port: 8000
324-
325-
Document Path: /sprints
326-
Document Length: 4621 bytes
327-
328-
Concurrency Level: 50
329-
Time taken for tests: 2.194 seconds
330-
Complete requests: 1000
331-
Failed requests: 0
332-
Total transferred: 4748000 bytes
333-
HTML transferred: 4621000 bytes
334-
Requests per second: 455.79 [#/sec] (mean)
335-
Time per request: 109.700 [ms] (mean)
336-
Time per request: 2.194 [ms] (mean, across all concurrent requests)
337-
Transfer rate: 2113.36 [Kbytes/sec] received
338-
339-
Connection Times (ms)
340-
min mean[+/-sd] median max
341-
Connect: 0 0 0.3 0 1
342-
Processing: 30 107 10.9 106 138
343-
Waiting: 28 105 10.7 104 138
344-
Total: 30 107 11.0 106 140
345-
```
346-
347-
## Community
321+
![](images/real-sample.png)
348322

349-
[Discord](https://discord.com/channels/1197929379951558797/1197929379951558800)
323+
> DataLoader eliminates the N+1 query problem common in multi-level data fetching.

docs/use_case.md

Lines changed: 2 additions & 2 deletions
@@ -2,9 +2,9 @@
22

33
Practical scenarios demonstrating pydantic-resolve's capabilities in data composition and resolution.
44

5-
## Simple Data Aggregation
5+
## Simple Data Composition
66

7-
Aggregate data from multiple data sources with automatic concurrency for same-level requests.
7+
Compose data from multiple data sources with automatic concurrency for same-level requests.
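The "automatic concurrency for same-level requests" behaves roughly like awaiting sibling coroutines together; a stdlib sketch of the idea, with hypothetical fetchers standing in for real queries:

```python
import asyncio
import time

# Hypothetical same-level data sources (stand-ins for real queries or RPCs).
async def fetch_profile():
    await asyncio.sleep(0.1)
    return {"name": "alice"}

async def fetch_orders():
    await asyncio.sleep(0.1)
    return [{"id": 1}]

async def main():
    start = time.perf_counter()
    # sibling fields resolve concurrently, not one after another
    profile, orders = await asyncio.gather(fetch_profile(), fetch_orders())
    return profile, orders, time.perf_counter() - start

profile, orders, elapsed = asyncio.run(main())
```

Run sequentially, the two awaits would take about 0.2 s; gathered, the total stays close to the slower single call.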
88

99
```python
1010
from pydantic import BaseModel
