
Commit 6d4480e

JianyuZhan authored and sfrothwell committed
Subject: mm: use the light version __mod_zone_page_state in mlocked_vma_newpage()
mlocked_vma_newpage() is called with the pte lock held (a spinlock), which implies preemption is disabled, and the vm stat counter is not modified from interrupt context, so we need not use the irq-safe mod_zone_page_state() here; the light-weight version __mod_zone_page_state() is sufficient.

This patch also documents __mod_zone_page_state() and some of its call sites. The comment above __mod_zone_page_state() is from Hugh Dickins, and acked by Christoph. Most credit goes to Hugh and Christoph for the clarification on the usage of __mod_zone_page_state().

Suggested-by: Andrew Morton <[email protected]>
Acked-by: Hugh Dickins <[email protected]>
Signed-off-by: Jianyu Zhan <[email protected]>
Reviewed-by: Christoph Lameter <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
1 parent 2cd6d63 commit 6d4480e
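
For context: in its generic form, the irq-safe mod_zone_page_state() is simply the light version bracketed by local_irq_save()/local_irq_restore(), which is exactly the overhead this patch avoids. A simplified sketch of that wrapper (the kernel also carries a cmpxchg-based variant behind CONFIG_HAVE_CMPXCHG_LOCAL, so treat this as illustrative rather than the exact source):

	/* Simplified sketch of the generic irq-safe wrapper in mm/vmstat.c. */
	void mod_zone_page_state(struct zone *zone, enum zone_stat_item item,
				 int delta)
	{
		unsigned long flags;

		local_irq_save(flags);		/* the cost the patch avoids */
		__mod_zone_page_state(zone, item, delta);
		local_irq_restore(flags);
	}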

3 files changed: +19 -2 lines changed

mm/internal.h

Lines changed: 6 additions & 1 deletion
@@ -201,7 +201,12 @@ static inline int mlocked_vma_newpage(struct vm_area_struct *vma,
 		return 0;
 
 	if (!TestSetPageMlocked(page)) {
-		mod_zone_page_state(page_zone(page), NR_MLOCK,
+		/*
+		 * We use the irq-unsafe __mod_zone_page_state because
+		 * this counter is not modified from interrupt context, and the
+		 * pte lock is held (a spinlock), which implies preemption disabled.
+		 */
+		__mod_zone_page_state(page_zone(page), NR_MLOCK,
 				    hpage_nr_pages(page));
 		count_vm_event(UNEVICTABLE_PGMLOCKED);
 	}
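
The safety argument above hinges on the pte lock being a spinlock: on mainline kernels, taking a spinlock disables preemption, so the per-CPU counter update cannot be interleaved on the same CPU. An illustrative pte-lock critical section (hypothetical, not code from this commit):

	/* Illustrative only: a typical pte-lock critical section. */
	spinlock_t *ptl;
	pte_t *pte = pte_offset_map_lock(mm, pmd, address, &ptl);
					/* spin_lock => preemption disabled */
	/*
	 * Per-CPU stat updates via __mod_zone_page_state() are safe here
	 * for counters that are never modified from interrupt context.
	 */
	pte_unmap_unlock(pte, ptl);	/* preemption may resume after unlock */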

mm/rmap.c

Lines changed: 10 additions & 0 deletions
@@ -982,6 +982,11 @@ void do_page_add_anon_rmap(struct page *page,
 {
 	int first = atomic_inc_and_test(&page->_mapcount);
 	if (first) {
+		/*
+		 * We use the irq-unsafe __{inc|mod}_zone_page_state because
+		 * these counters are not modified in interrupt context, and
+		 * the pte lock (a spinlock) is held, which implies preemption disabled.
+		 */
 		if (PageTransHuge(page))
 			__inc_zone_page_state(page,
 					      NR_ANON_TRANSPARENT_HUGEPAGES);
@@ -1073,6 +1078,11 @@ void page_remove_rmap(struct page *page)
 	/*
 	 * Hugepages are not counted in NR_ANON_PAGES nor NR_FILE_MAPPED
 	 * and not charged by memcg for now.
+	 *
+	 * We use the irq-unsafe __{inc|mod}_zone_page_state because
+	 * these counters are not modified in interrupt context, and
+	 * the pte lock (a spinlock) is held, which implies preemption
+	 * disabled.
 	 */
 	if (unlikely(PageHuge(page)))
 		goto out;

mm/vmstat.c

Lines changed: 3 additions & 1 deletion
@@ -207,7 +207,9 @@ void set_pgdat_percpu_threshold(pg_data_t *pgdat,
 }
 
 /*
- * For use when we know that interrupts are disabled.
+ * For use when we know that interrupts are disabled,
+ * or when we know that preemption is disabled and that
+ * particular counter cannot be updated from interrupt context.
  */
 void __mod_zone_page_state(struct zone *zone, enum zone_stat_item item,
 			   int delta)
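
Why any exclusion is needed at all: __mod_zone_page_state() performs a non-atomic read-modify-write of a per-CPU differential, folding it into the global zone counter once it crosses a threshold. A simplified sketch of that logic from this era (see mm/vmstat.c for the real thing; details vary by kernel version):

	/* Simplified sketch of __mod_zone_page_state() per-CPU logic. */
	void __mod_zone_page_state(struct zone *zone, enum zone_stat_item item,
				   int delta)
	{
		struct per_cpu_pageset __percpu *pcp = zone->pageset;
		s8 __percpu *p = pcp->vm_stat_diff + item;
		long x = delta + __this_cpu_read(*p);	/* read... */
		long t = __this_cpu_read(pcp->stat_threshold);

		if (unlikely(x > t || x < -t)) {
			/* fold the per-CPU delta into the global counter */
			zone_page_state_add(x, zone, item);
			x = 0;
		}
		__this_cpu_write(*p, x);	/* ...modify-write: must not be interleaved */
	}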
