The previous articles analyzed the memblock allocator, the construction of the kernel page tables, and the setup of the memory management framework. All of those are initialized inside setup_arch() and are x86-specific, tailored to the processor. The initialization that follows in start_kernel(), by contrast, belongs to Linux's generic memory management framework.
build_all_zonelists() initializes the zone lists of each memory node used by the allocator, as preparation for the page allocation algorithm (the buddy system). Its implementation:
【file:/mm/page_alloc.c】
/*
 * Called with zonelists_mutex held always
 * unless system_state == SYSTEM_BOOTING.
 */
void __ref build_all_zonelists(pg_data_t *pgdat, struct zone *zone)
{
    set_zonelist_order();

    if (system_state == SYSTEM_BOOTING) {
        __build_all_zonelists(NULL);
        mminit_verify_zonelist();
        cpuset_init_current_mems_allowed();
    } else {
#ifdef CONFIG_MEMORY_HOTPLUG
        if (zone)
            setup_zone_pageset(zone);
#endif
        /* we have to stop all cpus to guarantee there is no user
           of zonelist */
        stop_machine(__build_all_zonelists, pgdat, NULL);
        /* cpuset refresh routine should be here */
    }
    vm_total_pages = nr_free_pagecache_pages();
    /*
     * Disable grouping by mobility if the number of pages in the
     * system is too low to allow the mechanism to work. It would be
     * more accurate, but expensive to check per-zone. This check is
     * made on memory-hotadd so a system can start with mobility
     * disabled and enable it later
     */
    if (vm_total_pages < (pageblock_nr_pages * MIGRATE_TYPES))
        page_group_by_mobility_disabled = 1;
    else
        page_group_by_mobility_disabled = 0;

    printk("Built %i zonelists in %s order, mobility grouping %s. "
        "Total pages: %ld\n",
            nr_online_nodes,
            zonelist_order_name[current_zonelist_order],
            page_group_by_mobility_disabled ? "off" : "on",
            vm_total_pages);
#ifdef CONFIG_NUMA
    printk("Policy zone: %s\n", zone_names[policy_zone]);
#endif
}
First, look at set_zonelist_order():
【file:/mm/page_alloc.c】
static void set_zonelist_order(void)
{
    current_zonelist_order = ZONELIST_ORDER_ZONE;
}
This sets the ordering of the zonelists. ZONELIST_ORDER_ZONE stands for the order (-zonetype, [node] distance), while the alternative ZONELIST_ORDER_NODE stands for ([node] distance, -zonetype). The distinction only matters in a NUMA environment; on non-NUMA systems the two are equivalent.
If system_state is SYSTEM_BOOTING (the state only switches to SYSTEM_RUNNING after start_kernel() reaches its final function, rest_init()), initialization continues with __build_all_zonelists():
【file:/mm/page_alloc.c】
/* return values int ....just for stop_machine() */
static int __build_all_zonelists(void *data)
{
    int nid;
    int cpu;
    pg_data_t *self = data;

#ifdef CONFIG_NUMA
    memset(node_load, 0, sizeof(node_load));
#endif

    if (self && !node_online(self->node_id)) {
        build_zonelists(self);
        build_zonelist_cache(self);
    }

    for_each_online_node(nid) {
        pg_data_t *pgdat = NODE_DATA(nid);

        build_zonelists(pgdat);
        build_zonelist_cache(pgdat);
    }

    /*
     * Initialize the boot_pagesets that are going to be used
     * for bootstrapping processors. The real pagesets for
     * each zone will be allocated later when the per cpu
     * allocator is available.
     *
     * boot_pagesets are used also for bootstrapping offline
     * cpus if the system is already booted because the pagesets
     * are needed to initialize allocators on a specific cpu too.
     * F.e. the percpu allocator needs the page allocator which
     * needs the percpu allocator in order to allocate its pagesets
     * (a chicken-egg dilemma).
     */
    for_each_possible_cpu(cpu) {
        setup_pageset(&per_cpu(boot_pageset, cpu), 0);

#ifdef CONFIG_HAVE_MEMORYLESS_NODES
        /*
         * We now know the "local memory node" for each node--
         * i.e., the node of the first zone in the generic zonelist.
         * Set up numa_mem percpu variable for on-line cpus. During
         * boot, only the boot cpu should be on-line; we'll init the
         * secondary cpus' numa_mem as they come on-line. During
         * node/memory hotplug, we'll fixup all on-line cpus.
         */
        if (cpu_online(cpu))
            set_cpu_numa_mem(cpu, local_memory_node(cpu_to_node(cpu)));
#endif
    }

    return 0;
}
Inside build_zonelists(), the core work is done by build_zonelists_node(), implemented as:
【file:/mm/page_alloc.c】
/*
 * Builds allocation fallback zone lists.
 *
 * Add all populated zones of a node to the zonelist.
 */
static int build_zonelists_node(pg_data_t *pgdat, struct zonelist *zonelist,
                int nr_zones)
{
    struct zone *zone;
    enum zone_type zone_type = MAX_NR_ZONES;

    do {
        zone_type--;
        zone = pgdat->node_zones + zone_type;
        if (populated_zone(zone)) {
            zoneref_set_zone(zone,
                &zonelist->_zonerefs[nr_zones++]);
            check_highest_zone(zone_type);
        }
    } while (zone_type);

    return nr_zones;
}
populated_zone() checks whether the zone's present_pages member is non-zero. If it is, the zone contains pages, so zoneref_set_zone() records it in the zonelist's _zonerefs array; check_highest_zone() is an empty function when NUMA is disabled. So build_zonelists_node() iterates in the order ZONE_HIGHMEM -> ZONE_NORMAL -> ZONE_DMA and places the populated zones into _zonerefs in that order, i.e. from the cheapest memory to allocate to the most expensive. This is the fallback order used when allocating memory.
Back in build_zonelists(): its code puts the local node's zones first in the fallback order, followed by nodes whose memory is cheaper to allocate from than the local node, and finally nodes whose memory is more expensive than the local node.
With build_zonelists() covered, return to __build_all_zonelists() and look at build_zonelist_cache():
【file:/mm/page_alloc.c】
/* non-NUMA variant of zonelist performance cache - just NULL zlcache_ptr */
static void build_zonelist_cache(pg_data_t *pgdat)
{
    pgdat->node_zonelists[0].zlcache_ptr = NULL;
}
This function is tied to CONFIG_NUMA and sets up the zlcache-related members. Since that option is not enabled here, zlcache_ptr is simply set to NULL.
Since build_all_zonelists() calls __build_all_zonelists() with a NULL argument, the code that actually runs in __build_all_zonelists() is:
    for_each_online_node(nid) {
        pg_data_t *pgdat = NODE_DATA(nid);

        build_zonelists(pgdat);
        build_zonelist_cache(pgdat);
    }
This mainly establishes, for each online memory node, the allocation fallback order of that node's zones.
__build_all_zonelists() then continues with:
    for_each_possible_cpu(cpu) {
        setup_pageset(&per_cpu(boot_pageset, cpu), 0);
#ifdef CONFIG_HAVE_MEMORYLESS_NODES
        if (cpu_online(cpu))
            set_cpu_numa_mem(cpu, local_memory_node(cpu_to_node(cpu)));
#endif
    }
CONFIG_HAVE_MEMORYLESS_NODES is not configured here, so the part worth analyzing is setup_pageset():
【file:/mm/page_alloc.c】
static void setup_pageset(struct per_cpu_pageset *p, unsigned long batch)
{
    pageset_init(p);
    pageset_set_batch(p, batch);
}
The two functions called by setup_pageset() are simple, so let's go through them quickly. First:
【file:/mm/page_alloc.c】
static void pageset_init(struct per_cpu_pageset *p)
{
    struct per_cpu_pages *pcp;
    int migratetype;

    memset(p, 0, sizeof(*p));

    pcp = &p->pcp;
    pcp->count = 0;
    for (migratetype = 0; migratetype < MIGRATE_PCPTYPES; migratetype++)
        INIT_LIST_HEAD(&pcp->lists[migratetype]);
}
pageset_init() initializes the embedded struct per_cpu_pages, and pageset_set_batch() then configures it. pageset_set_batch() is implemented as:
【file:/mm/page_alloc.c】
/*
 * pcp->high and pcp->batch values are related and dependent on one another:
 * ->batch must never be higher then ->high.
 * The following function updates them in a safe manner without read side
 * locking.
 *
 * Any new users of pcp->batch and pcp->high should ensure they can cope with
 * those fields changing asynchronously (acording the the above rule).
 *
 * mutex_is_locked(&pcp_batch_high_lock) required when calling this function
 * outside of boot time (or some other assurance that no concurrent updaters
 * exist).
 */
static void pageset_update(struct per_cpu_pages *pcp, unsigned long high,
        unsigned long batch)
{
    /* start with a fail safe value for batch */
    pcp->batch = 1;
    smp_wmb();

    /* Update high, then batch, in order */
    pcp->high = high;
    smp_wmb();

    pcp->batch = batch;
}

/* a companion to pageset_set_high() */
static void pageset_set_batch(struct per_cpu_pageset *p, unsigned long batch)
{
    pageset_update(&p->pcp, 6 * batch, max(1UL, 1 * batch));
}
The parameter p of setup_pageset() is a pointer to a struct per_cpu_pageset, the per-CPU page cache management structure that each zone in the kernel keeps. This cache holds some pre-allocated pages used to satisfy single-page requests issued by the local CPU. The struct per_cpu_pages member pcp does the actual page management. Originally each management structure held two pcp entries, whose two queues managed cold pages and hot pages separately; in the 3.14.12 kernel analyzed here the two have been merged into one queue managing both, with hot pages at the front and cold pages at the back. That is enough for now; the details will be analyzed later together with the buddy algorithm.
**At this point we can see that __build_all_zonelists() prepares the memory management framework for the page management algorithm that follows: it lays down the allocation fallback order of the zones and initializes hot/cold page management.**
Finally, back to build_all_zonelists(). Since the memory initialization debugging option CONFIG_DEBUG_MEMORY_INIT is not enabled, mminit_verify_zonelist() is an empty function.
With the CONFIG_CPUSETS option enabled, cpuset_init_current_mems_allowed() is implemented as follows:
【file:/kernel/cpuset.c】
void cpuset_init_current_mems_allowed(void)
{
    nodes_setall(current->mems_allowed);
}
Here current is the usual pointer to the current task's struct task_struct; its mems_allowed member is what cpusets use to control which memory nodes the tasks in a cgroup may allocate from. The member is of type nodemask_t:
【file:/include/linux/nodemask.h】
typedef struct { DECLARE_BITMAP(bits, MAX_NUMNODES); } nodemask_t;
The structure is simply a bitmap with one bit per memory node; a bit set to 1 means that node's memory may be used. nodes_setall() sets every bit in the bitmap to 1.
Lastly, look at the implementation of nr_free_pagecache_pages() in build_all_zonelists():
【file:/mm/page_alloc.c】
/**
 * nr_free_pagecache_pages - count number of pages beyond high watermark
 *
 * nr_free_pagecache_pages() counts the number of pages which are beyond the
 * high watermark within all zones.
 */
unsigned long nr_free_pagecache_pages(void)
{
    return nr_free_zone_pages(gfp_zone(GFP_HIGHUSER_MOVABLE));
}
The nr_free_zone_pages() it calls is implemented as:
【file:/mm/page_alloc.c】
/**
 * nr_free_zone_pages - count number of pages beyond high watermark
 * @offset: The zone index of the highest zone
 *
 * nr_free_zone_pages() counts the number of counts pages which are beyond the
 * high watermark within all zones at or below a given zone index. For each
 * zone, the number of pages is calculated as:
 *     managed_pages - high_pages
 */
static unsigned long nr_free_zone_pages(int offset)
{
    struct zoneref *z;
    struct zone *zone;

    /* Just pick one node, since fallback list is circular */
    unsigned long sum = 0;

    struct zonelist *zonelist = node_zonelist(numa_node_id(), GFP_KERNEL);

    for_each_zone_zonelist(zone, z, zonelist, offset) {
        unsigned long size = zone->managed_pages;
        unsigned long high = high_wmark_pages(zone);
        if (size > high)
            sum += size - high;
    }

    return sum;
}
As the code shows, nr_free_zone_pages() walks all zones and sums, for each zone, the pages above the high watermark; in effect it counts how many pages across all zones are available for allocation.
After that, build_all_zonelists() checks the number of page frames in the system to decide whether to enable mobility grouping, a mechanism that reduces fragmentation when allocating large blocks of memory. It is normally enabled only when memory is large enough; otherwise its bookkeeping overhead would outweigh the benefit and hurt performance. pageblock_nr_pages is the number of pages contained in a page block of the highest order handled by the buddy system.
With this, the generic memory management framework is essentially ready.