Hi,
I have a very basic understanding that for memory management the kernel uses C functions like malloc(), realloc(), etc., but:
1. I don't know which data structures it uses.
2. Is there any memory pool from which the kernel takes and allocates memory?
Can someone please share some links, from both a coding and a theory perspective?
Don't confuse kernel memory management with C/C++ run-time memory management.
The loader is responsible for allocating space for a program's heap when the program is launched. Pages for the heap come from the OS memory manager. The reference dutch gave refers to memory allocation calls for the Linux OS; the calls for Windows are different.
The C/C++ run-time is responsible for managing the heap (malloc, new). Managing the heap is very different from managing OS memory. The heap is usually managed using boundary tags. I made a post about boundary tags here: http://www.cplusplus.com/forum/general/248623/
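To make the idea concrete, here's a minimal sketch of what a boundary tag could look like. This is only an illustration of the technique, not how any real allocator (glibc malloc, dlmalloc, etc.) actually lays out its heap:
[code]
/* Minimal sketch of a boundary tag -- illustration only, real allocators
   (glibc malloc, dlmalloc, ...) are far more elaborate than this. */
#include <stdio.h>
#include <stdint.h>

typedef struct {
    uint32_t size;     /* size of the block in bytes, including both tags */
    uint32_t in_use;   /* 1 = allocated, 0 = free                         */
} tag_t;

/* Each block looks like [ tag | payload ... | tag ].  Duplicating the tag
   at the end of the block lets free() look at the tag sitting just below
   the block being freed (the previous block's trailing tag) and just above
   it, and coalesce adjacent free blocks without walking the whole heap. */

static unsigned char heap[1024];   /* toy fixed-size "heap" for the sketch */

int main(void)
{
    tag_t *head = (tag_t *)heap;
    tag_t *foot = (tag_t *)(heap + sizeof heap - sizeof(tag_t));

    head->size = foot->size = sizeof heap;   /* one big free block at start */
    head->in_use = foot->in_use = 0;

    printf("free block of %u bytes, tagged at both ends\n",
           (unsigned)head->size);
    return 0;
}
[/code]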
Thanks AbstractionAnon/dutch for the reply, but let me elaborate more on what I want to understand:
Suppose I have 1000 MB of main memory (just as an example) and 4 processes:
process 1 takes 200 MB
process 2 takes 200 MB
process 3 takes 200 MB
process 4 wants 500 MB, but since only 400 MB (1000 - 600) remains, process 4 can't get 500 MB; in that case the available 400 MB also goes unused. So how does the kernel manage memory efficiently so that process 4 can also get memory and execute?
Even if paging or virtual memory concepts come into the picture, does the kernel still use some data structure to make maximum use of the available memory?
Pardon me if this is a naive question, but I want to understand.
Memory management in modern operating systems is a complex subject. Modern operating systems support virtual memory, which allows you to oversubscribe physical memory. Memory in a virtual memory system is backed by physical disk space.
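Here's a rough demonstration of that oversubscription. It assumes a 64-bit Linux box with the default overcommit heuristic (other OSes and settings behave differently): the process reserves far more virtual address space than may be physically free at that moment, and physical pages are only assigned when the memory is actually touched:
[code]
/* Rough demo of oversubscription.  Assumes a 64-bit Linux build with the
   default overcommit heuristic -- other OSes/settings may refuse this. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    if (sizeof(size_t) < 8) {               /* the sketch assumes 64-bit */
        puts("this sketch assumes a 64-bit build");
        return 1;
    }

    size_t want = (size_t)4 * 1024 * 1024 * 1024;   /* ask for 4 GiB,     */
    char *p = malloc(want);                         /* possibly more than */
                                                    /* is physically free */
    if (!p) {
        puts("allocation refused up front");
        return 1;
    }

    /* Only the pages actually written here consume physical memory; the
       rest of the 4 GiB is just reserved virtual address space for now. */
    memset(p, 0, 1024 * 1024);

    puts("reserved 4 GiB of virtual memory, touched only 1 MiB of it");
    free(p);
    return 0;
}
[/code]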
A process has (at least) two types of memory: read-only pages (your executable instructions) and read-write pages (everything else). When memory is oversubscribed, the memory manager uses algorithms to determine which memory is cheapest to swap out.
Program instructions, since they are read-only, are the cheapest: they don't have to be written back to disk, they only need to be reread when needed (a page fault).
Keep in mind that multiple copies of a process can be running, in which case all copies of the process share the same pages for executable instructions.
Data pages (read/write) which have not been modified are also relatively cheap.
Data pages which have been modified are the most expensive since they need to be written to disk, then read back when needed.
Memory managers generally use a Least Recently Used (LRU) algorithm to select pages to swap out. The priority of a process also comes into play in determining which pages to swap in/out.
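For illustration, here's a toy LRU page-replacement simulation. Note this only sketches the policy; real kernels approximate LRU with things like clock/aging algorithms driven by per-page hardware "accessed" bits rather than keeping exact timestamps like this:
[code]
/* Toy LRU page-replacement simulation -- illustration only.  Real kernels
   approximate LRU (clock/aging schemes driven by per-page hardware
   "accessed" bits) instead of keeping exact timestamps like this. */
#include <stdio.h>

#define FRAMES 3

int main(void)
{
    int  page[FRAMES];      /* which virtual page each physical frame holds */
    long last_use[FRAMES];  /* "time" of last reference, used to pick LRU   */
    int  refs[] = {1, 2, 3, 1, 4, 2, 5};  /* a sample page reference string  */
    int  faults = 0;

    for (int i = 0; i < FRAMES; i++) { page[i] = -1; last_use[i] = -1; }

    for (long now = 0; now < (long)(sizeof refs / sizeof refs[0]); now++) {
        int p = refs[now], hit = -1, victim = 0;

        for (int i = 0; i < FRAMES; i++)
            if (page[i] == p) hit = i;

        if (hit >= 0) {                  /* page already resident: just note */
            last_use[hit] = now;         /* the reference time and move on   */
            continue;
        }

        faults++;                        /* page fault: evict the frame whose */
        for (int i = 1; i < FRAMES; i++) /* last reference is oldest (LRU)    */
            if (last_use[i] < last_use[victim]) victim = i;

        page[victim] = p;
        last_use[victim] = now;
    }

    printf("%d page faults for %zu references\n",
           faults, sizeof refs / sizeof refs[0]);
    return 0;
}
[/code]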
The data structures used by the OS are very dependent on the OS and the underlying hardware.