My Operating system Development Experience and understanding -Part 07
Hello developers…!
In this article we are going to look at virtual memory and paging in OS development. Before that, if you haven't read my previous OS development articles, please check them out via the links below:
- My Operating system Development Experience and understanding. Part 01 -Setting Up environments
- My Operating system Development Experience and understanding -Part 02- C instead of assembly
- My Operating system Development Experience and understanding -Part 03-implement Drivers
- My Operating system Development Experience and understanding -Part 04-Segmentation in OS development
- My Operating system Development Experience and understanding -Part 05-Interrupts and inputs
- My Operating system Development Experience and understanding -Part 06-user modes
Let's get into today's topic.
Virtual Memory
Physical memory is abstracted into virtual memory. The goal of virtual memory is to make application development easier and to allow processes to access more memory than is physically available on the machine. Due to security concerns, we don’t want programs meddling with the kernel or other applications’ memory.
Virtual memory can be implemented in the x86 architecture in two ways: segmentation and paging. Paging is the most common and versatile strategy, and it is what we will use here. Some segmentation is still required to allow code to operate under different privilege levels.
A large part of what an operating system does is manage memory. This is dealt with by paging and page frame allocation.
In the previous section of my article, I discussed segmentation.
Virtual Memory Through Segmentation
You could completely avoid paging and rely solely on segmentation for virtual memory. Each user mode process would have its own segment, with the appropriate base address and limit, so that no process can see the memory of another process. One problem with this approach is that the physical memory for a process must be contiguous (or at least it is very convenient if it is). Either we need to know how much memory the program will take ahead of time (unlikely), or we can move memory segments to places where they can grow when the limit is reached (expensive, causes fragmentation, and can result in "out of memory" even when adequate memory is available). Paging solves both of these issues.
Paging
A logical address is translated into a linear address by segmentation. Paging then translates these linear addresses into physical addresses, and also determines access permissions and how the memory should be cached.
Paging is the most frequent method for enabling virtual memory in x86 processors. Virtual memory is achieved using paging, which gives each process the impression that the available memory range is 0x00000000–0xFFFFFFFF, despite the fact that the actual memory space is significantly smaller. It also means that a process will use a virtual (linear) address instead of a physical address when accessing a byte of memory. The code in the user process will be unaffected (except for execution delays). The MMU and the page table convert the linear address to a physical address. The CPU will trigger a page fault interrupt if the virtual address isn’t mapped to a physical address.
Page entries
Virtual memory regions are generally independent of one another because each process has its own set of page mappings. Pages are fixed at 4 KB in the 32-bit x86 architecture. Each page has a descriptor word that tells the processor which frame it is mapped to. Because pages and frames must be aligned on 4 KB boundaries (4 KB being 0x1000 bytes), the least significant 12 bits of the 32-bit word are always zero. The architecture takes advantage of this and uses them to store information about the page, such as whether it is present, whether it is kernel mode or user mode, and so on. The layout of this word is described below:
- P: Set if the page is present in memory.
- R/W: If set, the page is writeable; if unset, it is read-only. This does not apply when code is running in kernel mode (unless the WP flag in CR0 is set).
- U/S: If set, this is a user-mode page; otherwise it is a supervisor (kernel)-mode page. User-mode code cannot read from or write to kernel-mode pages.
- Reserved: Used by the CPU internally and cannot be trampled.
- A: Set by the CPU when the page has been accessed.
- D: Set when the page has been written to (dirty).
- AVAIL: These 3 bits are unused and available for kernel use.
- Page frame address: The high 20 bits of the frame address in physical memory.
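To make the layout concrete, here is a small C sketch of these bits. The flag names and the helper `make_page_entry` are my own illustrative choices, not part of any existing header; they simply pack a 4 KB-aligned frame address together with the flag bits described above.

```c
#include <stdint.h>

/* Bit positions in a 32-bit page table entry (illustrative names). */
#define PAGE_PRESENT   0x001  /* P:   page is present in memory   */
#define PAGE_WRITABLE  0x002  /* R/W: page is writeable           */
#define PAGE_USER      0x004  /* U/S: accessible from user mode   */
#define PAGE_ACCESSED  0x020  /* A:   set by the CPU on access    */
#define PAGE_DIRTY     0x040  /* D:   set by the CPU on write     */

/* Build an entry from a 4 KB-aligned physical frame address and flags.
 * The high 20 bits hold the frame address, the low 12 bits the flags. */
static inline uint32_t make_page_entry(uint32_t frame_addr, uint32_t flags)
{
    return (frame_addr & 0xFFFFF000) | (flags & 0xFFF);
}
```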
Page directories/tables
Possibly you've been tapping on your calculator and have worked out that a table mapping every 4 KB page of a 4 GB address space to a 32-bit descriptor requires 4 MB of memory (4 GB / 4 KB = 1,048,576 pages, each with a 4-byte entry). Perhaps, perhaps not, but it's true.
4MB may seem like a large overhead, and to be fair, it is. If you have 4GB of physical RAM, it’s not much. However, if you are working on a machine that has 16MB of RAM, you’ve just lost a quarter of your available memory! What we want is something progressive, that will take up an amount of space proportionate to the amount of RAM you have.
Well, we don't quite have that, but Intel did come up with something similar: a two-tier system. The CPU gets told about a page directory, which is a 4 KB table, each entry of which points to a page table. The page table is, again, 4 KB large, and each of its entries is a page table entry as described above.
This way, the entire 4 GB address space can be covered, with the advantage that if a page table has no entries, it can be freed and its present flag unset in the page directory.
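To make the two-tier lookup concrete, here is a small sketch of how a 32-bit virtual address is split: the top 10 bits index the page directory, the next 10 bits index the page table, and the low 12 bits are the offset within the page. The function name is just for illustration.

```c
#include <stdint.h>
#include <stdio.h>

/* Split a 32-bit virtual address into its paging components. */
static void split_virtual_address(uint32_t vaddr)
{
    uint32_t dir_index   = (vaddr >> 22) & 0x3FF;  /* top 10 bits  */
    uint32_t table_index = (vaddr >> 12) & 0x3FF;  /* next 10 bits */
    uint32_t offset      = vaddr & 0xFFF;          /* low 12 bits  */

    printf("0x%08x -> directory %u, table entry %u, offset 0x%03x\n",
           vaddr, dir_index, table_index, offset);
}
```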
Enabling paging
Paging is enabled by first writing the address of a page directory to cr3 and then setting bit 31 (the PG "paging-enable" bit) of cr0 to 1. To use 4 MB pages, also set the PSE bit (Page Size Extensions, bit 4) of cr4. A sketch of this is shown below:
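The original gist isn't embedded in this copy of the article, so here is a minimal sketch in C using GCC inline assembly (the gist itself was plain assembly). It assumes a 32-bit (i386) build; the function name is my own.

```c
#include <stdint.h>

/* Load cr3 with the page directory's physical address, then turn on
 * the PG bit (bit 31) in cr0. Assumes a 32-bit (i386) build. */
static void enable_paging(uint32_t page_directory_phys_addr)
{
    uint32_t cr0;

    __asm__ volatile ("mov %0, %%cr3" : : "r" (page_directory_phys_addr));

    __asm__ volatile ("mov %%cr0, %0" : "=r" (cr0));
    cr0 |= 0x80000000;  /* set PG, the paging-enable bit */
    __asm__ volatile ("mov %0, %%cr0" : : "r" (cr0));
}
```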
The Kernel's Virtual Address
A very high virtual memory address, such as 0xC0000000 (3 GB), is ideal for the kernel. The user-mode process is unlikely to be 3 GB in size, which is currently the only way it could conflict with the kernel. A kernel that uses virtual addresses in the 3 GB and higher range is called a higher-half kernel. The address 0xC0000000 is only an example; the kernel can be placed at any address larger than 0 and get the same results. The right address is a trade-off between the amount of virtual memory left for the kernel and the amount of virtual memory left for the process.
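For reference, higher-half setups commonly use a constant offset to convert between a kernel virtual address and its physical counterpart. The sketch below assumes the 0xC0000000 example base from above; the names are only illustrative.

```c
#include <stdint.h>

/* Example base if the kernel is placed at 3 GB (0xC0000000). */
#define KERNEL_VIRTUAL_BASE 0xC0000000u

/* Convert a kernel virtual address back to the physical address it maps,
 * assuming the kernel is mapped at a fixed offset from physical memory. */
static inline uint32_t kernel_virtual_to_physical(uint32_t kernel_vaddr)
{
    return kernel_vaddr - KERNEL_VIRTUAL_BASE;
}
```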
Virtual Memory Through Paging
Paging provides two advantages for virtual memory. First, it allows fine-grained control over memory access: pages can be marked read-only, read-write, accessible only from PL0, and so on. Second, it creates the illusion of a single, contiguous memory. User-mode programs and the kernel can access memory as if it were contiguous, and the contiguous memory can be extended without moving data around. We can also give user-mode programs access to all memory below 3 GB, but we don't have to assign page frames to the pages until they are actually used. This allows processes to have code at 0x00000000 and a stack just below 0xC0000000 while still only requiring two actual pages.
We need to declare functions and other things in the “paging.h” header file before we can do paging.
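The original gist isn't included in this copy of the article, so the header below is only a guessed sketch of the kind of declarations meant here; the names (paging_init, page_fault_handler, and so on) are my own assumptions, not the actual gist contents.

```c
/* paging.h - a guessed sketch, not the original gist */
#ifndef PAGING_H
#define PAGING_H

#include <stdint.h>

#define PAGE_SIZE 0x1000  /* 4 KB pages */

/* Set up a page directory, load it into cr3 and enable paging. */
void paging_init(void);

/* Called from the interrupt handler when interrupt 14 (page fault) fires. */
void page_fault_handler(uint32_t error_code);

#endif /* PAGING_H */
```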
Then we can define those functions in the “paging.c” file like this.
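Again, since the gist is missing here, the following is only a rough sketch of what such a paging.c could look like: it identity-maps the first 4 MB with one page table (assuming the kernel lives inside that range), loads the directory, and enables paging, reusing the enable_paging sketch from the previous section. All names and the exact layout are assumptions; the real file in the gist may differ.

```c
/* paging.c - a rough sketch, not the original gist */
#include <stdint.h>
#include "paging.h"

#define PAGE_PRESENT  0x1
#define PAGE_WRITABLE 0x2

/* One page directory and one page table, both 4 KB aligned. */
static uint32_t page_directory[1024]   __attribute__((aligned(PAGE_SIZE)));
static uint32_t first_page_table[1024] __attribute__((aligned(PAGE_SIZE)));

static void enable_paging(uint32_t page_directory_phys_addr)
{
    uint32_t cr0;
    __asm__ volatile ("mov %0, %%cr3" : : "r" (page_directory_phys_addr));
    __asm__ volatile ("mov %%cr0, %0" : "=r" (cr0));
    cr0 |= 0x80000000;  /* set the PG bit */
    __asm__ volatile ("mov %0, %%cr0" : : "r" (cr0));
}

void paging_init(void)
{
    /* Identity-map the first 4 MB: virtual address == physical address.
     * This assumes the kernel itself is loaded within the first 4 MB. */
    for (uint32_t i = 0; i < 1024; i++)
        first_page_table[i] = (i * PAGE_SIZE) | PAGE_PRESENT | PAGE_WRITABLE;

    /* Only the first directory entry is used; the rest stay "not present". */
    page_directory[0] = (uint32_t) first_page_table | PAGE_PRESENT | PAGE_WRITABLE;
    for (uint32_t i = 1; i < 1024; i++)
        page_directory[i] = 0;

    enable_paging((uint32_t) page_directory);
}

void page_fault_handler(uint32_t error_code)
{
    uint32_t faulting_address;

    /* The CPU stores the faulting virtual address in cr2. */
    __asm__ volatile ("mov %%cr2, %0" : "=r" (faulting_address));

    /* A real handler would decode error_code and either map the page or
     * kill the offending process; reporting is left to the framebuffer or
     * serial driver from the earlier parts of this series. */
    (void) error_code;
    (void) faulting_address;
}
```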
As I mentioned, the full code is in my gists, and you can find the programming setup and more details in the links below:
- http://wiki.osdev.org/Setting_Up_Paging
- http://wiki.osdev.org/Paging
- http://wiki.osdev.org/Page_Tables
- http://www.jamesmolloy.co.uk/tutorial_html/6.-Paging.html
Awesome! You now have code that enables paging and handles page faults! Let's just check that it actually works, shall we?
Kmain.c
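The kmain.c gist isn't embedded here either, so the sketch below shows roughly what such a test could look like. It uses the paging_init name from the sketch above and an assumed text-output routine standing in for the driver from Part 03; all names are placeholders.

```c
/* kmain.c - a guessed sketch, not the original gist */
#include <stdint.h>
#include "paging.h"

/* Whatever text-output routine Part 03's driver provides; the name here
 * is only a placeholder for it. */
void fb_write_string(const char *str);

int kmain(void)
{
    paging_init();

    /* If paging is set up correctly, this prints without faulting. */
    fb_write_string("Hello, paging world!\n");

    /* 0xA0000000 is not mapped, so this read forces a page fault. */
    volatile uint32_t *unmapped = (volatile uint32_t *) 0xA0000000;
    uint32_t value = *unmapped;
    (void) value;

    return 0;
}
```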
This will, obviously, initialise paging, print a string to make sure it's set up right and not faulting when it shouldn't, and then force a page fault by reading location 0xA0000000.
Congrats! You're all done! You can now move on to the next tutorial.
THANKS FOR READING
See you all in the next article.
Abdullah M.R.M