Memory Mapping & Protection.
|INTRO & CONTENT:|
Memory protection, memory mapping, and disk-paged memory are taken by many as magic bullets, things that will magically correct many problems. Sorry to say, these things are unable to compensate for bad code, and one of them is a leftover from what we now view as ancient times, when physical memory was unreasonably small.
Virtual Memory refers to two things: one is the remapping of memory pages (a good thing), the other is a now-accepted meaning that began as a misunderstanding, the use of paging memory out to mass storage to increase available memory (now a bad thing). Due to this duality of meaning I will attempt to avoid the term 'Virtual Memory' in this document, as I learned it before the term was widely misused, when it meant just the remapping of page addresses.
|Memory Protection, How it Helps and what the limit is:|
Memory protection can be of aid in debugging programs, and in helping keep certain bugs from causing some kinds of problems. Yes, it can be used to try to prevent reading other areas of memory, etc. While it can be used for such things, it should not be used as a measure to keep data private; that is not its strength, nor is it something an OS or memory manager should be concerned with (see my page about net security for more information).
As a debugging tool memory protection can be extremely helpful, detecting unwanted access outside the application's address space. This can aid in finding some bugs due to pointer errors.
Memory protection should never be used as a security measure; it is not one. With the number of bugs found in modern systems every year, largely as a side effect of how complex they are, relying on there not being a bug in the memory page management portion of the code is an error of epic proportions.
|Memory Mapping, useful for maintaining the environment:|
Part of the illusion maintained by most operating systems is that memory is contiguous for a given application. To accomplish this illusion, the page table is remapped as more memory is requested by the application. This may be needed even at application startup, depending on the amount of contiguous memory available. Some systems take this a step further, making it seem that every application has the same base address, thus remapping during task switching.
It is common for systems to use relatively small page sizes of 4096 bytes. This was a good move in ancient times, when paging to mass storage was also used and memory sizes were small enough that the page table never got to be too big. Nowadays we have much larger main memories in our computers, so a larger page size is more appropriate, to reduce the work of swapping address spaces when switching tasks.
As memory is now quite large, and the number of running tasks can be significant, it is now much better to have page sizes of 64KB, or on some systems even 1MB. This reduces the size of the page table, as well as the amount of work needed to remap the pages during a task switch.
|Paging Memory to Mass Storage, the antique that should be removed:|
There was a time when main memories on personal computers were less than 32MB, and mass storage was between 120MB and 520MB. In that time it was commonplace for systems running many tasks to run short on RAM, so a solution was sought. The solution, borrowed from older mainframe systems with the same issue, was to take advantage of paged memory and memory protection to swap pages of RAM out to mass storage (back then usually an actual Hard Disk Drive), using not-present flags in the page table to cause a page fault on access, so a page can be swapped back into RAM when needed.
This system has unfortunately survived into the present, despite our main memories now being greater than the combined size of the main memory and mass storage of those older systems. Yes, we have more memory-hungry applications, though not to that point (excepting poor code with the major bug of memory leaks, or the bigger bug of bloat). Today, swapping pages of RAM to mass storage only serves to put more fatigue on the mass storage device, and to slow down some memory accesses.
The common algorithms for page swapping, largely based on LRU (Least Recently Used), are overeager to swap RAM to mass storage, and thus will usually page out parts of RAM while there is still plenty of usable RAM that is not allocated to anything (or is just used for file buffers, which can be cleared to provide more memory more efficiently than page swapping). This is a huge issue in most common modern systems.
We do not need page swapping on modern systems; we have had enough time to learn how to be frugal with memory usage, and to use sane allocations and algorithms. Even with huge databases, there is never a reason to have that much in RAM at a given time. And applications that leak memory have a very major bug; realistically, a memory leak is one of the worst kinds of bugs.
|Some coding thoughts:|
Use some common sense, module-test your code, and pay special attention to its memory allocations and access patterns. Make sure that you are unable to find any memory leaks, out-of-range pointer dereferences, etc., before the software is labeled as Alpha Test quality. If a potential memory leak or bad pointer reference is found later, immediately try to verify the issue, then find the offending code and get it fixed ASAP (as long as it is a valid bug report [I have seen mistaken project bug reports before]).