On my XP system, there's a ".pf" file corresponding to a ".exe" - is this related to MS's push for apps to use prefetching? I couldn't find any details about this via google.
The work I mentioned about trying to use past history to speed up system launch was something that Intel and Microsoft were working on some time ago. I can't say with utter confidence that it has made its way into Windows.

Say I run a CPU-intensive program. There's an option that says "run faster when computer is idle". What does this do (including kicking out other programs' pages faster)?
There are lots of possibilities. One that you've noted is that it may give fewer pages to idle interactive programs. Another is that it may increase the time slices programs get to run when they get the CPU. Using a small time slice can make the system seem smoother and more responsive, but it may cause extra overhead that eats into the amount of processor time available to applications.

Are bad pages dirty?
Assuming that this is a sincere question and not someone trying to get me to say something funny, the short answer is "no". If we define dirty as meaning modified, and hence possibly requiring a disk write at some point in the future, bad pages can't be dirty. Instead, bad pages can be thought of as being permanently unused, and unusable.

What do you mean by "use physical memory as a cache for disk"?
We view each higher layer in the memory hierarchy as being smaller yet faster than the layer below it. Assume that a system had no physical memory. It would be possible (in theory) to run everything off a disk, but it would be painfully slow. Demand-paged virtual memory systems can basically be viewed as treating main memory as a cache for the disk. So everything could be running from disk, but the most recently used pages are kept in main memory to speed things up.

In the working set approach, where do you record the delta time value, since it takes space? How is the delta value decided if it's not a fixed value?
You can keep delta on a per-process basis, or maybe even system-wide. If it's kept per process, it would go into the process control block. If it's system-wide, it just gets stored with all of the kernel's other global variables. The simplest way of "computing" delta is to use some feedback: if the page fault rate of the system is too high, decrease delta; if the page fault rate is tolerable, consider increasing delta. This way, the delta value changes over time based on the workload seen by the system.
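To make that feedback loop a little more concrete, here is a minimal C sketch. Every name, threshold, and step size in it is invented for illustration; the only part taken from the answer above is the rule "fault rate too high, decrease delta; fault rate tolerable, consider increasing it."

/*
 * Minimal sketch of adjusting the working-set window (delta) by feedback.
 * All constants and names here are made up; a real kernel would tune
 * the sampling period and step sizes against its own workload.
 */

#define DELTA_MIN     1000UL    /* smallest allowed window, in references */
#define DELTA_MAX   100000UL    /* largest allowed window */
#define FAULTS_HIGH     50UL    /* faults/sec we consider "too high" */
#define FAULTS_LOW       5UL    /* faults/sec we consider comfortable */

static unsigned long delta = 10000UL;   /* system-wide window size */

/* Called periodically (say, once a second) with the observed fault rate. */
void adjust_delta(unsigned long faults_per_sec)
{
    if (faults_per_sec > FAULTS_HIGH && delta > DELTA_MIN) {
        /* Too many faults: shrink the window so working sets get smaller
         * and fewer frames are tied up per process. */
        delta /= 2;
        if (delta < DELTA_MIN)
            delta = DELTA_MIN;
    } else if (faults_per_sec < FAULTS_LOW && delta < DELTA_MAX) {
        /* Fault rate is comfortable: grow the window so processes keep
         * more of their recently used pages resident. */
        delta += delta / 4;
        if (delta > DELTA_MAX)
            delta = DELTA_MAX;
    }
    /* Otherwise leave delta alone. */
}

How often to sample the fault rate and how aggressively to move delta are exactly the tuning decisions that depend on the workload the system actually sees.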
Can the process tell the OS that there are certain instruction pages it wants, instead of having the OS try to guess?
On some OSes, it's possible to explicitly control what gets loaded and when. However, in general, such control tends to interfere with the OS and complicate its design, so it's not overly common these days on Unix-like systems. It may be the case that these kinds of things exist on desktop-oriented systems, where fast program launch time is something that's really measured and reported.

In Win98, one possible blue screen of death is the page fault. What does this mean? I assume that the blue-screen "segment fault" means the program accessed a memory segment that it is not allowed to touch.
I don't know with certainty about that specific system. However, there are some general bad things that can happen in OSes. One would be if the kernel tried to dereference a NULL pointer. It's a bug in the kernel, but what do you do? In Unix, these kinds of faults are known as "panics" and basically shut down and restart the system. Another kind of problem is what happens if some part of the kernel is allowed to get paged out to disk, and then is needed in the middle of something like an interrupt, where you can't wait for a disk I/O.

What is the source for the assumption that >.1 second is slow and <<.1 second is more efficient?
I seem to recall reading it a long time ago in some of the developer materials provided by Apple. They did a lot of work in developing guidelines for human interface designers. However, you'd probably be better off asking someone like Prof. Perry Cook, who may have more familiarity with the field.

Can you go over the discussion of marginal utility and working sets again?
Sure - assume you have the graph of number of pages versus number of page faults for each program in the system. Using this graph, you could determine how many page faults each additional page eliminates (by just taking the difference in the number of faults for every page added). Now you could be analytical and ask the following: for each page I wish to allocate, which process would gain the most by having that page? That gain can be thought of as the marginal utility of the page. This isn't something that actually happens, but rather just a way of trying to get at the intuition behind what you'd like the system to do. Also note that I'm not claiming it's perfect - if the graph for a process has certain weird shapes, you'd end up with non-optimal results if you followed a simple allocation strategy. (There's a rough sketch of the greedy version at the end of this note.)

Can you go over "simulating modified bit with access bit" again?
Assume you can prevent writes to a page, and that a fault gets generated if a process tries writing to such a page. If you don't have a modified bit but want to get the same information, you can write-protect pages even when they are logically read/write. Then, when the process tries to write, the OS will be able to detect it. It can then set a bit somewhere saying that this page has been modified, enable write access to the page again, and have the process resume. To perform the equivalent of clearing the modified bit, you'd clear that bit and set the page back to read-only. (A sketch of the fault handler appears at the end of this note.)

When will we receive a midterm grade?
My lingering headache seems to have subsided, and I'm making progress on grading. I'm hoping to have it done this week. Sorry for the delay.
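Going back to the marginal-utility question: here is a small, self-contained C sketch of the greedy idea, assuming (unrealistically, as noted above) that we already know each process's faults-versus-frames curve. The processes, curves, and numbers are made up purely for illustration.

/*
 * Purely illustrative: greedily hand out frames one at a time to whichever
 * process would eliminate the most page faults with one more frame.
 * faults[p][n] = number of faults process p would take if given n frames;
 * these curves are assumed to be known, which real systems generally can't do.
 */

#include <stdio.h>

#define NPROCS     3
#define MAXFRAMES  8

/* Made-up fault curves: faults[p][n] for n = 0..MAXFRAMES. */
static const int faults[NPROCS][MAXFRAMES + 1] = {
    { 100,  60, 40, 30, 25, 22, 21, 20, 20 },
    { 200, 120, 70, 50, 45, 44, 43, 43, 43 },
    {  50,  48, 46, 30, 10,  8,  7,  7,  7 },
};

int main(void)
{
    int alloc[NPROCS] = { 0 };      /* frames given to each process */
    int total_frames = 10;          /* frames available to hand out */

    while (total_frames-- > 0) {
        int best = -1, best_gain = 0;
        for (int p = 0; p < NPROCS; p++) {
            if (alloc[p] >= MAXFRAMES)
                continue;
            /* Marginal utility of one more frame for process p:
             * how many faults it would eliminate. */
            int gain = faults[p][alloc[p]] - faults[p][alloc[p] + 1];
            if (gain > best_gain) {
                best_gain = gain;
                best = p;
            }
        }
        if (best < 0)               /* no process benefits any further */
            break;
        alloc[best]++;
    }

    for (int p = 0; p < NPROCS; p++)
        printf("process %d gets %d frames\n", p, alloc[p]);
    return 0;
}

Note that with these made-up curves the greedy rule never discovers that process 2's faults only drop sharply after it has been given several frames - exactly the kind of "weird shape" that makes the simple allocation strategy non-optimal.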
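And for the modified-bit question, here is a rough sketch of what the write-protection trick might look like. The structure and helpers (struct page_entry, set_page_readonly, set_page_writable) are invented names - real page-table and MMU interfaces vary - so treat this as the shape of the idea rather than any particular system's code.

/*
 * Rough sketch of simulating a modified (dirty) bit using write protection.
 * All names here are hypothetical; real MMU interfaces differ.
 */

struct page_entry {
    int present;            /* page is in memory */
    int soft_dirty;         /* our software "modified" bit */
    /* ... frame number, protection bits, etc. ... */
};

void set_page_readonly(struct page_entry *pte);   /* hypothetical helper */
void set_page_writable(struct page_entry *pte);   /* hypothetical helper */

/* Equivalent of "clearing the modified bit": forget the modification and
 * write-protect the page so the next store to it traps again. */
void clear_soft_dirty(struct page_entry *pte)
{
    pte->soft_dirty = 0;
    set_page_readonly(pte);
}

/* Called when a process takes a protection fault on a write to a page that
 * is logically read/write but currently write-protected by us. */
void write_protect_fault(struct page_entry *pte)
{
    pte->soft_dirty = 1;        /* remember the page has been modified */
    set_page_writable(pte);     /* allow the write to proceed */
    /* return and let the faulting instruction restart */
}

The cost is one extra protection fault the first time each page is written after its soft dirty bit is cleared, which is the price you pay for not having a hardware modified bit.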