Figure 3-17. The aging algorithm simulates LRU in software. Shown are six pages for five clock ticks, (a) to (e). At each tick, the R bits for pages 0-5 and the resulting 8-bit counters are:

(a) tick 0:  R = 1 0 1 0 1 1   counters: 10000000 00000000 10000000 00000000 10000000 10000000
(b) tick 1:  R = 1 1 0 0 1 0   counters: 11000000 10000000 01000000 00000000 11000000 01000000
(c) tick 2:  R = 1 1 0 1 0 1   counters: 11100000 11000000 00100000 10000000 01100000 10100000
(d) tick 3:  R = 1 0 0 0 1 0   counters: 11110000 01100000 00010000 01000000 10110000 01010000
(e) tick 4:  R = 0 1 1 0 0 0   counters: 01111000 10110000 10001000 00100000 01011000 00101000
The first difference between LRU and aging is that aging records only one bit per clock tick, so it cannot distinguish references that occur early in a clock interval from those occurring later. All we can do is remove page 3, because page 5 was also referenced two ticks earlier and page 3 was not.

The second difference between LRU and aging is that in aging the counters have a finite number of bits (8 bits in this example), which limits its past horizon. Suppose that two pages each have a counter value of 0. All we can do is pick one of them at random. In reality, it may well be that one of the pages was last referenced nine ticks ago and the other was last referenced 1000 ticks ago. We have no way of seeing that. In practice, however, 8 bits is generally enough if a clock tick is around 20 msec. If a page has not been referenced in 160 msec, it probably is not that important.
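The per-tick update is simple enough to sketch in a few lines of C. The following is an illustrative sketch, not code from the book; the fixed page count, the counter array, and the way the R bits are handed in are assumptions made for the example:

#include <stdint.h>
#include <stddef.h>

#define NPAGES 6                      /* six pages, as in Fig. 3-17 */

static uint8_t counter[NPAGES];       /* one 8-bit aging counter per page */

/* Called once per clock tick with the R bits collected from the page table
 * entries (the caller clears the R bits afterward). */
void age_counters(const int r_bit[NPAGES])
{
    for (size_t i = 0; i < NPAGES; i++) {
        counter[i] >>= 1;             /* shift the counter right one bit */
        if (r_bit[i])
            counter[i] |= 0x80;       /* add the R bit at the leftmost position */
    }
}

/* When a page fault occurs, the page whose counter is lowest is evicted. */
size_t page_to_evict(void)
{
    size_t victim = 0;
    for (size_t i = 1; i < NPAGES; i++)
        if (counter[i] < counter[victim])
            victim = i;
    return victim;
}

Feeding this routine the R bits shown in Fig. 3-17 reproduces the counter values of panels (a) through (e).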
3.4.8 The Working Set Page Replacement Algorithm
In the purest form of paging, processes are started up with none of their pages in memory. As soon as the CPU tries to fetch the first instruction, it gets a page fault, causing the operating system to bring in the page containing the first instruction. Other page faults for global variables and the stack usually follow quickly. After a while, the process has most of the pages it needs and settles down to run with relatively few page faults. This strategy is called demand paging because pages are loaded only on demand, not in advance.
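In outline, the fault handler that makes demand paging work does little more than fetch the missing page and retry. The sketch below is illustrative only; find_free_frame, read_page_from_disk, and map_page are hypothetical helpers standing in for the real frame allocator, disk driver, and page-table code:

struct process;                                 /* opaque process descriptor */

int  find_free_frame(void);                     /* hypothetical helpers */
void read_page_from_disk(struct process *p, unsigned long vpage, int frame);
void map_page(struct process *p, unsigned long vpage, int frame);

/* Demand paging: a page is read in only when a reference to it faults;
 * nothing is loaded in advance. */
void handle_page_fault(struct process *p, unsigned long vpage)
{
    int frame = find_free_frame();          /* may itself evict another page */
    read_page_from_disk(p, vpage, frame);   /* typically ~10 msec of disk time */
    map_page(p, vpage, frame);              /* mark the page present in the page table */
    /* the faulting instruction is restarted and now succeeds */
}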
Of course, it is easy enough to write a test program that systematically reads all the pages in a large address space, causing so many page faults that there is not enough memory to hold them all. Fortunately, most processes do not work this way. They exhibit a locality of reference, meaning that during any phase of execution, the process references only a relatively small fraction of its pages. Each pass of a multipass compiler, for example, references only a fraction of all the pages, and a different fraction at that.
The set of pages that a process is currently using is its working set (Denning, 1968a; Denning, 1980). If the entire working set is in memory, the process will run without causing many faults until it moves into another execution phase (e.g., the next pass of the compiler). If the available memory is too small to hold the entire working set, the process will cause many page faults and run slowly, since executing an instruction takes a few nanoseconds and reading in a page from the disk typically takes 10 msec. At a rate of one or two instructions per 10 msec, it will take ages to finish. A program causing page faults every few instructions is said to be thrashing (Denning, 1968b).
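To put a rough number on the slowdown (taking the text's figures of about 1 nsec per instruction and 10 msec per disk read as the assumptions, and an assumed fault rate of one fault per 10 instructions): those 10 instructions cost roughly 10 nsec of computation but 10 msec of waiting for the disk, so the program runs on the order of a million times slower than it would with its working set in memory.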
In a multiprogramming system, processes are often moved to disk (i.e., all their pages are removed from memory) to let others have a turn at the CPU. The question arises of what to do when a process is brought back in again. Technically, nothing need be done. The process will just cause page faults until its working set has been loaded. The problem is that having numerous page faults every time a process is loaded is slow, and it also wastes considerable CPU time, since it takes the operating system a few milliseconds of CPU time to process a page fault. Therefore, many paging systems try to keep track of each process' working set and make sure that it is in memory before letting the process run. This approach is called the working set model (Denning, 1970). It is designed to greatly reduce the page fault rate. Loading the pages before letting processes run is also called prepaging. Note that the working set changes over time.
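Prepaging can be pictured as a loop over the recorded working set, run before the process is dispatched again. This is only an illustrative sketch; MAX_WS, page_is_present, and load_page are hypothetical names, and how the working set was recorded is left open:

struct process;                                   /* opaque process descriptor */

#define MAX_WS 64                     /* illustrative bound on working-set size */

struct working_set {
    unsigned long pages[MAX_WS];      /* pages recorded in the working set */
    int npages;
};

int  page_is_present(struct process *p, unsigned long vpage);   /* hypothetical */
void load_page(struct process *p, unsigned long vpage);         /* hypothetical */

/* Bring the recorded working set back into memory before the process runs,
 * so that it does not have to fault the pages in one at a time. */
void prepage_working_set(struct process *p, const struct working_set *ws)
{
    for (int i = 0; i < ws->npages; i++)
        if (!page_is_present(p, ws->pages[i]))
            load_page(p, ws->pages[i]);
}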
It has long been known that programs rarely reference their address space uniformly, but that the references tend to cluster on a small number of pages. A memory reference may fetch an instruction or data, or it may store data. At any instant of time, t, there exists a set consisting of all the pages used by the k most recent memory references. This set, w(k, t), is the working set. Because the k + 1 most recent references must have used all the pages used by the k most recent references, and possibly one more, w(k, t) is a monotonically nondecreasing function of k. The limit of w(k, t) as k becomes large is finite because a program cannot reference more pages than its address space contains, and few programs will use every single page.
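As a concrete illustration of the definition (a sketch not taken from the book; the reference string in main is invented for the example), the size of w(k, t) can be computed directly from the string of recent page references, and printing it for increasing k shows the nondecreasing behavior just described:

#include <stdio.h>
#include <stddef.h>

/* Number of distinct pages among the k most recent references, i.e. the size
 * of the working set w(k, t).  refs[0..n-1] holds the reference string, with
 * refs[n-1] being the most recent reference. */
size_t working_set_size(const unsigned long *refs, size_t n, size_t k)
{
    size_t distinct = 0;
    size_t start = (k < n) ? n - k : 0;           /* first of the last k refs */

    for (size_t i = start; i < n; i++) {
        int seen = 0;
        for (size_t j = start; j < i; j++)        /* page already counted? */
            if (refs[j] == refs[i]) { seen = 1; break; }
        if (!seen)
            distinct++;
    }
    return distinct;
}

int main(void)
{
    /* A small loop over pages 0 and 1 touching data on pages 2-5, plus one
     * page (7) referenced only at the very beginning. */
    unsigned long refs[] = { 7, 0, 2, 1, 3, 0, 4, 1, 5, 0, 2, 1, 3, 0, 4, 1 };
    size_t n = sizeof(refs) / sizeof(refs[0]);

    for (size_t k = 1; k <= n; k++)               /* |w(k,t)| never decreases */
        printf("k = %2zu  |w(k,t)| = %zu\n", k, working_set_size(refs, n, k));
    return 0;
}

With this reference string the working set size climbs quickly to 6 and then stays flat until the lone early reference (page 7) finally falls inside the window, mirroring the shape of the curve in Fig. 3-18.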
Figure 3-18 depicts the size of the working set as a function of k. The fact that most programs randomly access a small number of pages, but that this set changes slowly in time, explains the initial rapid rise of the curve and then the much slower rise for large k. For example, a program that is executing a loop occupying two pages and using data on four pages may reference all six pages every 1000 instructions, but the most recent reference to some other page may be a million instructions earlier, during the initialization phase. Due to this asymptotic behavior, the contents of the working set are not sensitive to the value of k chosen. To

