Modern Operating Systems by Herbert Bos and Andrew S. Tanenbaum, 4th Ed.
SEC. 8.1    MULTIPROCESSORS    541
[Figure 8-12 shows three panels: (a) 16 CPUs (0-15) all busy, with a single shared ready list holding threads A-N ordered by priority; (b) CPU 4 goes idle and takes the highest-priority thread, A, from the list; (c) CPU 12 goes idle and takes thread B.]

Figure 8-12. Using a single data structure for scheduling a multiprocessor.
thread is scheduled again and releases the lock. On a uniprocessor, spin locks are rarely used, so if a process is suspended while it holds a mutex, and another thread starts and tries to acquire the mutex, it will be immediately blocked, so little time is wasted.
To get around this anomaly, some systems use smart scheduling, in which a thread acquiring a spin lock sets a processwide flag to show that it currently has a spin lock (Zahorjan et al., 1991). When it releases the lock, it clears the flag. The scheduler then does not stop a thread holding a spin lock, but instead gives it a little more time to complete its critical region and release the lock.
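The idea can be sketched in a few lines. The following is a minimal illustration, not the actual mechanism of any real kernel: the class name, the flag, and the scheduler query are all hypothetical, chosen only to show how a lock holder can advertise itself to the scheduler.

```python
import threading

class SmartLock:
    """Illustrative sketch of smart scheduling (Zahorjan et al., 1991):
    a thread acquiring a spin lock sets a processwide flag; the scheduler
    consults it and declines to preempt the lock holder."""

    def __init__(self):
        self._lock = threading.Lock()
        self.holder_flag = False   # processwide "I hold a spin lock" flag

    def acquire(self):
        self._lock.acquire()
        self.holder_flag = True    # tell the scheduler not to stop us now

    def release(self):
        self.holder_flag = False   # clear the flag before letting go
        self._lock.release()

def scheduler_may_preempt(lock):
    # A smart scheduler gives a flagged thread a little extra time to
    # finish its critical region instead of suspending it immediately.
    return not lock.holder_flag
```

In a real system the flag would live in a per-process control block visible to the kernel scheduler, not in a user-level object as here.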
Another issue that plays a role in scheduling is the fact that while all CPUs are equal, some CPUs are more equal. In particular, when thread A has run for a long time on CPU k, CPU k's cache will be full of A's blocks. If A gets to run again soon, it may perform better if it is run on CPU k, because k's cache may still contain some of A's blocks. Having cache blocks preloaded will increase the cache hit rate and thus the thread's speed. In addition, the TLB may also contain the right pages, reducing TLB faults.
Some multiprocessors take this effect into account and use what is called affinity scheduling (Vaswani and Zahorjan, 1991). The basic idea here is to make a serious effort to have a thread run on the same CPU it ran on last time. One way to create this affinity is to use a two-level scheduling algorithm. When a thread is created, it is assigned to a CPU, for example based on which one has the smallest load at that moment. This assignment of threads to CPUs is the top level of the algorithm. As a result of this policy, each CPU acquires its own collection of threads.
The actual scheduling of the threads is the bottom level of the algorithm. It is done by each CPU separately, using priorities or some other means. By trying to keep a thread on the same CPU for its entire lifetime, cache affinity is maximized. However, if a CPU has no threads to run, it takes one from another CPU rather than go idle.
Two-level scheduling has three benefits. First, it distributes the load roughly evenly over the available CPUs. Second, advantage is taken of cache affinity where possible. Third, by giving each CPU its own ready list, contention for the ready lists is minimized because attempts to use another CPU's ready list are relatively infrequent.
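The two levels described above can be sketched as follows. This is a toy model, not any production scheduler: the class and method names are invented, and the bottom level is reduced to simple FIFO queues with a naive steal-from-anyone fallback.

```python
from collections import deque

class TwoLevelScheduler:
    """Sketch of two-level affinity scheduling: the top level assigns a
    new thread to the least-loaded CPU; the bottom level has each CPU
    run from its own ready queue, stealing from another CPU only when
    its own queue is empty (rather than going idle)."""

    def __init__(self, num_cpus):
        self.queues = [deque() for _ in range(num_cpus)]

    def assign(self, thread):
        # Top level: place the new thread on the CPU with the smallest load.
        cpu = min(range(len(self.queues)), key=lambda c: len(self.queues[c]))
        self.queues[cpu].append(thread)
        return cpu

    def pick_next(self, cpu):
        # Bottom level: prefer the local queue, preserving cache affinity.
        if self.queues[cpu]:
            return self.queues[cpu].popleft()
        # Local queue empty: take a thread from another CPU instead of idling.
        for other, q in enumerate(self.queues):
            if other != cpu and q:
                return q.popleft()
        return None
```

Because each CPU normally touches only its own queue, the third benefit above falls out directly: cross-queue accesses happen only on the rare stealing path.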
Space Sharing
The other general approach to multiprocessor scheduling can be used when threads are related to one another in some way. Earlier we mentioned the example of parallel make as one case. It also often occurs that a single process has multiple threads that work together. For example, if the threads of a process communicate a lot, it is useful to have them running at the same time. Scheduling multiple threads at the same time across multiple CPUs is called space sharing.
The simplest space-sharing algorithm works like this. Assume that an entire group of related threads is created at once. At the time it is created, the scheduler checks to see if there are as many free CPUs as there are threads. If there are, each thread is given its own dedicated (i.e., nonmultiprogrammed) CPU and they all start. If there are not enough CPUs, none of the threads are started until enough CPUs are available. Each thread holds onto its CPU until it terminates, at which time the CPU is put back into the pool of available CPUs. If a thread blocks on I/O, it continues to hold the CPU, which is simply idle until the thread wakes up. When the next batch of threads appears, the same algorithm is applied.
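The all-or-nothing allocation just described can be captured in a short sketch. The names here are illustrative only; the point is the policy: a group starts only when every thread can get a dedicated CPU, and a CPU returns to the pool only at thread termination, not when the thread blocks.

```python
class SpaceSharer:
    """Sketch of the simplest space-sharing policy: a group of related
    threads starts only if there are as many free CPUs as threads; each
    thread keeps its dedicated CPU until it terminates."""

    def __init__(self, num_cpus):
        self.free_cpus = set(range(num_cpus))
        self.running = {}            # thread -> CPU it holds

    def try_start(self, threads):
        if len(threads) > len(self.free_cpus):
            return False             # not enough CPUs: start none of them
        for t in threads:
            self.running[t] = self.free_cpus.pop()
        return True

    def terminate(self, thread):
        # Only termination frees the CPU; a thread blocked on I/O still
        # holds its CPU, which simply sits idle until the thread wakes up.
        self.free_cpus.add(self.running.pop(thread))
```

A waiting batch would simply retry try_start each time terminate returns a CPU to the pool.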
At any instant of time, the set of CPUs is statically partitioned into some number of partitions, each one running the threads of one process. In Fig. 8-13, we have partitions of sizes 4, 6, 8, and 12 CPUs, with 2 CPUs unassigned, for example. As time goes on, the number and size of the partitions will change as new threads are created and old ones finish and terminate.
[Figure 8-13 shows 32 CPUs (0-31) in a grid, grouped into a 4-CPU partition, a 6-CPU partition, an 8-CPU partition, and a 12-CPU partition, with two CPUs unassigned.]

Figure 8-13. A set of 32 CPUs split into four partitions, with two CPUs available.