SEC. 4.4  FILE-SYSTEM MANAGEMENT AND OPTIMIZATION
Caching
The most common technique used to reduce disk accesses is the block cache or buffer cache. (Cache is pronounced "cash" and is derived from the French cacher, meaning to hide.) In this context, a cache is a collection of blocks that logically belong on the disk but are being kept in memory for performance reasons. Various algorithms can be used to manage the cache, but a common one is to check all read requests to see if the needed block is in the cache. If it is, the read request can be satisfied without a disk access. If the block is not in the cache, it is first read into the cache and then copied to wherever it is needed. Subsequent requests for the same block can be satisfied from the cache.
Operation of the cache is illustrated in Fig. 4-28. Since there are many (often
thousands of) blocks in the cache, some way is needed to determine quickly if a
given block is present. The usual way is to hash the device and disk address and
look up the result in a hash table. All the blocks with the same hash value are
chained together on a linked list so that the collision chain can be followed.
Figure 4-28. The buffer cache data structures. [Figure: a hash table whose buckets chain cache blocks together, plus a bidirectional list through all the blocks running from Front (LRU) to Rear (MRU).]
When a block has to be loaded into a full cache, some block has to be removed
(and rewritten to the disk if it has been modified since being brought in).
This
situation is very much like paging, and all the usual page-replacement algorithms
described in Chap. 3, such as FIFO, second chance, and LRU, are applicable. One
pleasant difference between paging and caching is that cache references are relatively infrequent, so that it is feasible to keep all the blocks in exact LRU order
with linked lists.
In Fig. 4-28, we see that in addition to the collision chains starting at the hash
table, there is also a bidirectional list running through all the blocks in the order of
usage, with the least recently used block on the front of this list and the most
recently used block at the end. When a block is referenced, it can be removed from
its position on the bidirectional list and put at the end.
In this way, exact LRU
order can be maintained.
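To make Fig. 4-28 concrete, here is a minimal C sketch of these two structures: a hash table whose buckets chain blocks together, and a bidirectional list kept in exact LRU order. The names (struct buf, cache_lookup), the bucket count, and the hash function are illustrative assumptions, not the code of any particular system.

    #include <stddef.h>

    #define NHASH     16        /* number of hash-table buckets (assumed) */
    #define BLOCKSIZE 4096      /* bytes per block (assumed)              */

    /* One cached block: it sits on a hash collision chain and on the
     * bidirectional LRU list at the same time. */
    struct buf {
        int dev;                 /* device number                         */
        long blockno;            /* block number on that device           */
        int dirty;               /* modified since it was read in?        */
        struct buf *hash_next;   /* collision chain within one bucket     */
        struct buf *prev, *next; /* bidirectional LRU list                */
        char data[BLOCKSIZE];
    };

    static struct buf *hash_table[NHASH];
    static struct buf *lru_front;    /* least recently used  */
    static struct buf *lru_rear;     /* most recently used   */

    /* Hash the device and disk address to pick a bucket. */
    static int hash(int dev, long blockno)
    {
        return (int)(((unsigned long)dev ^ (unsigned long)blockno) % NHASH);
    }

    /* Unlink a buffer from the LRU list. */
    static void lru_remove(struct buf *bp)
    {
        if (bp->prev) bp->prev->next = bp->next; else lru_front = bp->next;
        if (bp->next) bp->next->prev = bp->prev; else lru_rear = bp->prev;
        bp->prev = bp->next = NULL;
    }

    /* Put a buffer at the rear (most recently used end) of the LRU list. */
    static void lru_append(struct buf *bp)
    {
        bp->prev = lru_rear;
        bp->next = NULL;
        if (lru_rear) lru_rear->next = bp; else lru_front = bp;
        lru_rear = bp;
    }

    /* Look up a block.  On a hit, move it to the MRU end so exact LRU
     * order is maintained; on a miss, return NULL and let the caller
     * read the block from the disk and add it to the cache. */
    struct buf *cache_lookup(int dev, long blockno)
    {
        struct buf *bp;

        for (bp = hash_table[hash(dev, blockno)]; bp != NULL; bp = bp->hash_next) {
            if (bp->dev == dev && bp->blockno == blockno) {
                lru_remove(bp);   /* take it out of its current position */
                lru_append(bp);   /* and put it at the end (MRU)         */
                return bp;        /* hit: no disk access needed          */
            }
        }
        return NULL;              /* miss */
    }

Because every buffer sits on both structures at once, a lookup only walks one short collision chain, and a hit costs just two constant-time list operations to restore exact LRU order.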
Unfortunately, there is a catch. Now that we have a situation in which exact
LRU is possible, it turns out that LRU is undesirable. The problem has to do with
the crashes and file-system consistency discussed in the previous section.
If a criti-
cal block, such as an i-node block, is read into the cache and modified, but not
rewritten to the disk, a crash will leave the file system in an inconsistent state.
If
the i-node block is put at the end of the LRU chain, it may be quite a while before
it reaches the front and is rewritten to the disk.
Furthermore, some blocks, such as i-node blocks, are rarely referenced two
times within a short interval. These considerations lead to a modified LRU scheme,
taking two factors into account:
1. Is the block likely to be needed again soon?
2. Is the block essential to the consistency of the file system?
For both questions, blocks can be divided into categories such as i-node blocks,
indirect blocks, directory blocks, full data blocks, and partially full data blocks.
Blocks that will probably not be needed again soon go on the front, rather than the
rear of the LRU list, so their buffers will be reused quickly. Blocks that might be
needed again soon, such as a partly full block that is being written, go on the end
of the list, so they will stay around for a long time.
The second question is independent of the first one.
If the block is essential to
the file-system consistency (basically, everything except data blocks), and it has
been modified, it should be written to disk immediately, regardless of which end of
the LRU list it is put on.
By writing critical blocks quickly, we greatly reduce the
probability that a crash will wreck the file system. While a user may be unhappy if
one of his files is ruined in a crash, he is likely to be far more unhappy if the whole
file system is lost.
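As a sketch of how these two questions might translate into code, the routine below applies the policy just described when a block is released. It is hypothetical: the block categories and the helper functions (lru_put_front, lru_put_rear, write_block, buf_is_dirty) are assumed names, not part of any real file system.

    /* Block categories, one per kind of block mentioned in the text. */
    enum block_type {
        INODE_BLOCK, INDIRECT_BLOCK, DIRECTORY_BLOCK,
        FULL_DATA_BLOCK, PARTIAL_DATA_BLOCK
    };

    struct buf;                          /* a cache buffer, as in Fig. 4-28 */

    /* Hypothetical helpers assumed by this sketch. */
    int  buf_is_dirty(struct buf *bp);   /* modified since it was read in?  */
    void buf_mark_clean(struct buf *bp);
    void lru_put_front(struct buf *bp);  /* front = least recently used end */
    void lru_put_rear(struct buf *bp);   /* rear  = most recently used end  */
    void write_block(struct buf *bp);    /* synchronous write to the disk   */

    /* Release a block after use, applying the modified LRU policy. */
    void release_block(struct buf *bp, enum block_type type)
    {
        int critical = (type != FULL_DATA_BLOCK && type != PARTIAL_DATA_BLOCK);

        /* Question 2: a modified block that matters for consistency is
         * written to disk immediately, whichever end of the list it ends
         * up on, so a crash cannot wreck the file system. */
        if (critical && buf_is_dirty(bp)) {
            write_block(bp);
            buf_mark_clean(bp);
        }

        /* Question 1: blocks unlikely to be needed again soon go on the
         * front so their buffers are reused quickly; a partly full data
         * block that is still being written goes on the rear so it stays
         * around for a while. */
        if (type == PARTIAL_DATA_BLOCK)
            lru_put_rear(bp);
        else
            lru_put_front(bp);
    }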
Even with this measure to keep the file-system integrity intact, it is undesirable
to keep data blocks in the cache too long before writing them out. Consider the
plight of someone who is using a personal computer to write a book. Even if our
writer periodically tells the editor to write the file being edited to the disk, there is
a good chance that everything will still be in the cache and nothing on the disk.
If
the system crashes, the file-system structure will not be corrupted, but a whole
day’s work will be lost.
This situation need not happen very often before we have a fairly unhappy
user. Systems take two approaches to dealing with it.
The UNIX way is to have a system call, sync, which forces all the modified blocks out onto the disk immediately. When the system is started up, a program, usually called update, is started up in the background to sit in an endless loop issuing sync calls, sleeping for 30 sec between calls. As a result, no more than 30 seconds of work is lost due to a crash.
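The core of such an update daemon fits in a few lines of C. This is a minimal sketch assuming only the POSIX sync() and sleep() calls; it leaves out everything a real daemon does (detaching from the terminal, error handling, and so on).

    #include <unistd.h>

    int main(void)
    {
        for (;;) {
            sync();      /* force all modified blocks out onto the disk */
            sleep(30);   /* so at most ~30 seconds of work can be lost  */
        }
    }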
Although Windows now has a system call equivalent to sync, called FlushFileBuffers, in the past it did not. Instead, it had a different strategy that was in some ways better than the UNIX approach (and in some ways worse). What it did was to write every modified block to disk as soon as it was written to the cache.