Accessing Files
Different Ways to Access a File
- Canonical Mode (O_SYNC and O_DIRECT cleared)
- Synchronous Mode (O_SYNC flag set)
- Memory Mapping Mode
- Direct I/O Mode (O_DIRECT flag set, user space <-> disk)
- Asynchronous Mode
Reading a file is always page-based: the kernel always transfers whole pages of data at once.
Allocate a new page frame -> fill the page with the corresponding portion of the file -> add the page to the page cache -> copy the requested bytes to the process address space
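A hedged user-space demonstration of the page granularity described above: reading a single byte with read(2) still brings the whole containing page into the page cache. Residency is observed with mincore(2) on a MAP_SHARED mapping of the same file (the technique used by tools such as vmtouch). The file path is an assumption; any regular file at least one page long will do.

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/tmp/testfile";     /* hypothetical test file */
    long psz = sysconf(_SC_PAGESIZE);
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    char byte;
    if (pread(fd, &byte, 1, 0) != 1) { perror("pread"); return 1; }

    /* Map the first page and ask the kernel whether it is resident. */
    void *addr = mmap(NULL, psz, PROT_READ, MAP_SHARED, fd, 0);
    if (addr == MAP_FAILED) { perror("mmap"); return 1; }

    unsigned char vec;
    if (mincore(addr, psz, &vec) == 0)
        printf("first page resident in page cache: %s\n",
               (vec & 1) ? "yes" : "no");    /* expected: yes */

    munmap(addr, psz);
    close(fd);
    return 0;
}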
Writing to a file may involve disk space allocation because the file size may increase.
/**
* do_generic_file_read - generic file read routine
* @filp: the file to read
* @ppos: current file position
* @desc: read_descriptor
* @actor: read method
*
* This is a generic file read routine, and uses the
* mapping->a_ops->readpage() function for the actual low-level stuff.
*
* This is really ugly. But the goto's actually try to clarify some
* of the logic when it comes to error handling etc.
*/
static void do_generic_file_read(struct file *filp, loff_t *ppos,
		read_descriptor_t *desc, read_actor_t actor)
Many disk accesses are sequential: many adjacent sectors on disk are likely to be fetched when handling a series of a process's read requests on the same file.
Read-ahead consists of reading several adjacent pages of data of a regular file or block device file before they are actually requested. In most cases, this greatly improves the system performance, because it lets the disk controller handle fewer commands. In some cases, the kernel reduces or stops read-ahead when some random accesses to a file are performed.
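A minimal sketch of how a process can influence this behavior: on Linux, posix_fadvise(2) adjusts the per-file read-ahead state (POSIX_FADV_SEQUENTIAL typically enlarges the read-ahead window, POSIX_FADV_RANDOM disables read-ahead). The file name here is an assumption.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/var/log/syslog", O_RDONLY);   /* hypothetical input file */
    if (fd < 0) { perror("open"); return 1; }

    /* We intend to scan the whole file front to back: ask for aggressive
     * read-ahead over its entire length (offset 0, len 0 = whole file). */
    posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);

    char buf[4096];
    while (read(fd, buf, sizeof(buf)) > 0)
        ;   /* process the data here */

    /* If the access pattern were random (e.g. an index lookup), read-ahead
     * would only waste memory and disk bandwidth: */
    /* posix_fadvise(fd, 0, 0, POSIX_FADV_RANDOM); */

    close(fd);
    return 0;
}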
Natural language description -> design (data structures + algorithm) -> code
Description:
- Read-ahead may be gradually increased as long as the process keeps accessing the file sequentially.
- Read-ahead must be scaled down or even disabled when the current access is not sequential.
- Read-ahead should be stopped when the process keeps accessing the same page over and over again or when almost all the pages of the file are already in the page cache.
Design:
Current window: a contiguous portion of the file consisting of pages being requested by the process
Ahead window: a contiguous portion of the file following the pages in the current window (a sketch of the window-growth heuristic follows)
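A deliberately simplified user-space model of the heuristic just described; it is not the kernel's code. The window grows while accesses stay sequential and collapses on a random access. The sizes and the maximum (analogous to ra_pages) are assumptions for illustration.

#include <stdio.h>

#define RA_MAX 32   /* assumed maximum read-ahead window, in pages */

struct ra_state {
    long next;      /* first page the window expects to be asked for next */
    int  size;      /* current read-ahead window size, in pages */
};

static void on_read(struct ra_state *ra, long page)
{
    if (page == ra->next) {
        /* Sequential hit: grow the window (the kernel roughly doubles it). */
        ra->size = ra->size ? ra->size * 2 : 4;
        if (ra->size > RA_MAX)
            ra->size = RA_MAX;
    } else {
        /* Random access: scale the window down / disable read-ahead. */
        ra->size = 0;
    }
    ra->next = page + 1;
    printf("read page %3ld -> read-ahead window = %d pages\n", page, ra->size);
}

int main(void)
{
    struct ra_state ra = { .next = 0, .size = 0 };
    long seq[] = { 0, 1, 2, 3, 4, 100, 101, 7 };   /* mixed access pattern */
    for (unsigned i = 0; i < sizeof(seq) / sizeof(seq[0]); i++)
        on_read(&ra, seq[i]);
    return 0;
}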
/*
 * Track a single file's readahead state
 */
struct file_ra_state {
	pgoff_t start;			/* where readahead started */
	unsigned int size;		/* # of readahead pages */
	unsigned int async_size;	/* do asynchronous readahead when
					   there are only # of pages ahead */
	unsigned int ra_pages;		/* Maximum readahead window */
	unsigned int mmap_miss;		/* Cache miss stat for mmap accesses */
	loff_t prev_pos;		/* Cache last read() position */
};
struct file {
	…
	struct file_ra_state f_ra;
	…
};
When is the read-ahead algorithm executed?
1. Read pages of file data
2. Allocate a page for a file memory mapping
3. readahead(), posix_fadvise(), madvise() (see the sketch below)
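A hedged sketch of the third trigger: explicitly requesting read-ahead with readahead(2) and, for a memory mapping, with madvise(MADV_WILLNEED). readahead(2) is Linux-specific and requires _GNU_SOURCE; the file path is an assumption.

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/tmp/bigfile", O_RDONLY);     /* hypothetical file */
    if (fd < 0) { perror("open"); return 1; }

    /* Ask the kernel to populate the page cache with the first 1 MiB;
     * the call returns as soon as the read-ahead has been scheduled. */
    if (readahead(fd, 0, 1 << 20) != 0)
        perror("readahead");

    /* The same can be requested for a file memory mapping. */
    long psz = sysconf(_SC_PAGESIZE);
    void *addr = mmap(NULL, 16 * psz, PROT_READ, MAP_SHARED, fd, 0);
    if (addr != MAP_FAILED) {
        madvise(addr, 16 * psz, MADV_WILLNEED);  /* prefetch these pages */
        munmap(addr, 16 * psz);
    }

    close(fd);
    return 0;
}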
Deferred write
- Shared Memory Mapping
- Private Memory Mapping
System calls: mmap(), munmap(), msync()
mmap, munmap - map or unmap files or devices into memory
msync - synchronize a file with a memory map
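A minimal sketch of the three system calls above: map a file with MAP_SHARED, modify it through memory, flush the dirty page with msync(), and tear the mapping down with munmap(). The file name is an assumption and the file is assumed to be at least one page long.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    long psz = sysconf(_SC_PAGESIZE);
    int fd = open("/tmp/mapped.dat", O_RDWR);    /* hypothetical file */
    if (fd < 0) { perror("open"); return 1; }

    char *p = mmap(NULL, psz, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    memcpy(p, "hello", 5);            /* write through the mapping        */
    msync(p, psz, MS_SYNC);           /* force the dirty page out to disk */
    munmap(p, psz);                   /* remove the mapping               */

    close(fd);
    return 0;
}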
The kernel offers several hooks to customize the memory mapping mechanism for every different filesystem. The core of memory mapping implementation is delegated to a file object’s method named mmap. For disk-based filesystems and for block devices, this method is implemented by a generic function called generic_file_mmap().
Memory mapping mechanism depends on the demand paging mechanism.
For reasons of efficiency, page frames are not assigned to a memory mapping right after it has been created, but at the last possible moment, that is, when the process tries to address one of its pages, thus causing a Page Fault exception.
The remap_file_pages() system call is used to create a non-linear mapping, that is, a mapping in which the pages of the file are mapped into a non-sequential order in memory. The advantage of using remap_file_pages() over using repeated calls to mmap(2) is that the former approach does not require the kernel to create additional VMA (Virtual Memory Area) data structures.
To create a non-linear mapping we perform the following steps:
1. Use mmap(2) to create a mapping (which is initially linear). This mapping must be created with the MAP_SHARED flag.
2. Use one or more calls to remap_file_pages() to rearrange the correspondence between the pages of the mapping and the pages of the file. It is possible to map the same page of a file into multiple locations within the mapped region.
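A hedged sketch of the two steps above, loosely following the example in the remap_file_pages(2) man page. The call is deprecated (and emulated) in recent kernels; the file name and size are assumptions.

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    long psz = sysconf(_SC_PAGESIZE);
    int fd = open("/tmp/nonlinear.dat", O_RDWR);   /* assumed >= 3 pages */
    if (fd < 0) { perror("open"); return 1; }

    /* Step 1: an ordinary (linear) MAP_SHARED mapping of the first 3 pages. */
    char *p = mmap(NULL, 3 * psz, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* Step 2: rearrange the correspondence so that the mapping's first page
     * now shows page 2 of the file (prot must be 0, flags are unused). */
    if (remap_file_pages(p, psz, 0, 2, 0) != 0)
        perror("remap_file_pages");

    munmap(p, 3 * psz);
    close(fd);
    return 0;
}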
There's no substantial difference between:
1. Accessing a regular file through the filesystem
2. Accessing it by referencing its blocks on the underlying block device file
3. Establishing a file memory mapping
However, some highly sophisticated programs (self-caching applications such as high-performance servers) want full control of the I/O data transfer mechanism.
Linux offers a simple way to bypass the page cache: direct I/O transfer.
O_DIRECT
generic_file_direct_IO() -> __blockdev_direct_IO(); it does not return until all the direct I/O data transfers have been completed.
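A hedged sketch of a direct I/O read. O_DIRECT requires _GNU_SOURCE and imposes alignment constraints: the user buffer, the file offset and the transfer size must be aligned to the device's logical block size (4096 bytes is assumed here). The file name is an assumption.

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define ALIGN 4096   /* assumed alignment and transfer size */

int main(void)
{
    int fd = open("/tmp/rawdata.bin", O_RDONLY | O_DIRECT);  /* hypothetical */
    if (fd < 0) { perror("open"); return 1; }

    void *buf;
    if (posix_memalign(&buf, ALIGN, ALIGN) != 0) { close(fd); return 1; }

    /* The data moves directly between the disk and this user-space buffer,
     * bypassing the page cache. */
    ssize_t n = pread(fd, buf, ALIGN, 0);
    if (n < 0)
        perror("pread");
    else
        printf("read %zd bytes with direct I/O\n", n);

    free(buf);
    close(fd);
    return 0;
}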
“Asynchronous” essentially means that when a User Mode process invokes a library function to read or write a file, the function terminates as soon as the read or write operation has been enqueued, possibly even before the real I/O data transfer takes place. The calling process can thus continue its execution while the data is being transferred.
aio_read(3), aio_cancel(3), aio_error(3), aio_fsync(3), aio_return(3), aio_suspend(3), aio_write(3)
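A minimal sketch using the POSIX AIO interface listed above (glibc's user-level implementation; link with -lrt on older systems). The file name is an assumption.

#include <aio.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/etc/hostname", O_RDONLY);   /* hypothetical input file */
    if (fd < 0) { perror("open"); return 1; }

    char buf[256];
    struct aiocb cb;
    memset(&cb, 0, sizeof(cb));
    cb.aio_fildes = fd;
    cb.aio_buf    = buf;
    cb.aio_nbytes = sizeof(buf);
    cb.aio_offset = 0;

    /* aio_read() returns as soon as the request is enqueued ... */
    if (aio_read(&cb) != 0) { perror("aio_read"); return 1; }

    /* ... so the process can do other work here, then wait for completion. */
    const struct aiocb *list[1] = { &cb };
    aio_suspend(list, 1, NULL);

    if (aio_error(&cb) == 0)
        printf("read %zd bytes asynchronously\n", aio_return(&cb));

    close(fd);
    return 0;
}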
Asynchronous I/O Implementation
- User-level Implementation
- Kernel-level Implementation
User-level Implementation:
Clone the current process -> the child process issues synchronous I/O requests -> the aio_*() call returns immediately in the parent process
Kernel-level Implementation:
io_setup(2), io_cancel(2), io_destroy(2), io_getevents(2), io_submit(2)
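A hedged sketch of the kernel-level interface using the libaio wrappers around the system calls listed above (assumes the libaio package is installed; compile with -laio). The raw syscalls could be used directly instead. The file name is an assumption; on most filesystems kernel AIO is only truly asynchronous for files opened with O_DIRECT.

#define _GNU_SOURCE
#include <fcntl.h>
#include <libaio.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/tmp/aio.bin", O_RDONLY | O_DIRECT);  /* hypothetical */
    if (fd < 0) { perror("open"); return 1; }

    void *buf;
    if (posix_memalign(&buf, 4096, 4096) != 0) return 1;

    io_context_t ctx = 0;
    if (io_setup(8, &ctx) < 0) { fprintf(stderr, "io_setup failed\n"); return 1; }

    struct iocb cb, *cbs[1] = { &cb };
    io_prep_pread(&cb, fd, buf, 4096, 0);   /* describe one async read */

    if (io_submit(ctx, 1, cbs) != 1) {      /* enqueue; returns immediately */
        fprintf(stderr, "io_submit failed\n");
        return 1;
    }

    struct io_event ev;
    io_getevents(ctx, 1, 1, &ev, NULL);     /* wait for the completion */
    printf("async read completed, res = %ld\n", (long)ev.res);

    io_destroy(ctx);
    free(buf);
    close(fd);
    return 0;
}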