In the world of hardware and software, IO is everywhere.
A computer can be thought of as a set of IO devices connected by a bus; even the bus itself can be considered an IO device. These devices all work the same basic way: input data -> process data -> output data. They differ only in how they process the data.
The same is true of software: data may be static (e.g. constants), come from local disk, or come from a remote process (e.g. over the internet). All software does is process data, and the purpose of programming is to tell the computer what to do with that data.
When the CPU sends an I/O request to a drive, the drive's own dedicated chip, called a device (or hardware) controller, handles the command, so the drive can perform very complex operations without help from the main CPU. While the drive's controller is busy executing the request, the CPU is free to do other work, and the controller can read and write system RAM directly through direct memory access (DMA). When the drive completes the request and the relevant data has been loaded into RAM via DMA, an interrupt is issued to notify the CPU that the data is now in RAM.
IO is generally divided into synchronous IO and asynchronous IO, as defined by POSIX:
- A synchronous I/O operation causes the requesting process to be blocked until that I/O operation completes.
- An asynchronous I/O operation does not cause the requesting process to be blocked.
Synchronous IO includes blocking IO, non-blocking IO, IO multiplexing, and signal-driven IO; together with asynchronous IO, that makes five IO models in total.
When an IO occurs in a program, it goes through two phases:
- Waiting for the data to be ready
- Copying the data from the kernel to the process
The difference between these IO models lies in how each one behaves during these two phases.
Blocking I/O Model
When the user process makes a system call, the kernel starts the first phase of IO: preparing the data. For network IO, the data often has not arrived yet (e.g. a complete UDP packet has not been received), so the kernel must wait for enough data, and the user process blocks during this wait. When the data is ready, the kernel copies it from kernel space into user memory and returns the result; only then is the user process unblocked and running again.
The characteristic of blocking IO is that both phases of IO execution (waiting for data and copying data) are blocked.
While blocked, the thread cannot perform any other operation or respond to any other network request. The usual workaround is to use multiple threads (or multiple processes) on the server side, but responding to hundreds or thousands of connections at once this way consumes serious system resources, so thread or process pools are used instead. A pool always has an upper limit, though, and once requests exceed that limit the system responds to the outside world little better than it would without the pool, so the pool size must be tuned to the load it is expected to face.
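As a minimal sketch of the blocking model (Python here; the socketpair and the 0.2 s delay are illustrative stand-ins for a network peer), a blocking recv keeps the calling thread suspended in the kernel through both phases until the data has been delivered:

```python
import socket
import threading
import time

a, b = socket.socketpair()   # a connected pair; a.recv() will block until b sends

def writer():
    time.sleep(0.2)          # simulate data that is not ready yet (phase 1: waiting)
    b.sendall(b"ping")

threading.Thread(target=writer).start()

start = time.monotonic()
data = a.recv(4)             # blocks through both phases: waiting + copy to user memory
elapsed = time.monotonic() - start
```

The thread spends roughly the whole delay inside the recv call, doing nothing else; that idle time is exactly what the later models try to reclaim.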
Non-blocking I/O Model
When the user thread issues a read operation and the data in the kernel is not ready, the kernel does not block the user process but immediately returns an error; from the user process's point of view, a read does not wait, it gets a result at once. Once the data is ready in the kernel and the user process makes another system call, the kernel copies the data to user memory and returns.
In non-blocking IO, "non-blocking" refers only to the first phase, waiting for data; during the second phase, when the kernel's data is ready and is being copied into user memory, the thread is still blocked.
Non-blocking IO therefore requires the thread to check actively: when the data is ready, the process must make another system call to copy the data into user memory.
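A minimal sketch of that active checking (Python; the socketpair stands in for a network connection, and the send inside the retry loop stands in for data eventually arriving): the read returns immediately, and the EWOULDBLOCK-style error surfaces as BlockingIOError:

```python
import socket

a, b = socket.socketpair()
a.setblocking(False)         # same effect as setting O_NONBLOCK via fcntl

attempts = 0
data = None
while data is None:
    try:
        data = a.recv(4)     # returns at once: either data or an error, never a wait
    except BlockingIOError:  # the kernel's data is not ready yet (EAGAIN/EWOULDBLOCK)
        attempts += 1        # in a real program the thread would do other work here
        b.sendall(b"ping")   # make data arrive so the next check succeeds
```

The copy of the data into user memory still happens inside the successful recv call; only the waiting phase was made non-blocking.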
I/O Multiplexing Model
IO multiplexing, also known as event-driven IO, lets one thread monitor multiple IO streams: the application is notified when any of them becomes ready for reading or writing, and it blocks and surrenders the CPU when no file handle is ready. In "multiplexing", the "multi" refers to the many network connections and the "plexing" (reuse) to the single thread serving them. The concept comes from the field of communications, where it means transmitting multiple signals over a single channel; in a computer it means using one thread to monitor the readiness of multiple descriptors.
IO multiplexing is generally implemented in one of four ways: select, poll, epoll (Linux), or kqueue (BSD).
For select, the fd set is a bitmap, the maximum number of connections is 1024, the whole fd set is copied into the kernel on every select call, and readiness is found by scanning in O(n).
For poll, the fd set is an array of pollfd structures, there is no hard connection limit, but the array is still copied on every poll call and scanned in O(n).
For epoll, the kernel keeps the fds in a red-black tree, there is no hard connection limit, each fd is copied only once (on its first epoll_ctl call, with no copy on each epoll_wait), and ready fds are delivered via callbacks in O(1).
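A minimal select-based sketch (Python; two pipes stand in for two network connections): one thread watches both descriptors, and select reports only the one that is ready.

```python
import os
import select

r1, w1 = os.pipe()           # connection 1: no data will arrive
r2, w2 = os.pipe()           # connection 2: data already pending
os.write(w2, b"hi")

# select blocks (up to the 1 s timeout) until at least one fd is readable;
# internally the kernel scans the whole set, hence the O(n) cost of select.
readable, _, _ = select.select([r1, r2], [], [], 1.0)
data = os.read(readable[0], 2)   # only r2 is ready, so this reads from r2
```

The same single-thread-many-descriptors pattern applies unchanged to poll and epoll; only the registration API and scaling behavior differ.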
epoll has two trigger modes, EPOLLLT and EPOLLET; LT (level-triggered) is the default mode and ET (edge-triggered) is the high-speed mode.
In LT mode, as long as the fd still has unread data, every call to epoll_wait returns its event, reminding the user program to handle it.
In ET mode, the event is reported only once and is not reported again until new data arrives, regardless of whether unread data remains in the fd.
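The LT/ET difference can be observed directly (a Linux-only sketch in Python; the pipe and 5-byte payload are illustrative). Data is written but deliberately never read, then epoll_wait is called twice in each mode:

```python
import os
import select

r, w = os.pipe()
os.set_blocking(r, False)
os.write(w, b"hello")        # pending data, never consumed during the demo

# Level-triggered (default): the fd is reported on every wait while data remains.
ep = select.epoll()
ep.register(r, select.EPOLLIN)
lt_first = ep.poll(0)        # reported
lt_second = ep.poll(0)       # reported again: data is still unread
ep.close()

# Edge-triggered: reported once, then silent until *new* data arrives.
ep = select.epoll()
ep.register(r, select.EPOLLIN | select.EPOLLET)
et_first = ep.poll(0)        # one report for the pending data
et_second = ep.poll(0)       # nothing: no new write since the last report
os.write(w, b"!")            # new data -> a new edge
et_third = ep.poll(0)        # reported again
ep.close()
```

This is why ET mode is normally paired with non-blocking fds that are drained completely on each event: any data left behind generates no further reminders.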
Signal-Driven I/O Model
The process calls sigaction to install a handler; when the IO data is ready in the kernel, the kernel notifies the requesting process with a SIGIO signal. The process then reads the data from the kernel into user space, and this copy step is still blocking.
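A Linux-only sketch of this model in Python (a pipe stands in for the IO source; Python's signal module wraps sigaction, and O_ASYNC plus F_SETOWN arrange SIGIO delivery): the process is free until the kernel signals that data is ready, then performs the (blocking) copy itself.

```python
import fcntl
import os
import signal
import time

received = []
signal.signal(signal.SIGIO, lambda signum, frame: received.append(signum))

r, w = os.pipe()
# Direct SIGIO for this descriptor to our process, then enable O_ASYNC on it.
fcntl.fcntl(r, fcntl.F_SETOWN, os.getpid())
flags = fcntl.fcntl(r, fcntl.F_GETFL)
fcntl.fcntl(r, fcntl.F_SETFL, flags | os.O_ASYNC)

os.write(w, b"x")            # data becomes ready -> the kernel raises SIGIO
while not received:          # the process could do other work here instead
    time.sleep(0.01)
data = os.read(r, 1)         # the copy into user space still blocks (briefly)
```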
Asynchronous I/O Model
No part of the IO operation blocks: the requesting thread is notified only once the IO operation is complete, so the user thread neither needs to check the status of the operation nor actively copy the data.
Asynchronous IO is typically provided by AIO on Linux and IOCP on Windows.
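POSIX AIO and IOCP are not wrapped by Python's standard library, but asyncio illustrates the same model at the application level (a sketch; the socketpair and 4-byte read are illustrative): the read is issued up front, the thread is free to do other work, and the awaiter is resumed only when the data has already been delivered into user memory.

```python
import asyncio
import socket

async def main():
    a, b = socket.socketpair()
    a.setblocking(False)               # sock_recv requires a non-blocking socket
    loop = asyncio.get_running_loop()

    # Issue the read request; it completes in the background.
    pending = asyncio.ensure_future(loop.sock_recv(a, 4))
    await asyncio.sleep(0)             # yield: the thread does other work here
    b.sendall(b"ping")                 # the "device" delivers the data
    return await pending               # resumed only once the data is available

data = asyncio.run(main())
```

Unlike signal-driven IO, the caller here never performs the copy step itself: by the time the await resumes, the bytes are already in its hands.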