
Day 11

Created Wednesday 13 May 2020

Welcome to the final lecture

We did it everyone! Congrats! Today is the last lecture. GLHF with the final.

Final Notes

10 questions on the test. As per the midterm, the questions will be pulled from the HW. You're welcome to bring a 2-page cheat sheet. 2 hours and 30 minutes, beginning at 6:00. The final will be in the assignments section. The absolute deadline is 8:30! The exam only covers the material from the midterm onwards. He says we might not need MARS on hand. I suggest just doing all the HW questions again. Take pics of handwritten work.

Chapter 7: Input/Output Systems

Understand how I/O systems work, including I/O methods and architectures.
Become familiar with storage media, and the differences in their respective formats.
Understand how RAID improves disk performance and reliability, and which RAID systems are most useful today.
Be familiar with emerging data storage technologies and the barriers that remain to be overcome.

Data storage and retrieval is one of the primary functions of computer systems.
One could say that computers are more useful to us as data storage and retrieval devices than they are as computational machines.
All computers have I/O devices connected to them.
To achieve good performance, I/O should be kept to a minimum!

Sluggish I/O throughput can have a ripple effect, dragging down overall system performance.
This is especially true when virtual memory is involved.
The fastest processor in the world is of little use if it spends most of its time waiting for data!

Amdahl's Law

The overall performance of a system is a result of the interaction of all of its components.
System performance is most effectively improved when the performance of the most heavily used components is improved.
This idea is quantified by Amdahl's Law!

S = 1 / ( (1 - f) + (f / k))
Where S is the overall speedup;
f is the fraction of work performed by the faster component;
and k is the speedup of the faster component.
Calculating k is initially not intuitive!
The only possibility of a speedup is if k is greater than 1 (f is a fraction of the work, so it can't exceed 1).
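
Here's a minimal sketch of the formula in Python (the function name speedup is mine, not from the lecture):

    def speedup(f, k):
        """Amdahl's Law: overall speedup S when a fraction f of the work
        runs on a component that is made k times faster."""
        return 1.0 / ((1.0 - f) + (f / k))

    print(speedup(0.5, 2.0))   # 1.333... -> doubling half the work gives only a 33% overall speedup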

On a large system, suppose we can upgrade the CPU for $10,000 to make it 50% faster, or upgrade the disk drives for $7,000 to make them 150% faster.

Processes spend 70% of their time running in the CPU and 30% of their time waiting for disk service.
An upgrade of which component would offer the greater benefit for the lesser cost?
Between the two, the CPU is used the most, whereas we hardly touch the disk drive. Therefore, we can try to calculate which upgrade is better.

Calculating K - The Speedup

The new CPU is 50% faster than the old CPU.
Restated: the new CPU is 1.5 times as fast as the old CPU.
So, in this case k = 1.5.

The new disk drive is 150% faster than the old one.
Restated: the new disk drive is 2.5 times as fast as the old one.
So, in this case k = 2.5.

Using the formula above, the processor option offers a 30% speedup.
So, each 1% of improvement for the processor costs $333 ($10,000 / 30).

The disk drive option gives a 22% speedup.
So, each 1% of improvement for the disk costs $318 ($7,000 / 22).

On the numbers alone, the disk drive is the better deal for the price!
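
The same arithmetic in Python, reusing the speedup sketch from above (the dollar figures are the ones from the example, and the rounding matches the lecture):

    def speedup(f, k):
        return 1.0 / ((1.0 - f) + (f / k))

    # CPU option: 70% of time in the CPU, upgrade makes it 1.5x as fast, costs $10,000
    pct_cpu = round((speedup(0.70, 1.5) - 1) * 100)    # 30 (% overall speedup)
    # Disk option: 30% of time waiting on disk, upgrade is 2.5x as fast, costs $7,000
    pct_disk = round((speedup(0.30, 2.5) - 1) * 100)   # 22 (% overall speedup)

    print(10_000 / pct_cpu)    # ~$333 per 1% of improvement
    print(7_000 / pct_disk)    # ~$318 per 1% of improvement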

There are other factors to consider, our professor says: the actual CPU and disk usage can vary over time, reliability is an issue, etc.

Architectures

We define input/output as a subsystem of components that moves coded data between external devices and a host system.

I/O subsystems include:

I/O can be controlled in five general ways:

In a system that uses interrupts, the status of the interrupt signal is checked at the top of the fetch-decode-execute cycle.
The particular code that is executed whenever an interrupt occurs is determined by a set of addresses called interrupt vectors that are stored in low memory.
The system state is saved before the interrupt service routine is executed and is restored afterward.
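
To make the order of operations concrete, here's a toy fetch-decode-execute loop in Python; every name in it (run, keyboard_handler, the one-instruction "ISA") is made up for illustration and isn't anything from MARS or the textbook:

    def keyboard_handler(state):
        print("servicing keyboard interrupt")

    interrupt_vectors = {1: keyboard_handler}    # "low memory" table: IRQ number -> routine

    def run(program, pending_irqs):
        state = {"pc": 0, "acc": 0}
        while state["pc"] < len(program):
            if pending_irqs:                     # interrupt signal checked at top of cycle
                irq = pending_irqs.pop(0)
                saved = dict(state)              # save the system state
                interrupt_vectors[irq](state)    # jump through the interrupt vector
                state = saved                    # restore state after the service routine
            op, arg = program[state["pc"]]       # fetch
            state["pc"] += 1
            if op == "add":                      # decode and execute
                state["acc"] += arg
        return state

    print(run([("add", 2), ("add", 3)], [1]))    # {'pc': 2, 'acc': 5}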

Memory-Mapped I/O

In memory-mapped I/O, devices and main memory share the same address space.
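
As a rough sketch of what "sharing the address space" means, here's a toy store routine in Python where one made-up address (CONSOLE_OUT) belongs to a device instead of RAM; the addresses and sizes are invented:

    RAM_SIZE = 0x100
    CONSOLE_OUT = 0x1F0                     # pretend the console's data register lives here

    ram = bytearray(RAM_SIZE)

    def store_byte(addr, value):
        if addr == CONSOLE_OUT:             # same "store" operation, but this address
            print(chr(value), end="")       # is wired to a device, not to memory
        else:
            ram[addr] = value

    for ch in "hi\n":
        store_byte(CONSOLE_OUT, ord(ch))    # writing "to memory" drives the device
    store_byte(0x10, 42)                    # an ordinary RAM write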

In small systems, the low-level details of the data transfers are offloaded to the I/O controllers built into the I/O devices.

Direct Memory Access

DMA and the CPU share the same bus.
The DMA runs at a higher priority and steals memory cycles from the CPU!
Cycle stealing is a method of accessing memory or the bus without seriously interfering with the CPU: the DMA controller grabs individual bus cycles, which lets I/O controllers read or write RAM without CPU intervention.

Channel I/O

Very large systems employ channel I/O.
Channel I/O consists of one or more I/O processors (IOPs) that control various channel paths.
Slower devices such as terminals and printers are combined (multiplexed) into a single faster channel.
On IBM mainframes, multiplexed channels are called multiplexor channels, the faster ones are called selector channels.

Channel I/O is distinguished from DMA by the intelligence of the IOPs.
The IOP negotiates protocols, issues device commands, translates storage coding to memory coding, and can transfer entire files or groups of files independent of the host CPU.
The host has only to create the program instructions for the I/O operation and tell the IOP where to find them.

Character and Block I/O

Character I/O devices process one byte (or character) at a time.

Block I/O devices handle bytes in groups.

I/O Busses

I/O buses, unlike memory buses, operate asynchronously. Requests for bus access must be arbitrated among the devices involved.
Bus control lines activate the devices when they are needed, raise signals when errors have occurred, and reset devices when necessary.
The number of data lines is the width of the bus.
A bus clock coordinates activities and provides bit cell boundaries.

Serial vs. Parallel Data Transmission

Bytes can be conveyed from one point to another by sending all of their bits at once (parallel transmission) or one bit at a time (serial transmission).

In parallel data transmission, the interface requires one conductor for each bit.
Parallel cables are fatter than serial cables.
Compared with parallel data interfaces, serial communications interfaces:

Serial communications interfaces are suitable for time-sensitive (isochronous) data such as voice and video.

Disk Technology

Magnetic disks offer large amounts of durable storage that can be accessed quickly.
Disk drives are called random (or direct) access storage devices, because blocks of data can be accessed according to their location on the disk.

Magnetic disk organization is shown on the following slide.

See the hard drive in action:
https://www.youtube.com/watch?v=NtPc0jI21i0

Rigid Disk Drives

Disk tracks are numbered from the outside edge, starting with zero.
Hard disk platters are mounted on spindles.
Read/write heads are mounted on a comb that swings radially to read the disk.
The rotating disk forms a logical cylinder beneath the read/write heads.
Data blocks are addressed by their cylinder, surface, and sector.
There are a number of electromechanical properties of hard disk drives that determine how fast their data can be accessed.
Seek time is the time that it takes for a disk arm to move into position over the desired cylinder.
Rotational delay is the time that it takes for the desired sector to move into position beneath the read/write head.
Seek time + rotational delay = access time!
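
A quick worked example with made-up but typical numbers (a 7200 RPM drive and an 8 ms average seek; neither figure is from the lecture):

    rpm = 7200
    avg_seek_ms = 8.0

    ms_per_rotation = 60_000 / rpm               # ~8.33 ms per full rotation
    avg_rotational_delay = ms_per_rotation / 2   # on average we wait half a turn: ~4.17 ms

    access_time_ms = avg_seek_ms + avg_rotational_delay
    print(f"{access_time_ms:.2f} ms")            # ~12.17 ms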

Low cost is the major advantage of hard disks.
But their limitations include:

Reductions in memory cost enable the widespread adoption of solid state drives (SSDs).

Solid State Drives

SSD access time and transfer rates are typically 100 times faster than magnetic disk, but slower than onboard RAM by a factor of 100,000.

Unlike RAM, flash is block-addressable (like disk drives).
The duty cycle of flash is between 30,000 and 1,000,000 updates to a block.
Updates are spread over the entire medium through wear leveling to prolong the life of the SSD.
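
Here's a toy sketch of the wear-leveling idea; the "always write to the least-worn block" policy and the block count are simplifications I picked, not how a real SSD controller works:

    NUM_BLOCKS = 8
    erase_counts = [0] * NUM_BLOCKS

    def write_block(data):
        # Pick the block that has been updated the fewest times so far.
        target = min(range(NUM_BLOCKS), key=lambda b: erase_counts[b])
        erase_counts[target] += 1
        return target                          # where the data actually landed

    for i in range(20):
        write_block(f"payload {i}")
    print(erase_counts)                        # [3, 3, 3, 3, 2, 2, 2, 2] -- wear stays even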

The Joint Electron Devices Engineering Council (JEDEC) sets standards for SSD performance and reliability metrics. The most important are:
Unrecoverable Bit Error Ratio (UBER) measures disk reliability.
UBER is calculated by dividing the number of data errors by the number of bits read using a simulated lifetime workload.
TeraBytes Written (TBW) measures disk endurance (or service life).
TBW is the number of terabytes that can be written to the disk before the disk fails to meet specifications for speed and error rates.
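
Since UBER is just errors divided by bits read, the calculation itself is trivial; a toy example with invented numbers:

    bits_read = 120 * 10**15        # pretend the lifetime workload read 120 petabits
    data_errors = 3                 # unrecoverable errors observed during that workload

    uber = data_errors / bits_read
    print(f"UBER = {uber:.1e}")     # UBER = 2.5e-17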

Optical Disks

Optical disks provide large storage capacities very inexpensively.
They come in a number of varieties including CD-ROM, DVD, and WORM.
Many large computer installations produce document output on optical disk rather than on paper. This idea is called COLD — Computer Output Laser Disk.
It is estimated that optical disks can endure for a hundred years. Other media are good for only a decade — at best.