Polling: A technique in which the CPU repeatedly checks the status of a peripheral device at regular intervals rather than waiting to be notified by an interrupt.
Interrupts: Signals sent from hardware or software to the CPU to temporarily suspend the current operation and handle a specific event or condition.
Interrupt Vector: A memory address that points to the location of the interrupt service routine to be executed when an interrupt occurs.
Polling Loop: A programming construct that continuously checks for a specific condition or event until it becomes true.
Interrupt Request (IRQ): A signal sent by a hardware device to request the CPU's attention, causing an interrupt to be processed.
Interrupt Latency: The time delay between the occurrence of an interrupt and the execution of the corresponding interrupt service routine by the CPU.
Vectored Interrupts: Interrupts that provide additional information to the CPU about the source, priority, and location of the interrupt service routine to be executed.
Race Condition: A situation where the execution of multiple threads or processes in a multitasking system is not synchronized properly, leading to unpredictable outcomes.
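To make polling concrete, here is a minimal C sketch of a polling loop; the flag device_ready and the function wait_until_device_ready are made-up names standing in for a real device status bit, which hardware or an interrupt service routine would normally set:

    #include <stdbool.h>

    /* Hypothetical flag standing in for a memory-mapped device status bit;
       real hardware or an interrupt service routine would set it when the
       device has data ready. */
    volatile bool device_ready = false;

    /* Polling loop: the CPU repeatedly checks the status itself instead of
       being notified by an interrupt, burning cycles the whole time it waits. */
    void wait_until_device_ready(void)
    {
        while (!device_ready) {
            /* busy-wait: keep checking until the condition becomes true */
        }
    }

Unlike interrupt-driven I/O, the CPU is occupied for the entire wait, which is the usual trade-off between polling and interrupts.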

 

Deadlock: A situation where two or more competing actions are each waiting for the other to finish, preventing any of them from completing.
Concurrency: The ability of different parts of a program to be executed out of order or in partial order without affecting the final outcome.
Thread: The smallest unit of execution within a process.
First Come First Served Scheduling: A scheduling policy where tasks are executed based on their arrival order, prioritizing tasks that arrive first.
Scheduling: The process of determining the order in which tasks are executed by a computer system.
Policy: A set of rules or guidelines that dictate how a particular task or process should be carried out.
Arrival Order: The sequence in which tasks or processes arrive at the system for execution.
Execution Order: The sequence in which tasks or processes are actually executed by the system.
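As a small illustration of First Come First Served scheduling, the C sketch below (with made-up task names and arrival times) sorts tasks by arrival time, so the execution order simply follows the arrival order:

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical tasks with made-up arrival times (time units are arbitrary). */
    struct task { const char *name; int arrival; };

    static int by_arrival(const void *a, const void *b)
    {
        return ((const struct task *)a)->arrival - ((const struct task *)b)->arrival;
    }

    int main(void)
    {
        struct task tasks[] = { {"C", 4}, {"A", 0}, {"B", 2} };
        size_t n = sizeof tasks / sizeof tasks[0];

        /* First Come First Served: sorting by arrival time gives the execution order. */
        qsort(tasks, n, sizeof tasks[0], by_arrival);

        for (size_t i = 0; i < n; i++)
            printf("%zu: task %s (arrived at t=%d)\n", i + 1, tasks[i].name, tasks[i].arrival);
        return 0;
    }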

 

Preemptive: A type of scheduling where a task can be interrupted and moved out of the CPU before it has completed its execution.
Non-Preemptive: A type of scheduling where a task cannot be interrupted and must complete its execution before another task can be executed.
Task: A unit of work or activity that needs to be executed by a computer system.
Process: An instance of a running computer program that includes its current state and execution information.
Arrival Time: The time at which a process enters the system and is ready to be executed by the CPU.
Execution Time: The amount of time a process takes to complete its execution.
Turnaround Time: The total time taken by a process from arriving in the system to its completion.
Waiting Time: The total time a process spends waiting in the ready queue before being executed.
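A worked sketch, using made-up arrival and burst times under non-preemptive first-come-first-served execution, showing how the timing terms relate: turnaround time = completion time - arrival time, and waiting time = turnaround time - execution (burst) time:

    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical processes, already ordered by arrival time. */
        int arrival[] = {0, 2, 4};
        int burst[]   = {5, 3, 1};   /* execution (burst) time */
        int n = 3, clock = 0;

        for (int i = 0; i < n; i++) {
            if (clock < arrival[i])      /* CPU idles until the process arrives */
                clock = arrival[i];
            clock += burst[i];           /* non-preemptive: runs to completion  */
            int turnaround = clock - arrival[i];
            int waiting    = turnaround - burst[i];
            printf("P%d: completion=%d turnaround=%d waiting=%d\n",
                   i + 1, clock, turnaround, waiting);
        }
        return 0;
    }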

 

Context Switching: The process of saving and restoring the state of a process when it is interrupted so that another process can execute.
Starvation: A situation where a process is denied CPU time due to the presence of higher-priority processes.
Preemption: The act of temporarily suspending a process's execution to allow another process to run.
Burst Time: The amount of time a process requires to complete its execution without any interruption.
Round Robin Scheduling: A scheduling algorithm where each process is assigned a fixed time slice to execute before being moved to the back of the ready queue.
Backfilling: A scheduling technique where a waiting job is allowed to start early if resources become available before its designated start time.
Round Robin Process Scheduling: A scheduling algorithm where each process is assigned a fixed time unit or quantum to execute before moving on to the next process in a circular manner.
Time Quantum: The fixed time unit allocated to each process in a Round Robin scheduling algorithm.
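The following C sketch simulates Round Robin scheduling with made-up burst times and a time quantum of 2 units: each process runs for at most one quantum, is preempted, and waits for its next turn until its remaining burst time reaches zero (arrival times and context-switch overhead are ignored for simplicity):

    #include <stdio.h>

    int main(void)
    {
        int remaining[] = {5, 3, 1};        /* hypothetical burst times       */
        int n = 3, quantum = 2, clock = 0, left = 3;

        /* Cycle through the processes, giving each at most one quantum per turn. */
        while (left > 0) {
            for (int i = 0; i < n; i++) {
                if (remaining[i] == 0)
                    continue;               /* already finished               */
                int slice = remaining[i] < quantum ? remaining[i] : quantum;
                printf("t=%2d: P%d runs for %d\n", clock, i + 1, slice);
                clock += slice;
                remaining[i] -= slice;
                if (remaining[i] == 0)
                    left--;                 /* process finished               */
            }
        }
        return 0;
    }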

 

Scheduling Algorithm: A method used to determine the order in which processes are executed by the CPU based on criteria such as priority, fairness, and efficiency.
Processor Pipelining: A technique in computer architecture that allows multiple instruction stages to be overlapped in order to improve efficiency and performance.
Instruction Pipeline: A series of stages through which instructions pass in a processor, with each stage carrying out a specific operation.
Pipeline Hazard: A condition in processor pipelining where the next instruction cannot execute in the next stage due to a dependency or conflict.
Data Hazard: A type of pipeline hazard where a later instruction depends on the result of an earlier instruction that has not yet completed.
Structural Hazard: A condition in processor pipelining where the hardware is unable to support the overlapping of certain stages.
Pipeline Flush: The process of discarding all instructions in a pipeline due to a misprediction or hazard, and restarting the pipeline.
Hazard: A condition that prevents the next instruction in a sequence from executing during its designated clock cycle.
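To ground the hazard terms, here is a small C sketch with a made-up three-operand instruction encoding that detects a read-after-write data hazard between adjacent instructions, the case where a later instruction reads a register the immediately preceding instruction writes:

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical instruction: writes register dst, reads registers src1 and src2. */
    struct instr { int dst, src1, src2; };

    /* True if the later instruction reads the register the earlier one writes:
       a read-after-write dependency that stalls a simple pipeline (or requires
       forwarding) because the earlier result is not yet available. */
    static bool data_hazard(struct instr earlier, struct instr later)
    {
        return later.src1 == earlier.dst || later.src2 == earlier.dst;
    }

    int main(void)
    {
        struct instr add = { .dst = 1, .src1 = 2, .src2 = 3 };  /* r1 = r2 + r3 */
        struct instr sub = { .dst = 4, .src1 = 1, .src2 = 5 };  /* r4 = r1 - r5 */
        printf("hazard between the two instructions: %s\n",
               data_hazard(add, sub) ? "yes" : "no");
        return 0;
    }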

 

Clock Speed: The speed at which a processor can execute instructions, measured in gigahertz (GHz).
Instructions Per Cycle (IPC): The number of instructions a processor can execute in one clock cycle.
Cache Memory: A small, high-speed memory storage unit that temporarily holds frequently accessed data and instructions for faster processing.
Benchmarking: The process of comparing the performance of a processor against standard reference points or other processors.
Thermal Design Power (TDP): The maximum amount of heat generated by a processor that the cooling system is designed to handle.
Overclocking: The practice of increasing a processor's clock speed beyond the manufacturer's specifications to achieve higher performance.
Response Time: The time it takes for a processor to respond to a command or input.
Throughput: The amount of data or instructions processed by a processor in a given amount of time.
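An arithmetic sketch with made-up figures tying these metrics together: given a clock speed and an IPC value, the response (execution) time of a program follows as instructions / (IPC x clock rate), and throughput is the work completed per unit time:

    #include <stdio.h>

    int main(void)
    {
        /* Made-up workload and processor figures, purely illustrative. */
        double instructions = 2.0e9;     /* instructions in the program         */
        double ipc          = 1.5;       /* instructions per cycle              */
        double clock_hz     = 3.0e9;     /* clock speed: 3 GHz                  */

        double cycles     = instructions / ipc;
        double seconds    = cycles / clock_hz;           /* response time       */
        double throughput = instructions / seconds;      /* instructions/second */

        printf("response time: %.3f s\n", seconds);
        printf("throughput:    %.2e instructions/s\n", throughput);
        return 0;
    }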

 

FLOPS (Floating-Point Operations Per Second): A measure of a processor's floating-point performance.
Memory Bandwidth: The rate at which data can be read from or written to the computer's memory, affecting overall processor performance.
Hyper-Threading: A technology that allows a single processor core to execute multiple threads simultaneously, improving efficiency.
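Finally, a sketch with assumed (not measured) figures relating FLOPS and memory bandwidth for a simple vector update y[i] += a * x[i], which performs two floating-point operations per element while moving three doubles (read x, read y, write y) per element:

    #include <stdio.h>

    int main(void)
    {
        /* Assumed problem size and runtime, purely illustrative. */
        double n       = 1.0e8;          /* vector elements                     */
        double seconds = 0.05;           /* assumed runtime for the update      */

        double flops = 2.0 * n;                     /* multiply + add per element */
        double bytes = 3.0 * n * sizeof(double);    /* read x, read y, write y    */

        printf("achieved:       %.2f GFLOPS\n", flops / seconds / 1e9);
        printf("memory traffic: %.2f GB/s\n", bytes / seconds / 1e9);
        return 0;
    }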