
Multi-Threading Basics – Setting the stage

Multi-Threading Basics

Multi-Tasking and Multi-Processing

Multi-Tasking and Multi-Processing refer to the ability of an operating system to execute multiple tasks / processes at the same time. Why are there two terms, then? Multi-Tasking is the ability to execute multiple tasks on the same CPU in a single-processor machine. Multi-Processing is the ability to execute multiple tasks / processes on a machine with multiple processors.

What is Co-operative Multi-Tasking?

Co-operative multitasking is where a process voluntarily relinquishes control of the CPU when it is done with its work, and another task then gets the CPU time.
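
To get a feel for the idea, here is a toy sketch (C++ is assumed here, since the article itself shows no code). A plain loop stands in for the scheduler, and each task runs a small step and then voluntarily returns control so the next task can run:

```cpp
// Toy co-operative "scheduler": tasks only lose the CPU when they return.
#include <functional>
#include <iostream>
#include <vector>

int main() {
    using Task = std::function<bool()>;   // a task step; returns false when finished

    int a = 0, b = 0;
    std::vector<Task> tasks = {
        [&] { std::cout << "task A step " << ++a << '\n'; return a < 3; },
        [&] { std::cout << "task B step " << ++b << '\n'; return b < 2; },
    };

    // Round-robin loop: each task does a small piece of work and yields voluntarily.
    while (!tasks.empty()) {
        for (auto it = tasks.begin(); it != tasks.end(); ) {
            bool keepRunning = (*it)();   // run one step, then control comes back here
            it = keepRunning ? it + 1 : tasks.erase(it);
        }
    }
    return 0;
}
```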

 

What is Pre-emptive multitasking?

Pre-emptive multi-tasking is where the OS is in control and schedules different tasks by switching between running tasks. Each task is allowed to run for a time-slice.

What is a Thread?

A thread is a flow of execution within a program.
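
A minimal sketch of this, assuming standard C++ and std::thread (the article does not name a particular API): the program starts with one flow of execution, main, and creates a second one that runs concurrently.

```cpp
#include <iostream>
#include <thread>

void worker() {
    std::cout << "hello from a second flow of execution\n";
}

int main() {
    std::thread t(worker);   // a new thread starts executing worker()
    t.join();                // main waits for the second flow to finish
    std::cout << "hello from the main thread\n";
    return 0;
}
```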

 

What is context-switching?

When we create a thread, a kernel data structure gets created and a thread-specific stack also gets allocated. This kernel data structure has a reference to another data structure called the context. The context holds the CPU registers – control registers like the instruction pointer, stack pointer, program return address, floating-point registers, etc. – as they were when the thread last executed. When the CPU is executing Thread A and wants to run another Thread B, a snapshot of the CPU registers is taken and stored in the context data structure for Thread A. Then the CPU registers are restored from the context data structure of Thread B. This is called a context switch.
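
As a rough, Windows-specific sketch (x64 is assumed, and this is only illustrative, not something the article itself shows), the user-mode view of this context data structure is the Win32 CONTEXT structure. A suspended thread's saved registers can be read with GetThreadContext:

```cpp
// Illustrative only: peek at a suspended thread's saved register context (x64 assumed).
#include <windows.h>
#include <cstdio>

DWORD WINAPI Worker(LPVOID) {
    for (int i = 0; i < 200; ++i) Sleep(10);   // roughly two seconds of "work"
    return 0;
}

int main() {
    HANDLE hThread = CreateThread(nullptr, 0, Worker, nullptr, 0, nullptr);
    Sleep(50);                    // let the worker run a little first

    SuspendThread(hThread);       // the thread must not be running while we read its context

    CONTEXT ctx = {};
    ctx.ContextFlags = CONTEXT_CONTROL;        // ask for instruction pointer, stack pointer, etc.
    if (GetThreadContext(hThread, &ctx)) {
        // On x64 the saved instruction and stack pointers are Rip and Rsp.
        printf("saved RIP = %p, saved RSP = %p\n", (void*)ctx.Rip, (void*)ctx.Rsp);
    }

    ResumeThread(hThread);
    WaitForSingleObject(hThread, INFINITE);    // wait for the worker to finish
    CloseHandle(hThread);
    return 0;
}
```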

 

A kernel object can be either signaled or non-signaled. When a process is created, the kernel object associated with it is non-signaled. When the process is destroyed, the kernel object becomes signaled. The same applies to threads as well.
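
For example, here is a small Win32 sketch (assumed API: CreateThread / WaitForSingleObject): the main thread waits on the worker thread's handle, and the wait completes only when the thread exits and its kernel object becomes signaled.

```cpp
#include <windows.h>
#include <cstdio>

DWORD WINAPI Worker(LPVOID) {
    Sleep(1000);                 // pretend to do some work
    return 0;                    // thread exit -> its kernel object becomes signaled
}

int main() {
    HANDLE hThread = CreateThread(nullptr, 0, Worker, nullptr, 0, nullptr);

    // Blocks without burning CPU until the thread's kernel object is signaled.
    WaitForSingleObject(hThread, INFINITE);
    printf("Worker finished; its kernel object is now signaled.\n");

    CloseHandle(hThread);
    return 0;
}
```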

 

When writing multi-threaded applications, the common tasks include:

  • Protecting access to shared data (synchronization), and
  • Preventing threads from wasting CPU time while waiting for something to happen (a sketch covering both follows this list).
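
Here is a minimal sketch of both tasks, assuming standard C++ primitives (std::mutex for the synchronization and std::condition_variable so the waiting thread sleeps instead of spinning):

```cpp
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

std::mutex m;                         // protects access to the shared queue
std::condition_variable cv;           // lets the consumer wait without burning CPU
std::queue<int> work;

void producer() {
    for (int i = 1; i <= 3; ++i) {
        { std::lock_guard<std::mutex> lock(m); work.push(i); }
        cv.notify_one();              // wake the consumer when data is ready
    }
}

void consumer() {
    for (int received = 0; received < 3; ) {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [] { return !work.empty(); });   // sleeps; no busy-waiting
        std::cout << "got " << work.front() << '\n';
        work.pop();
        ++received;
    }
}

int main() {
    std::thread p(producer), c(consumer);
    p.join();
    c.join();
    return 0;
}
```

The condition variable is what keeps the consumer from wasting CPU: instead of polling the queue in a loop, it sleeps inside wait() until the producer calls notify_one().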

If we don’t protect access to shared data correctly, the following problems can result (a race condition is sketched after this list):

  • Race Conditions
  • Deadlock
  • Livelock
  • Starvation
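
As an illustration of the first problem, here is a small sketch of a race condition (standard C++ assumed): two threads increment a shared counter with no protection, so increments can be lost and the final value is unpredictable.

```cpp
#include <iostream>
#include <thread>

int counter = 0;                      // shared data, deliberately unprotected

void increment() {
    for (int i = 0; i < 100000; ++i)
        ++counter;                    // read-modify-write: not atomic
}

int main() {
    std::thread t1(increment), t2(increment);
    t1.join();
    t2.join();
    // Expected 200000, but the data race frequently produces a smaller number.
    std::cout << "counter = " << counter << '\n';
    return 0;
}
```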

Let us see how to achieve each of these in the coming articles.
