Total Credits: 10
Level: Level 4
Target Students: Specialist MSc and Part III undergraduate students in the School of Computer Science. Also available to Part II undergraduate students in the School of Computer Science subject to Part I performance. Also available to students from other Schools with the agreement of the module convenor. Available to JYA/Erasmus students.
Taught Semesters:
|Spring||Assessed by end of Spring Semester|
Prerequisites: G51PRG (up to 2009/10), or equivalent knowledge and experience of computer programming and the basic principles of concurrency.
Summary of Content: This module is part of the Operating Systems and Architecture theme in the School of Computer Science.

A simple sequential computer program effectively executes one instruction at a time on individual data items. Various strategies are used in CPU design to increase the speed of this basic model (see the module on Advanced Computer Architecture), but at the cost of CPU complexity and power consumption. To increase performance further, the task must be re-organised to execute explicitly on multiple processors and/or on multiple data items simultaneously. This module charts the broad spectrum of approaches used to increase the performance of computing tasks by exploiting parallelism and/or distributed computation, and then considers a number of contrasting examples in more detail. The course deals mainly with the principles involved, but there is the chance to experiment with some of these approaches in the supporting labs.

Topics covered include: common applications of parallel and distributed computing; parallel and distributed machine architectures, including Single Instruction Multiple Data (SIMD) or short-vector processing, multi-core and multi-processor shared memory, custom co-processors (including DSPs and GPUs), and cluster and grid computing; programming approaches, including parallelising compilers, explicit message passing (such as MPI), specialised parallel computing abstractions (such as MapReduce), and specialised co-processor programming (such as for GPUs).
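As a flavour of the "specialised parallel computing abstractions" mentioned above, the following is a minimal MapReduce-style word count using only the Python standard library. It is purely illustrative and not part of the module's taught material; the function names (map_words, reduce_counts, word_count) are invented for this sketch.

```python
# Minimal MapReduce-style word count: the map step runs in parallel across
# a pool of worker processes, the reduce step merges the partial results.
from collections import Counter
from multiprocessing import Pool

def map_words(line):
    # Map step: emit a partial word count for one line of input.
    return Counter(line.lower().split())

def reduce_counts(partials):
    # Reduce step: merge the partial counts into a single result.
    total = Counter()
    for partial in partials:
        total.update(partial)
    return total

def word_count(lines, workers=4):
    # The map step is embarrassingly parallel, so it is farmed out to a
    # process pool; the reduce step runs sequentially in the parent.
    with Pool(workers) as pool:
        partials = pool.map(map_words, lines)
    return reduce_counts(partials)

if __name__ == "__main__":
    text = ["the quick brown fox", "the lazy dog", "the fox"]
    print(word_count(text)["the"])  # → 3
```

The same map/reduce split underlies frameworks such as Hadoop and Spark: the programmer supplies the two pure functions, and the runtime handles distribution, scheduling, and fault tolerance.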
Module Web Links:
Method and Frequency of Class:
|Activity||Number Of Weeks||Number of sessions||Duration of a session|
|Lecture||11 weeks||2 per week||1 hour|
|Computing||10 weeks||1 per week||1 hour|
Method of Assessment:
|Assessment||Weight (%)||Requirements|
|Exam 1||100||Two Hour Written Examination|
Convenor: Professor D Elliman
Education Aims: The module aims to equip students to identify, select and make use of various parallel and distributed computing approaches to increasing the performance of a range of computational tasks. The emphasis is on high-performance and high-throughput applications rather than distributed systems and algorithms in general.
Learning Outcomes:

Knowledge and Understanding: The practice of parallel programming for a range of architectures and approaches. The strengths and weaknesses of various approaches to increasing task performance through parallelism. The synergy of hardware and software in parallel computer systems implementation. The properties of networked and distributed systems as used for parallel computation.

Intellectual Skills: Think independently while giving due weight to the arguments of others in approaches to parallelism. Understand complex ideas and relate them to specific problems or questions in the area of parallel computation.

Professional/Practical Skills: Program in various paradigms relevant to parallel computing. Evaluate available parallel programming approaches, and select those that are fit for purpose within a given domain.

Transferable/Key Skills: Solve problems. Communicate effectively in writing. Retrieve information from appropriate sources (e.g. API, instruction set and compiler documentation).
Offering School: Computer Science