MCA-20-31: Computer Architecture and Parallel Processing
Type: Compulsory
Contact Hours: 4 hours/week
Examination Duration: 3 Hours
Mode: Lecture
External Maximum Marks: 75
External Pass Marks: 30 (i.e., 40%)
Internal Maximum Marks: 25
Total Maximum Marks: 100
Total Pass Marks: 40 (i.e., 40%)
Instructions to paper setter for End semester examination:
Total number of questions shall be nine. Question number one will be compulsory and will consist of short/objective-type questions from the complete syllabus. In addition to the compulsory first question, there shall be four units in the question paper, each consisting of two questions. Students will attempt one question from each unit in addition to the compulsory question. All questions will carry equal marks.
Course Objectives: To introduce parallel processing and new trends and developments in computer architecture; to understand the design and development of ILP-based processors and evaluate their performance; to understand MIMD architectures and the different topologies used in these architectures; and to study cache coherence problems and their solutions.
Course Outcomes (COs) At the end of this course, the student will be able to:
MCA-20-31.1 learn the concepts of parallel architectures and exploitation of parallelism at instruction level;
MCA-20-31.2 understand architectural features of multi-issue ILP processors;
MCA-20-31.3 learn MIMD architectures and interconnection networks used in them and evaluate their comparative performances;
MCA-20-31.4 analyze causes of the cache coherence problem and learn algorithms for its solution.
Unit – I
Computational Model: Basic computational models, evolution and interpretation of computer architecture, concept of computer architecture as a multilevel hierarchical framework, classification of parallel architectures, relationships between programming languages and parallel architectures.
Parallel Processing: Types and levels of parallelism, instruction-level parallel (ILP) processors, dependencies between instructions, principle and general structure of pipelines, performance measures of pipelines, pipelined processing of integer, Boolean, load and store instructions, VLIW architecture, code scheduling for ILP processors – basic block scheduling, loop scheduling, global scheduling.
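The pipeline performance measures named above (speedup, efficiency, throughput) can be illustrated for the textbook ideal case. The sketch below is an assumption-laden illustration, not part of the prescribed syllabus material: it assumes a k-stage pipeline, n independent instructions, and a uniform cycle time.

```python
def pipeline_metrics(k, n, cycle_time=1.0):
    """Ideal k-stage pipeline executing n independent instructions.

    Illustrative assumptions: no stalls or hazards, uniform stage delay.
    Pipelined time is (k + n - 1) cycles; non-pipelined time is n * k cycles.
    """
    t_pipe = (k + n - 1) * cycle_time
    t_seq = n * k * cycle_time
    speedup = t_seq / t_pipe     # approaches k as n grows large
    efficiency = speedup / k     # fraction of stage-slots kept busy
    throughput = n / t_pipe      # instructions completed per unit time
    return speedup, efficiency, throughput
```

For example, a 5-stage pipeline running 1000 instructions achieves a speedup of 5000/1004, just under the stage count of 5, which is the asymptotic limit.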
Unit – II
Superscalar Processors: Emergence of superscalar processors, tasks of superscalar processing – parallel decoding, superscalar instruction issue, shelving, register renaming, parallel execution, preserving sequential consistency of instruction execution and exception processing, comparison of VLIW and superscalar processors.
Branch Handling: Branch problem, approaches to branch handling – delayed branching, branch detection and prediction schemes, branch penalties and schemes to reduce them, multiway branches, guarded execution.
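One of the branch prediction schemes covered in this unit, the 2-bit saturating-counter predictor, can be sketched as follows. This is a minimal illustrative model (class and parameter names are the author's own, not from a prescribed text); it omits tags, aliasing concerns, and branch target buffers.

```python
class TwoBitPredictor:
    """2-bit saturating-counter branch predictor, one counter per table entry.

    Counter values 0-1 predict not-taken, 2-3 predict taken, so a single
    mispredict does not immediately flip a strongly biased entry.
    """
    def __init__(self, entries=1024):
        self.table = [1] * entries   # start in "weakly not-taken"
        self.mask = entries - 1      # entries assumed to be a power of two

    def predict(self, pc):
        return self.table[pc & self.mask] >= 2

    def update(self, pc, taken):
        i = pc & self.mask
        if taken:
            self.table[i] = min(3, self.table[i] + 1)
        else:
            self.table[i] = max(0, self.table[i] - 1)
```

After two taken outcomes an entry predicts taken, and a single subsequent not-taken outcome only weakens the prediction rather than reversing it, which is exactly the hysteresis that distinguishes 2-bit from 1-bit schemes.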
Unit – III
MIMD Architectures: Concepts of distributed and shared memory MIMD architectures, UMA, NUMA, CC-NUMA & COMA models, problems of scalable computers.
Direct Interconnection Networks: Linear array, ring, chordal rings, star, tree, 2D mesh, barrel shifter, hypercubes.
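The hypercube topology listed above is easy to characterize in code: a dim-dimensional hypercube has 2**dim nodes, each node's neighbors differ in exactly one address bit, and the diameter equals dim. The sketch below (function names are illustrative) also shows e-cube routing, where path length equals the Hamming distance between source and destination.

```python
def hypercube_neighbors(node, dim):
    """Neighbors of `node` in a dim-dimensional hypercube:
    flip each of the dim address bits in turn."""
    return [node ^ (1 << b) for b in range(dim)]

def hypercube_route(src, dst):
    """E-cube routing: correct differing address bits lowest-first.
    Path length equals the Hamming distance of src and dst."""
    path, cur = [src], src
    diff = src ^ dst
    b = 0
    while diff:
        if diff & 1:
            cur ^= (1 << b)          # fix bit b, moving to a neighbor
            path.append(cur)
        diff >>= 1
        b += 1
    return path
```

For instance, routing from node 0 to node 7 in a 3-cube visits 0 → 1 → 3 → 7, three hops, matching the Hamming distance of the two addresses.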
Unit – IV
Dynamic Interconnection Networks: Single shared buses, comparison of bandwidths of locked, pended and split-transaction buses, arbiter logics, crossbar, multistage networks – omega, butterfly.
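The omega network named above uses destination-tag routing: each of the log2(N) stages of 2x2 switches is preceded by a perfect shuffle, and the switch at stage i is set by bit i of the destination address (MSB first). The sketch below is an illustrative model of that routing rule, not a prescribed implementation.

```python
def omega_route(src, dst, n_bits):
    """Destination-tag routing through an n_bits-stage omega network of
    2x2 switches: a perfect shuffle (rotate-left of the address) precedes
    each stage, then the switch output is chosen by the next destination
    bit, MSB first. Returns the final output port and the switch settings."""
    mask = (1 << n_bits) - 1
    label = src
    switches = []
    for i in range(n_bits):
        # perfect shuffle: rotate the n_bits-wide label left by one
        label = ((label << 1) | ((label >> (n_bits - 1)) & 1)) & mask
        out_bit = (dst >> (n_bits - 1 - i)) & 1   # next destination-tag bit
        label = (label & ~1) | out_bit            # switch drives the low bit
        switches.append(out_bit)
    return label, switches
```

Whatever the source, after all n_bits stages the message emerges on port dst, which is the self-routing property that makes the omega network attractive despite its blocking behaviour.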
Cache Coherence: Cache coherence problem, hardware-based protocols – snoopy cache protocol, directory schemes, hierarchical cache coherence protocols.
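The snoopy cache protocol topic above can be illustrated with a minimal MSI write-invalidate model. This is a simplification for intuition only, assuming a single shared bus, write-back caches, and no data transfer modelled; all names are illustrative.

```python
# Minimal MSI write-invalidate snoopy protocol over a single shared bus.
# Illustrative simplification: states only, no data movement or memory model.
M, S, I = "Modified", "Shared", "Invalid"

class Cache:
    def __init__(self):
        self.state = {}  # block address -> MSI state (absent means Invalid)

    def _snoop_others(self, caches, addr, write):
        for c in caches:
            if c is not self and c.state.get(addr, I) != I:
                # a write invalidates other copies; a read demotes M to S
                c.state[addr] = I if write else S

    def read(self, addr, caches):
        if self.state.get(addr, I) == I:      # read miss: bus transaction
            self._snoop_others(caches, addr, write=False)
            self.state[addr] = S
        # hits in M or S need no bus traffic

    def write(self, addr, caches):
        if self.state.get(addr, I) != M:      # write miss or upgrade
            self._snoop_others(caches, addr, write=True)
            self.state[addr] = M
```

Tracing two caches on one block shows the invariant the protocol maintains: at most one Modified copy exists at any time, and a read by another cache demotes it to Shared.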
Text Books:
1. Sima, Fountain, Kacsuk, Advanced Computer Architecture, Pearson Education.
2. D. A. Patterson and J. L. Hennessy, Computer Architecture – A Quantitative Approach, Elsevier India.
Reference Books:
1. Kai Hwang, Advanced Computer Architecture, McGraw Hill.
2. Nicholas Carter, Computer Architecture, McGraw Hill.
3. Harry F. Jordan, Gita Alaghband, Fundamentals of Parallel Processing, Pearson Education.