Assignment title: Information
1. Scalability is a measure of a parallel system's capacity to increase speedup in proportion to the problem size. – T
2. If a problem of size W has a serial component Ws, then W/Ws is an upper bound on its speedup, no matter how many processing elements are used. – F
3. A hypercube with 2^d nodes can be regarded as a d-dimensional mesh with two nodes in each dimension. – T
4. A linear array (or a ring) composed of 2^d nodes can be embedded into a d-dimensional hypercube by mapping node i of the linear array onto node G(i, d) of the hypercube. – T
5. In cut-through routing, all small message units (flits) follow the same route. – T
6. As long as we increase the problem size, the parallel efficiency can be improved. – F
7. There may be many approaches to decomposing an application into tasks. – T
8. Recursive decomposition is not suitable for problems that can be solved by the divide-and-conquer technique. – F
9. In store-and-forward routing, a message can only be passed on to the next node after it has been completely received. – T
10. Centralized load-balancing schemes are usually easier to implement than distributed schemes, but may have limited scalability. – T
11. Exploratory decomposition is a powerful and commonly used method for deriving concurrency in algorithms that operate on large data structures. – F
12. Efficiency is a measure that captures the relative benefit of solving a problem in parallel. – F
13. a. Both parallel computing and grid computing belong to high-performance computing.
14. d. Both uniform memory access and non-uniform memory access machines are shared-address-space machines.
15. a. Recursive decomposition is not suitable for problems that can be solved by the divide-and-conquer technique.
16. d. Both uniform memory access and non-uniform memory access machines are shared-address-space machines.
17. b. We usually need many decomposition methods working together to get a good decomposition.
18. d. A multistage network is a tradeoff between bus and crossbar networks.
19. c. In cut-through routing, all small message units (flits) follow the same route.
20. c. MPI_Send, MPI_Gather
21. There are 16 processors in the following graph. Each node is a processor, and they are connected in a mesh network. A message is to be broadcast from one node to all the other nodes.
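The ring-into-hypercube embedding can be checked with a short script. This is a sketch assuming the standard binary-reflected Gray code mapping G(i, d) = i XOR (i >> 1); the function names here are my own.

```python
def gray(i):
    """Binary-reflected Gray code of i: consecutive values differ in exactly one bit."""
    return i ^ (i >> 1)

def embeds_ring_in_hypercube(d):
    """Check that mapping ring node i -> hypercube node gray(i) preserves
    adjacency: neighbouring ring nodes must land on hypercube nodes whose
    labels differ in exactly one bit (i.e. hypercube neighbours)."""
    n = 1 << d  # ring of 2**d nodes
    for i in range(n):
        diff = gray(i) ^ gray((i + 1) % n)
        # exactly one bit set <=> diff is a power of two
        if diff == 0 or diff & (diff - 1):
            return False
    return True

print(embeds_ring_in_hypercube(4))  # → True
```

The wraparound edge (node 2^d - 1 back to node 0) is covered by the modulo in the loop, which is why the same mapping works for a ring and not just a linear array.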
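For the 16-processor broadcast question, a one-to-all broadcast on a 4x4 mesh is commonly done by first passing the message along the source's row and then, in parallel, down every column. The sketch below simulates that scheme; the node labelling, the corner source position, and the one-hop-per-step cost model are my own assumptions, not taken from the assignment's graph.

```python
def mesh_broadcast(rows, cols):
    """Simulate row-then-column one-to-all broadcast on a rows x cols mesh,
    with the source fixed at corner node (0, 0) for simplicity.
    Phase 1: the message travels along the source's row, one hop per step.
    Phase 2: all columns forward it downward in parallel, one hop per step.
    Returns the set of informed nodes and the number of communication steps."""
    informed = {(0, 0)}
    steps = 0
    # Phase 1: along row 0
    for c in range(1, cols):
        informed.add((0, c))
        steps += 1
    # Phase 2: every column advances one hop per step, all columns in parallel
    for r in range(1, rows):
        for c in range(cols):
            informed.add((r, c))
        steps += 1
    return informed, steps

nodes, steps = mesh_broadcast(4, 4)
print(len(nodes), steps)  # → 16 6
```

Under this model all 16 processors receive the message in (cols - 1) + (rows - 1) = 6 nearest-neighbour communication steps.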