Abstract
Many econometric problems can benefit from the application of parallel computing techniques, and recent advances in hardware and software have made such application feasible. A number of freely available software libraries make it possible to write message-passing parallel programs using personal computers or Unix workstations. This review discusses one of these: the LAM (Local Area Multicomputer) implementation of MPI (the Message Passing Interface).
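As a schematic illustration only, the sketch below mimics the point-to-point send/receive pattern that MPI provides. This is not LAM/MPI code: threads and queues stand in for MPI processes and for `MPI_Send`/`MPI_Recv`, and the function name `run_two_ranks` is our own invention. Real MPI programs are typically written in C or Fortran and launched as separate processes, possibly on different machines.

```python
import queue
import threading

def run_two_ranks(data):
    """Toy analogue of a two-process MPI job: 'rank 0' sends work to
    'rank 1', which computes a partial result and sends it back."""
    to_rank1 = queue.Queue()  # channel rank 0 -> rank 1 (stand-in for a message channel)
    to_rank0 = queue.Queue()  # channel rank 1 -> rank 0
    result = {}

    def rank0():
        to_rank1.put(data)                # analogous to MPI_Send
        result["total"] = to_rank0.get()  # analogous to MPI_Recv (blocks until a message arrives)

    def rank1():
        chunk = to_rank1.get()            # analogous to MPI_Recv
        to_rank0.put(sum(chunk))          # compute locally, then send the result back

    threads = [threading.Thread(target=rank0), threading.Thread(target=rank1)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return result["total"]
```

In an actual LAM/MPI program the two ranks would be separate operating-system processes started by the MPI launcher, and the queue operations would be `MPI_Send`/`MPI_Recv` calls; the blocking-receive structure shown here is the essential pattern either way.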
Journal Information
The Journal of Applied Econometrics is a bi-monthly international journal which publishes articles of high quality dealing with the application of existing as well as new econometric techniques to a wide variety of problems in economics and related subjects, covering topics in measurement, estimation, testing, forecasting, and policy analysis. The emphasis is on the careful and rigorous application of econometric techniques and the appropriate interpretation of the results. The economic content of the articles is stressed. The intention of the Journal is to provide an outlet for innovative, quantitative research in economics which cuts across areas of specialization, involves transferable techniques, and is easily replicable by other researchers. Contributions that introduce statistical methods applicable to a variety of economic problems are actively encouraged. The Journal also features occasional sections of short papers re-evaluating previously published papers.
Abstract
This paper generalizes the widely used Nelder and Mead (Comput J 7:308–313, 1965) simplex algorithm to parallel processors. Unlike most previous parallelization methods, which are based on parallelizing the tasks required to compute a specific objective function given a vector of parameters, our parallel simplex algorithm uses parallelization at the parameter level. Our parallel simplex algorithm assigns to each processor a separate vector of parameters corresponding to a point on a simplex. The processors then conduct the simplex search steps for an improved point, communicate the results, and a new simplex is formed. The advantage of this method is that the algorithm is generic and can be applied, without re-writing computer code, to any optimization problem to which the non-parallel Nelder–Mead algorithm is applicable. The method is also easily scalable to any degree of parallelization up to the number of parameters. In a series of Monte Carlo experiments, we show that this parallel simplex method yields computational savings, in some experiments up to three times the number of processors.
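The parameter-level parallelization described in the abstract can be sketched as follows. This is a simplified illustration under our own assumptions, not the authors' code: several of the worst vertices are updated simultaneously, each worker reflecting (or, failing that, contracting) its assigned vertex through the centroid of the better vertices. The names `parallel_simplex_step` and `_centroid` are ours, threads stand in for processors, and a production version would add the expansion and shrink steps of the full Nelder–Mead method.

```python
from concurrent.futures import ThreadPoolExecutor

def _centroid(points):
    """Coordinate-wise mean of a list of vertices."""
    n = len(points)
    return [sum(p[d] for p in points) / n for d in range(len(points[0]))]

def parallel_simplex_step(f, simplex, n_workers=2):
    """One parallel step on a list of (vertex, value) pairs: the n_workers
    worst vertices are each reflected through the centroid of the remaining
    (better) vertices; a failed reflection falls back to a contraction.
    Toy sketch only: no expansion or shrink steps."""
    simplex = sorted(simplex, key=lambda vf: vf[1])   # best vertices first
    cut = len(simplex) - n_workers
    c = _centroid([v for v, _ in simplex[:cut]])

    def update(pair):
        # Each worker handles one vertex, as each processor handles one
        # parameter vector in the parameter-level scheme.
        v, fv = pair
        refl = [2.0 * ci - vi for ci, vi in zip(c, v)]          # reflection
        f_refl = f(refl)
        if f_refl < fv:
            return refl, f_refl
        contr = [ci + 0.5 * (vi - ci) for ci, vi in zip(c, v)]  # contraction
        f_contr = f(contr)
        return (contr, f_contr) if f_contr < fv else (v, fv)    # keep old vertex if no improvement

    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        new_tail = list(pool.map(update, simplex[cut:]))        # "communicate the results"
    return simplex[:cut] + new_tail                             # form the new simplex
```

Iterating this step on, say, `f(x) = (x[0]-1)**2 + (x[1]-1)**2` from the simplex `[0,0], [2,0], [0,2]` drives the best function value down monotonically, since a vertex is only replaced when its trial point improves on it.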
Corresponding author: Correspondence to Matthew Wiswall.
Cite this article: Lee, D., Wiswall, M. A Parallel Implementation of the Simplex Function Minimization Routine. Comput Econ 30, 171–187 (2007). https://doi.org/10.1007/s10614-007-9094-2
What is a parallel implementation process?
The premise of the parallel strategy is to run both systems, the legacy system and the new one, at the same time, so as to identify how well the new system adapts to the process environment and to analyse its benefits comparatively. One of the benefits of this implementation option is that it ensures business continuity.
What are the three methods of implementation?
There are three main methods: phased implementation, direct changeover, and parallel running. Phased implementation is a staged method whereby one part of the overall system that needs changing is changed at a time. If any problems arise, they are limited in scope and therefore non-critical.
What are the methods of implementation?
The implementation methodology is broken into five stages: Prepare, Plan, Design, Validate, and Deploy. Each stage includes a series of segments filled with a set of inputs, tools, techniques, and deliverables, all building upon one another to move to the next stage.
What is the purpose of parallel running?
Parallel running is the stage in which the bank's existing systems run concurrently with the new Oracle FLEXCUBE system. The basic objective of this activity is to ensure the stability of the new system, to enable users to become comfortable with the new processes, and to develop confidence leading to a complete switch-over.