Proceedings of the 2012 International Conference on Information Technology and Software Engineering: Information Technology & Computing Intelligence
The proceedings provide a major interdisciplinary forum for researchers and engineers to present the most innovative studies and advances, which can serve as an excellent reference work for researchers and graduate students working on information technology and software engineering.
Prof. Wei Lu, Dr. Guoqiang Cai, Prof. Weibin Liu and Dr. Weiwei Xing all work at Beijing Jiaotong University.
Diverse disciplines contributed to the findings in this book: computer scientists, standardization leaders and professionals, users and vendors, economists, auditors, software implementors, and communication specialists.
This book presents the results of research into techniques to aid the formal verification of mixed hardware/software systems. Aspects of system specification and verification, from requirements down to the underlying hardware, are addressed, with particular regard to real-time issues. The work presented is largely based around the Occam programming language and the Transputer microprocessor paradigm. Machine-checked proofs have mainly been carried out using the HOL theorem prover, which is based on higher-order logic.
The book describes research work undertaken on the collaborative UK DTI/SERC-funded Information Engineering Directorate Safemos project. The partners were Inmos Ltd., Cambridge SRI, the Oxford University Computing Laboratory and the University of Cambridge Computer Laboratory, who investigated the problems of formally verifying embedded systems. The most important results of the project are presented in the form of a series of interrelated chapters by project members and associated personnel. In addition, overviews of two other ventures with similar objectives are included as appendices.
The material in this book is intended for computing science researchers and advanced industrial practitioners interested in the application of formal methods to real-time safety-critical systems at all levels of abstraction, from requirements to hardware. In addition, material of a more general nature is presented, which may be of interest to managers in charge of projects applying formal methods, especially for safety-critical systems, and others who are considering their use.
Massively Parallel Processing Applications and Development: Proceedings of the 1994 EUROSIM Conference on Massively Parallel Processing Applications and Development, Delft, The Netherlands, 21-23 June 1994
From a practical point of view, massively parallel data processing is a vital step towards further innovation in all areas where large amounts of data must be processed in parallel or in a distributed manner, e.g. fluid dynamics, meteorology, seismics, molecular engineering, image processing, and parallel database processing. MPP technology can increase computation speed and substantially reduce computational costs. To achieve these benefits, however, MPP software must be developed further, both to create user-friendly programming systems and to become transparent to present-day computer software.
The application of novel electro-optic components and devices is continuing and will be key to much more general and powerful architectures. As the limitations of communication hardware vanish, the programming bottlenecks in parallel data processing will be eliminated. Once a standardized programming model for massively parallel computers becomes established, efficient programming environments can be developed. The result will be the widespread use of massively parallel processing systems in many areas of application.
A clear illustration of how parallel computers can be successfully applied to large-scale scientific computations. This book demonstrates how a variety of applications in physics, biology, mathematics and other sciences were implemented on real parallel computers to produce new scientific results. It investigates issues of fine-grained parallelism relevant for future supercomputers, with particular emphasis on hypercube architecture. The authors describe how they used an experimental approach to configure different massively parallel machines, design and implement basic system software, and develop algorithms for frequently used mathematical computations. They also devise performance models, measure the performance characteristics of several computers, and create a high-performance computing facility based exclusively on parallel computers. By addressing all issues involved in scientific problem solving, Parallel Computing Works! provides valuable insight into computational science for large-scale parallel architectures. For those in the sciences, the findings reveal the usefulness of an important experimental tool. Anyone in supercomputing and related computational fields will gain a new perspective on the potential contributions of parallelism. Includes over 30 full-color illustrations.
Thoroughly revised and updated, this year's report contains invaluable information on the impact of supercomputing technology on data processing in the near term and its influence over the next five years.
This report supplies critical information and data on the size of each of the technical computing segments, the percentage of each currently comprised of vector and parallel systems, and what that percentage will be by 1995.
For each technical computing segment, the report gives dollar volumes, units shipped, user industries, applications, operating system requirements, vector share of market and installations. Find out which computing segments and vectors show the greatest promise for growth and profitability.
The technical aspects influencing the development of supercomputing architectures and the features driving their user acceptance are analyzed. The report also provides immediate market opportunities by discussing types of applications that will benefit most from supercomputing technology.
All aspects of high-speed computing fall within the scope of the series, e.g. algorithm design, applications, software engineering, networking, taxonomy, models and architectural trends, performance, peripheral devices.
Papers in Volume One cover the main streams of parallel linear algebra: systolic array algorithms, message-passing systems, algorithms for parallel shared-memory systems, and the design of fast algorithms and implementations for vector supercomputers.