We developed a scientific information management system to facilitate remote access and analysis of earth and space science data, using the Component Model of software development provided by the Java language. The data sets are part of the Earth Observing System project, being carried out at the College of Oceanic...
The Java programming language is a recent entry in the family of object-oriented languages and like all new languages has yet to achieve widespread standardization of coding styles for the unique aspects it possesses. This paper and project address this issue by presenting a complete and rationalized style guide for...
Dataparallel C is a SIMD extension to the standard C programming language. It is derived from the original C* language developed by Thinking Machines Corporation. We have nearly completed a third-generation Dataparallel C compiler, which transforms Dataparallel C programs into SPMD-style C code suitable for compilation and execution on NCUBE...
Large integer factorization exemplifies a class of hard computational problems requiring the power of a supercomputer but which have algorithms decomposable into many large independent computations. The availability of internetworking provides the opportunity to solve such problems in distributed fashion on ordinary machines. Such a distributed network might contain a...
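As a minimal illustration of the kind of decomposition the abstract describes (not the paper's actual algorithm), trial division over disjoint divisor ranges yields fully independent tasks that separate machines could run; the function name and range convention here are assumptions for the sketch:

```c
/* Illustrative work partitioning: searching [lo, hi) for a divisor of n
 * is independent of every other range, so each range is a task that a
 * separate networked machine could execute on its own. */
unsigned long find_factor_in_range(unsigned long n,
                                   unsigned long lo, unsigned long hi)
{
    for (unsigned long d = (lo < 2 ? 2 : lo); d < hi; d++)
        if (n % d == 0)
            return d;        /* found a factor in this range */
    return 0;                /* no factor in [lo, hi) */
}
```

A coordinator would hand each worker a range and collect the first nonzero result; real factorization efforts use far more sophisticated algorithms, but the independence of the subcomputations is the same.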
It often makes sense to write a program in the SIMD style, even if the program is to execute on a MIMD computer. Simulating physical events, in which all motion takes place simultaneously, is one area in which SIMD languages fit the applications particularly well. In this paper we present...
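The essence of the SIMD style for simultaneous physical updates can be sketched in plain C with double buffering, so every cell reads only the old state; this toy diffusion step is an assumption-laden illustration, not code from the paper:

```c
#include <string.h>

#define N 8

/* SIMD-style synchronous step: every cell computes its new value from
 * the OLD state, as if all updates occurred at the same instant.
 * Double buffering (cells vs. next) preserves this semantics whether
 * the loop runs serially, on a MIMD machine, or in true lockstep. */
void simd_step(double cells[N])
{
    double next[N];
    next[0] = cells[0];            /* boundary cells held fixed */
    next[N - 1] = cells[N - 1];
    for (int i = 1; i < N - 1; i++)
        next[i] = (cells[i - 1] + cells[i] + cells[i + 1]) / 3.0;
    memcpy(cells, next, sizeof next);
}
```

Writing the update this way is what lets a compiler (or a programmer) retarget the same program to MIMD hardware without changing its meaning.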
The Parallel Programming Support Environment (PPSE) is an experimental integrated set of tools for the design and construction of large software systems to run on parallel computers. The tools include a graphical design editor, a graphical target machine description system, a task mapper/scheduler tool, a parallel code generator, and graphical aids...
Parallel software development requires the flexibility to describe algorithms regardless of hardware specification, the ability to accommodate existing applications, and maintainability throughout the software life cycle. We propose the following model to address these issues. Our model incorporates aspects of the object-oriented and large grain data flow programming paradigms, and...
We are convinced that the combination of data-parallel languages and MIMD hardware can make an important contribution to high speed computing. The data-parallel paradigm is a natural way to solve a large number of problems arising in science and engineering. Data-parallel programs are easier to design, implement, and debug than...
We describe our third generation C* compiler for a hypercube multicomputer. This production quality compiler features a full implementation of the language, including general pointer-based communication and support for separate compilation. The compiler incorporates new optimizations and utilizes an improved set of communication primitives. It supports a variety of standard...
The coverage of a learning algorithm is the number of concepts that can be learned by that algorithm from samples of a given size. This paper asks whether good learning algorithms can be designed by maximizing their coverage. The paper extends a previous upper bound on the coverage of any...
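The coverage notion can be made concrete with a toy brute-force computation (an illustrative setting of my own, not the paper's): instances are bit positions, concepts are bitmasks, and a simple learner memorizes the positives it saw and predicts negative elsewhere. A concept is learned from samples of size m if every m-instance sample labeled by it leads the learner to output exactly that concept:

```c
/* Toy coverage computation (illustrative assumptions throughout).
 * Instance space X = {0,...,n-1}; a concept is a bitmask over X.
 * Learner: output the positive instances actually seen, predict
 * negative for unseen ones.  Its hypothesis on sample set s labeled
 * by concept c is (c & s); it learns c iff that equals c for every
 * sample of m distinct instances, i.e. iff c is inside every such s. */
static int bits_set(unsigned x)
{
    int k = 0;
    while (x) { k += x & 1; x >>= 1; }
    return k;
}

int coverage(int n, int m)
{
    int learned = 0;
    for (unsigned c = 0; c < (1u << n); c++) {      /* each concept   */
        int ok = 1;
        for (unsigned s = 0; s < (1u << n); s++) {  /* each sample set */
            if (bits_set(s) != m)
                continue;
            if ((c & s) != c) { ok = 0; break; }    /* learner errs   */
        }
        learned += ok;
    }
    return learned;
}
```

With n = 2 this learner's coverage grows from 1 concept at sample size 1 to all 4 concepts at sample size 2, which is the kind of quantity a coverage-maximizing design would try to drive up.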
This paper applies learning techniques to make engineering optimization more efficient and reliable. When the function to be optimized is highly non-linear, the search space generally forms several disjoint convex regions. Unless gradient-descent search is begun in the right region, the solution found will be suboptimal. This paper formalizes...
High-level data-parallel languages are easy to use and shield the programmer from machine specific details. A simple and efficient way of providing an interface to such languages is to develop a machine-independent compiler and a routing library, which isolates the low-level machine dependent communication functions. The compiler translates the...
The secondary structure of a 16S rRNA molecule is a graphical, two-dimensional representation used by molecular biologists in determining evolutionary relationships between different organisms. By comparing two secondary structures, scientists can obtain knowledge of how 'related' one species of bacteria may be to another species [OLSE 1986]. To date,...
Visual Fortran D (VFD) is a graphical tool to assist parallel programmers in specifying data distributions. Its target is Fortran D, an extension to Fortran77 or Fortran90 which supports data parallelism. VFD provides an intuitive framework where the user employs simple, fast graphical manipulations to specify how data is to...
CHARM is a parallel programming language that was originally implemented for a network of workstations each of which has only one processor. In this project, we ported CHARM to a network of workstations each of which has more than one processor (a multicomputer), using multithreading to exploit the multiple processors.
Network...
Applications supporting a graphical user interface (GUI) are difficult to write. While existing tools can accelerate software development, they suffer from a number of problems that limit their helpfulness. They offer too little functionality, and support only a small part of the GUI software development task. They lack architectural models...
We describe a set of data flow techniques and code transformations that translate a single instruction stream, multiple data stream (SIMD) Dataparallel C program into a semantically equivalent single program, multiple data stream (SPMD) C program suitable for execution on shared memory multiprocessor computers, such as the Sequent Balance and...
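The core of the SIMD-to-SPMD translation can be sketched as a strip-mined loop (an illustrative reduction of the idea, with names of my own choosing, not the compiler's actual output): a statement over all virtual processors becomes one function that every physical process executes over the block of indices it owns:

```c
#define N 12

/* SPMD translation sketch: the SIMD statement
 *     "for every virtual processor i: a[i] = b[i] + 1"
 * becomes a single program that all nprocs processes run, each
 * iterating only over its own block of virtual processors. */
void spmd_body(int pid, int nprocs, int a[N], const int b[N])
{
    int chunk = (N + nprocs - 1) / nprocs;   /* block distribution */
    int lo = pid * chunk;
    int hi = (lo + chunk < N) ? lo + chunk : N;
    for (int i = lo; i < hi; i++)
        a[i] = b[i] + 1;
    /* on a real shared-memory machine a barrier would follow this
     * phase before any process reads a[] written by another */
}
```

The barrier placement (elided here) is exactly where the data flow analysis the abstract mentions earns its keep: synchronization is only needed where one process's writes are read by another.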
The scheduling problem has both theoretical and practical interest, and a great deal of research has been done in this field. In this paper we use a locally optimal search method for job shop scheduling. This method will be compared with a constraint satisfaction method called Micro-boss. Other issues such...
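A local search of the kind the abstract refers to can be illustrated on a deliberately simplified scheduling objective (single machine, total completion time; the function names and neighborhood are assumptions for the sketch, not the paper's method): repeatedly swap adjacent jobs whenever the swap improves the schedule, until no swap helps:

```c
/* Cost of a job sequence: sum of completion times.
 * p[j] is job j's processing time; order[] is the sequence. */
int total_completion(const int p[], const int order[], int n)
{
    int t = 0, sum = 0;
    for (int i = 0; i < n; i++) { t += p[order[i]]; sum += t; }
    return sum;
}

/* Adjacent-swap hill climbing: accept a neighbouring sequence only
 * if it strictly lowers the cost; stop at a local optimum.  For this
 * objective the local optimum is the shortest-processing-time-first
 * order, by the classic exchange argument. */
void local_search(const int p[], int order[], int n)
{
    int improved = 1;
    while (improved) {
        improved = 0;
        for (int i = 0; i + 1 < n; i++) {
            int before = total_completion(p, order, n);
            int tmp = order[i]; order[i] = order[i + 1]; order[i + 1] = tmp;
            if (total_completion(p, order, n) < before) {
                improved = 1;
            } else {   /* not an improvement: undo the swap */
                tmp = order[i]; order[i] = order[i + 1]; order[i + 1] = tmp;
            }
        }
    }
}
```

Real job shop scheduling has machine routings and precedence constraints that make the neighborhood and cost evaluation far richer, but the accept-if-better descent loop is the same skeleton.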
Dataparallel C is a SIMD-style data-parallel programming language for MIMD computers. Dataparallel C has been implemented on both shared memory (Sequent) and distributed memory (Intel and nCUBE) computers. Here we analyze the strengths and weaknesses of Dataparallel C by comparing the performance of compiled Dataparallel C programs with the performance...