Component-based software technologies are viewed as essential for creating the software systems of the future. However, the use of externally provided components has serious drawbacks for a wide range of software engineering activities, often because of a lack of information about the components. One such drawback involves validation of...
High-level data-parallel languages are easy to use and shield the programmer from machine-specific details. A simple and efficient way of providing an interface to such languages is to develop a machine-independent compiler and a routing library, which isolates the low-level machine-dependent communication functions. The compiler translates the...
Digital libraries are digitally accessible, organized collections of knowledge. Although under this broad definition any digitally accessible data set might be considered a digital library, the term is generally reserved for collections whose structures are carefully documented and made available in the form of so-called metadata. There is no specific...
A project is described wherein a source code browser was implemented, using only UNIX utilities, based upon tasks performed by software engineers looking for source code modules. These tasks were discovered by observing the actions of software engineers looking for code modules in a library. The success of the implementation...
G. Spencer Brown's book Laws of Form has been enjoying a vogue among social and biological scientists. Proponents claim that the book introduces a new logic ideally suited to their fields of study, and that the new logic solves the problems of self-reference. These claims are false. We show that...
The coverage of a learning algorithm is the number of concepts that can be learned by that algorithm from samples of a given size. This paper asks whether good learning algorithms can be designed by maximizing their coverage. The paper extends a previous upper bound on the coverage of any...
This paper applies learning techniques to make engineering optimization more efficient and reliable. When the function to be optimized is highly non-linear, the search space generally forms several disjoint convex regions. Unless gradient-descent search is begun in the right region, the solution found will be suboptimal. This paper formalizes...
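The region-sensitivity of gradient descent described above can be illustrated with a minimal sketch. This is not code from the paper; the objective function `f` below is a hypothetical one-dimensional example with two separate basins, chosen only to show that the starting point determines which minimum is reached:

```python
def f(x):
    # Hypothetical non-convex objective with two basins of attraction
    return x**4 - 3 * x**2 + x

def df(x):
    # Analytic derivative of f
    return 4 * x**3 - 6 * x + 1

def gradient_descent(x, lr=0.01, steps=1000):
    # Plain fixed-step gradient descent; converges to the minimum
    # of whichever basin contains the starting point x
    for _ in range(steps):
        x -= lr * df(x)
    return x

left = gradient_descent(-2.0)   # starts in the left basin (global minimum)
right = gradient_descent(2.0)   # starts in the right basin (suboptimal minimum)
```

Starting at -2.0 yields a far lower objective value than starting at +2.0, even though both runs converge; this is precisely the motivation for learning which region to begin the search in.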
Recently several simply computed graph theoretic measures of computer program complexity, testing and unstructuredness have been proposed. Most of them are based on a static analysis of the program graph. One of the best known and most widely accepted is the cyclomatic number. Another is the number of knots, or...
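The cyclomatic number mentioned above is simply computed from the program's control-flow graph as V(G) = E - N + 2P, where E is the number of edges, N the number of nodes, and P the number of connected components. A minimal sketch (not from the paper; graph representation is an assumption):

```python
from collections import defaultdict

def cyclomatic_number(nodes, edges):
    """V(G) = E - N + 2P for a control-flow graph given as
    a node list and a list of directed (u, v) edge pairs."""
    # Build an undirected adjacency map to count connected components
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, p = set(), 0
    for n in nodes:
        if n not in seen:
            p += 1  # new connected component found
            stack = [n]
            while stack:
                x = stack.pop()
                if x in seen:
                    continue
                seen.add(x)
                stack.extend(adj[x] - seen)
    return len(edges) - len(nodes) + 2 * p

# A single if-then-else: one decision point, so V(G) = 2
v = cyclomatic_number(
    ["entry", "cond", "then", "else", "exit"],
    [("entry", "cond"), ("cond", "then"), ("cond", "else"),
     ("then", "exit"), ("else", "exit")],
)
```

Because the measure depends only on the graph's edge and node counts, it can be obtained by static analysis alone, which is why it is among the most widely adopted of these measures.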
Parallel software development requires the flexibility to describe algorithms regardless of hardware specification, the ability to accommodate existing applications, and maintainability throughout the software life cycle. We propose the following model to address these issues. Our model incorporates aspects of the object-oriented and large-grain data flow programming paradigms, and...
We are convinced that the combination of data-parallel languages and MIMD hardware can make an important contribution to high speed computing. The data-parallel paradigm is a natural way to solve a large number of problems arising in science and engineering. Data-parallel programs are easier to design, implement, and debug than...