
BMAC Schedule and Abstracts, Spring 2002

This page lists the BMAC schedule and abstracts of talks for the Spring 2002 semester. See the current BMAC schedule for an abbreviated list without abstracts.

In vivo imaging of biological samples:
application of computer vision to biology and genomics.

Dr. June Medford
Department of Biology
Colorado State University
June.Medford@colostate.edu
January 14, 2002
3:30-4:30 PM Clark C251

Observation has been a key part of biology since Leeuwenhoek used a simple microscope to discover the world of "microbes". Today, biologists demand much greater abilities from their instruments. Biologists look for three key aspects: (1) the ability to image living samples (in vivo), (2) the ability to image how genes are controlled, and (3) the ability to do both on a "genomic" scale.

Optical coherence microscopy (OCM) is a new photonics-based imaging technology that provides biologists with all three abilities. Data from 1-10 million voxels are processed and used to produce a three-dimensional image using computer technology. The images are cropped, sliced, and rotated to see features of interest. We will discuss how the data are processed and what inputs are needed from computer vision to non-destructively see life, genes, and genomic information.
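To make the volume-processing step concrete, here is a minimal sketch of cropping, slicing, and rotating a three-dimensional voxel array. The array size, library calls, and rotation step are illustrative assumptions, not the speaker's actual OCM pipeline.

```python
# Illustrative sketch only: a toy 3-D "OCM volume" manipulated with NumPy.
# The shape, values, and rotation step are assumptions for demonstration.
import numpy as np
from scipy.ndimage import rotate

# A synthetic 128x128x128 volume (~2 million voxels) standing in for OCM data.
volume = np.random.rand(128, 128, 128).astype(np.float32)

# Crop a region of interest around the center of the sample.
roi = volume[32:96, 32:96, 32:96]

# Take an axial slice through the middle of the cropped region.
axial_slice = roi[:, :, roi.shape[2] // 2]

# Rotate the cropped volume 30 degrees about one axis to view a feature from the side.
rotated = rotate(roi, angle=30, axes=(0, 1), reshape=False)

print(roi.shape, axial_slice.shape, rotated.shape)
```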

On the Accuracy of Garbage Collection

Dr. Amer Diwan
Computer Science Department
University of Colorado, Boulder
diwan@cs.colorado.edu
January 28, 2002
4:10-5:00 PM Rockwell 167

An ideal garbage collector reclaims any heap-allocated records that cannot be reached by following pointers that will be dereferenced in the future of the computation (dynamically live pointers). A real garbage collector, of course, has no way of knowing which pointers will be dereferenced in the future; thus it may use compiler support to identify statically live pointers. The set of statically live pointers must include the set of dynamically live pointers in order for the garbage collector to be correct, but it may also contain pointers that are not dynamically live. Thus, a real collector may retain some records that would not be retained by an ideal garbage collector. "Garbage collector accuracy" refers to the quality of information available to the garbage collector that allows it to identify the pointers that it must examine during garbage collection (i.e., statically live pointers).

Accuracy is important for effective and efficient garbage collection. In this talk, I will present results from our experiments with two dimensions of accuracy: "type accuracy" and "liveness accuracy". Type accuracy determines whether or not a garbage collector can distinguish pointers from non-pointers. Liveness accuracy determines if a garbage collector knows whether or not a pointer will be dereferenced in the future. I will show that liveness accuracy is more important than type accuracy as far as reclaiming objects is concerned and that liveness accuracy can mean a much more effective garbage collector. I will conclude with a list of interesting open problems.

This work is supported by IBM and NSF.
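As a rough illustration of the two dimensions of accuracy, the toy tracer below marks reachable objects from two different root sets: one built with type accuracy alone, one with liveness information as well. The heap, stack-map format, and liveness table are invented for this sketch and are not the instrumentation used in the speaker's experiments.

```python
# Toy tracing collector contrasting type accuracy with liveness accuracy.
heap = {
    "A": [],          # object A points to nothing
    "B": ["A"],       # object B points to A
    "C": [],          # object C points to nothing
}

# Stack slots: (value, is_pointer, is_live).  Type accuracy supplies
# is_pointer; liveness accuracy additionally supplies is_live
# (will this pointer be dereferenced again?).
stack = [
    ("B", True, True),    # live pointer: B (and hence A) must be retained
    ("C", True, False),   # dead pointer: C will never be dereferenced again
    (42,  False, False),  # plain integer, not a pointer at all
]

def reachable(roots):
    seen, work = set(), list(roots)
    while work:
        obj = work.pop()
        if obj not in seen:
            seen.add(obj)
            work.extend(heap[obj])
    return seen

type_accurate_roots = [v for v, is_ptr, _ in stack if is_ptr]
liveness_accurate_roots = [v for v, is_ptr, live in stack if is_ptr and live]

print(sorted(reachable(type_accurate_roots)))      # ['A', 'B', 'C'] -- C retained
print(sorted(reachable(liveness_accurate_roots)))  # ['A', 'B']      -- C reclaimed
```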

Iterative Compilation in a Non-Linear Optimisation Space

Michael O'Boyle
University of Edinburgh, UK
currently visiting Stanford University
February 4, 2002
4:10-5:00 PM, Rockwell 167

Effective utilisation of modern processors by user programs relies on good compiler optimisation. The use of static analysis to select the best optimisation has been extensively studied for many years yet still fails to achieve significant performance improvement. This paper argues that the failure of static analysis is inevitable due to the underlying complexity of the optimisation task and proposes a new technique, namely iterative compilation. By describing compiler optimisation as finding the minimum execution time in a program transformation space, it shows that this space is highly non-linear and processor dependent and unsuitable for static optimisation. Using profile feedback in the form of execution time, an iterative compilation algorithm is described which searches a large but restricted transformation space and, in over 99% of cases, outperforms two static techniques. This generic technique was also applied to three SPEC benchmarks, where it outperformed the native optimising profile-directed compiler. Such an approach can achieve good results with few executions and is a viable approach for future compiler-based optimisation.
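The search strategy can be sketched as a simple loop: compile the program under many transformation settings, run each version, and keep the fastest. The compiler driver, flag names, and benchmark invocation below are hypothetical placeholders, not the paper's actual toolchain.

```python
# Minimal iterative-compilation sketch: search a (tile size, unroll factor)
# space using measured execution time as feedback.
import itertools
import subprocess
import time

TILE_SIZES = [16, 32, 64, 128]
UNROLL_FACTORS = [1, 2, 4, 8]

def build_and_time(tile, unroll):
    # Hypothetical compiler driver taking tiling and unrolling parameters.
    subprocess.run(
        ["mycc", "-O2", f"--tile={tile}", f"--unroll={unroll}",
         "-o", "bench", "bench.c"],
        check=True,
    )
    start = time.perf_counter()
    subprocess.run(["./bench"], check=True)
    return time.perf_counter() - start

best = None
for tile, unroll in itertools.product(TILE_SIZES, UNROLL_FACTORS):
    elapsed = build_and_time(tile, unroll)
    if best is None or elapsed < best[0]:
        best = (elapsed, tile, unroll)

print("best execution time %.3fs with tile=%d unroll=%d" % best)
```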

Mixed-Initiative Interaction = Mixed Computation

Prof. Naren Ramakrishnan
Department of Computer Science
Virginia Tech, Blacksburg, VA 24061
Email:
naren@cs.vt.edu
URL: http://vtopus.cs.vt.edu/~ramakris/
February 18, 2002
4:10-5:00 PM, Rockwell 167

We will show that partial evaluation can be usefully viewed as a programming model for realizing mixed-initiative functionality in interactive applications. Mixed-initiative interaction between two participants is one where the parties can take turns at any time to change and steer the flow of interaction. We concentrate on the facet of mixed-initiative referred to as `unsolicited reporting' and demonstrate how out-of-turn interactions by users can be modeled by `jumping ahead' to nested dialogs (via partial evaluation). Our approach permits the view of dialog management systems in terms of their support for staging and simplifying interactions; we characterize three different voice-based interaction technologies using this viewpoint. Time permitting, we show that the built-in form interpretation algorithm (FIA) in the VoiceXML dialog management architecture is actually a (well disguised) combination of an interpreter and a partial evaluator.
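A very small sketch of the core idea: out-of-turn input can be handled by partially evaluating the dialog with respect to the answers already supplied, leaving a "residual" dialog of remaining prompts. The slot names and dialog structure below are invented for illustration and are not the talk's actual system.

```python
# Partial evaluation of a slot-filling dialog with respect to known answers.
FLIGHT_DIALOG = ["origin", "destination", "date", "seat_class"]

def specialize(dialog, known):
    """Remove every prompt whose slot is already bound -- the residual dialog."""
    return [slot for slot in dialog if slot not in known]

# The system would normally ask for the origin first, but the caller blurts
# out the destination and date up front (unsolicited reporting).
residual = specialize(FLIGHT_DIALOG, {"destination": "Denver", "date": "Feb 18"})
print(residual)  # ['origin', 'seat_class'] -- the interaction "jumps ahead"
```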

This is an invited paper for the 2002 ACM SIGPLAN Symposium on Partial Evaluation (http://vtopus.cs.vt.edu/~ramakris/papers/miimc.pdf). Joint work with Robert Capra and Manuel Perez-Quinones.

Biography: Naren Ramakrishnan is an assistant professor of computer science at Virginia Tech. He received a Ph.D. in computer sciences from Purdue University in Aug 1997. His research interests include recommender systems, problem solving environments, data mining, and personalization. He is the recipient of a 2000 NSF CAREER award and the 2001 New Century Technology Council Innovation award.

Novel Approaches to Certificate Management in PKI

Prof. Ravi Mukkamala
Department of Computer Science
Old Dominion University
Email:
mukka@cs.odu.edu
February 25, 2002
4:10-5:00 PM, Rockwell 167

With the ever-increasing growth in electronic messaging and electronic commerce, the need for an infrastructure to provide confidentiality, security, and confidence for such exchanges to take place is quite evident. In these applications, public keys and certificates are used for authentication, authorization, and access control. One of the primary concerns in these systems is the handling of a certificate's revocation prior to its expiration. Existing schemes require that information about revoked certificates be sent only periodically to the directories. This gives rise to the problem of obsolescence. The other problem is the huge communication and processing cost due to long revocation lists. We have been looking at several techniques to increase the freshness of the revocation information. We have also developed some techniques to decrease the cost of propagating such information; in addition, these techniques improve the robustness of the overall system. The talk will summarize current work in Public-Key Infrastructure (PKI) and discuss the schemes developed by our group.
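The obsolescence problem can be shown with a rough sketch: a certificate revoked after the last revocation list was published still validates until the next periodic publication. The serial numbers and publication period below are made-up values, and this generic CRL check is not one of the speaker's proposed schemes.

```python
# Freshness problem with periodically published revocation lists (CRLs).
from datetime import datetime, timedelta

CRL_PERIOD = timedelta(days=7)

class Directory:
    def __init__(self, issued_at):
        self.issued_at = issued_at
        self.revoked = set()            # serials known at publication time

    def publish(self, now, revoked_so_far):
        """Periodic push of the revocation list to the directory."""
        if now - self.issued_at >= CRL_PERIOD:
            self.issued_at = now
            self.revoked = set(revoked_so_far)

def accepts(directory, serial):
    return serial not in directory.revoked

t0 = datetime(2002, 2, 1)
directory = Directory(issued_at=t0)
ca_revocations = {"serial-42"}              # revoked shortly after t0

# Days later the stale directory copy still accepts the revoked certificate.
print(accepts(directory, "serial-42"))      # True -- the obsolescence problem
directory.publish(t0 + CRL_PERIOD, ca_revocations)
print(accepts(directory, "serial-42"))      # False, only after the next push
```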

Indirect Reinforcement Learning: An Analysis of the Exploitation-Exploration Tradeoff and an Application to Human-Computer Interaction

Dr. Satinder Singh
Syntek Capital

Friday, March 22, 2002
1:10-2:00 PM, Eddy 100

The field of reinforcement learning (RL) has introduced into AI many abstract mathematical formulations of the AI problem, mostly by borrowing them from the fields of operations research and adaptive control. Has this been good for AI? I will argue that RL formulations have allowed us to ask and answer precise questions about many important AI issues as well as to bring a (mostly) principled design methodology to many application areas. I will illustrate these twin advantages through examples from my own work. In the first part of the talk I will provide an answer to a formulation of the "exploitation-exploration" tradeoff: should an agent exploit what it already knows or should it explore in the hope of learning something that leads to even greater long-term return? In the second part of the talk, I will describe why and how RL offers a powerful methodology for designing many human-computer interaction systems. I will illustrate this methodology through our design, construction and empirical evaluation of NJFun (a system that provides telephonic access to a database of fun activities in NJ), and show that RL measurably improves NJFun's performance.
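To make the exploitation-exploration tradeoff concrete, here is a minimal epsilon-greedy sketch on a two-armed bandit. The payoff probabilities and epsilon value are illustrative; the talk's own formulation and its answer are not reproduced here.

```python
# Epsilon-greedy agent on a two-armed bandit: mostly exploit the arm with the
# best estimated payoff, occasionally explore a random arm.
import random

TRUE_PAYOFF = [0.3, 0.7]          # unknown to the agent
estimates = [0.0, 0.0]
counts = [0, 0]
epsilon = 0.1

total_reward = 0
for step in range(10000):
    if random.random() < epsilon:            # explore: try a random arm
        arm = random.randrange(2)
    else:                                     # exploit: use current best guess
        arm = max(range(2), key=lambda a: estimates[a])
    reward = 1 if random.random() < TRUE_PAYOFF[arm] else 0
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]   # running mean
    total_reward += reward

print(estimates, total_reward)
```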

MSXmin: A Modular Multicast ATM Packet Switch with Low Delay and Hardware Complexity

Prof. Sibabrata Ray
Computer Science Department
The University of Alabama at Tuscaloosa
Email:
sibu@cs.ua.edu
March 25, 2002
4:10-5:00 PM, Rockwell 167

Here we propose and analyze the architecture of a large-scale, high-speed multicast switch called MSXmin. The hardware complexity of MSXmin is O(N log2 N) and its internal delay is O(log2 N). This compares favorably with other switches. While MSXmin is superior in terms of hardware complexity and latency, it is comparable to other multicast switches in terms of header overhead and translation table complexity. MSXmin is based on the group knockout principle. It is a dual-bit-controlled, self-routing switch.

Low Fidelity Algorithm Visualization

Dr. Christopher Hundhausen
Information and Computer Sciences Department
University of Hawai'i

Friday, March 29, 2002
9:00-9:50 AM, Rockwell 167

Computer science educators have traditionally used algorithm visualization (AV) software to create graphical representations of algorithms that are later used as visual aids in lectures, or as the basis for interactive labs. Typically, such visualizations are "high fidelity" in the sense that (a) they depict the target algorithm for arbitrary input, and (b) they tend to have the polished look of textbook figures. In contrast, "low fidelity" visualizations illustrate the target algorithm for a few, carefully chosen input data sets, and tend to have a sketched, unpolished appearance. Drawing on the findings of ethnographic and experimental studies, I motivate the use of low fidelity AV technology as the basis for an alternative learning paradigm in which students construct their own visualizations, and then present those visualizations to their instructor and peers for feedback and discussion. To explore the design space of low fidelity AV technology, I present a prototype language and system derived from empirical studies in which students constructed and presented visualizations made out of simple art supplies. The prototype language and system pioneer a novel technique for programming visualizations based on spatial relations, and a novel presentation interface that supports reverse execution and dynamic mark-up and modification. Moreover, the prototype provides an ideal foundation for what I see as the algorithms classroom of the future: the "algorithms studio," which I will be exploring over the next five years under funding from the National Science Foundation.

A State Variable Approach for Feedback Software Process Control

Joao W. L. Cangussu
Department of Computer Science
Purdue University
cangussu@cs.purdue.edu

Monday, April 1, 2002
1 PM, B101 Engineering

A novel approach to modeling the system test phase of the software life cycle is presented. This approach is based on concepts and techniques from control theory and is useful in computing the effort required to reduce the number of errors and the schedule slippage under a changing process environment. Results from these computations are used, and possibly revised, at specific checkpoints in a feedback-control structure to meet the schedule and quality objectives. The use of a formal approach allows for automatic calibration of the model by means of System Identification techniques. A sensitivity analysis based on tensor products is conducted to analyze the model's behavior under variations in the parameters. Two case studies were conducted to study the behavior of the proposed model; one uses data from a commercial project. The outcome from these two case studies, combined with the results of the sensitivity analysis, suggests that the proposed model might well be the first significant milestone along the road to a formal and practical theory of software process control.
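A hedged sketch of what a discrete state-variable model with feedback looks like, in the spirit of x[k+1] = A x[k] + B u[k]: the state here is (remaining errors, schedule slippage) and the control input is test effort. The matrices and feedback law are invented for illustration and are not the calibrated model from the talk.

```python
# Discrete state-space process model with a simple proportional feedback law.
import numpy as np

A = np.array([[0.95, 0.00],      # errors decay slowly on their own
              [0.02, 1.00]])     # outstanding errors add to slippage
B = np.array([[-0.30],           # extra test effort removes errors ...
              [ 0.05]])          # ... but also costs some schedule

x = np.array([[100.0],           # 100 estimated residual errors
              [  0.0]])          # no slippage yet

for week in range(12):
    u = np.array([[0.1 * x[0, 0]]])   # feedback: effort proportional to errors
    x = A @ x + B @ u                  # state update at the weekly checkpoint

print(x.ravel())   # remaining errors and accumulated slippage after 12 weeks
```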

Optical High-Speed Networking

Dr. Mohsen Guizani
Computer Science Department
University of West Florida

Tuesday, April 2, 2002 9:00 a.m.
Lory Student Center Room 207

Computer communication has become an essential part of our infrastructure. Networking is used in every aspect of business, including advertising, production, shopping, planning, and accounting. Consequently, most corporations have multiple networks that should be interconnected together and expected to work efficiently at high-speed. There is an urgent need for instantaneous access to information on-line all around the world.

On the other hand, continued growth of the global Internet is one of the most interesting and exciting phenomena in networking. Today, the Internet has grown into a communication system that reaches millions of people worldwide. In the US alone, the Internet connects most corporations, colleges and universities, as well as federal, state, and local government offices. It has reached some elementary, junior, and senior high schools and will soon reach almost all the rest. In addition, many private residences have access to the Internet through dial-up telephone connections. But newer technologies are providing even higher capacity services.

Fiber-optic technology can be considered our hope for meeting this explosive growth due to its inherent advantages, such as its huge bandwidth, low attenuation, low signal distortion, low power requirement, and low cost. Since fiber has been used successfully as a medium, attention is now focusing on designing the all-optical network (including the switching, routing, and restoration) that will serve as the backbone for the next-generation Internet. To achieve this goal, a few research questions have to be answered related to optical switch design architectures, quality of service (QoS), wavelength assignment, and survivability and fault tolerance, just to name a few. Just a year ago, researchers at MIT, Harvard, and Stanford managed to stop light, traveling at a blistering 186,000 miles per second, and send it off again. This represents a major innovation that will enable us to store data carried by photons (to be published in the journals Nature and American Physical Letters).

In this talk, we discuss the state of the art in designing the all-optical network that will serve as the Internet backbone, present the ongoing research projects around the world in this field, and discuss some of our efforts in this regard. If time permits, we will present some of our other research activities that lead toward the same objective.

Pervasive Information Communities Organization (PICO)

Dr. Behrooz Shirazi
Department of Computer Science and Engineering
The University of Texas at Arlington
shirazi@cse.uta.edu

Friday, April 5, 2002
Time and Room to be announced.

Consider the following scenario: a car-accident victim in a critical condition needs immediate attention by medical and other personnel who are in geographically distributed locations. Timely and automated creation of mission-oriented communities of software agents representing doctors, hospitals, ambulances, and other personnel, and their effective collaboration, are essential to save the victim. The question is: can the current networking infrastructure and the Internet model meet the demands of such a scenario? The answer is no! The existing push/pull based Internet computing model, primarily designed to handle static information, is inadequate to meet the requirements of proactive services for sustained periods.

In this talk we present a general-purpose framework called Pervasive Information Communities Organization (PICO), which deals with the creation of mission-oriented dynamic software communities that perform tasks on behalf of users and devices autonomously. PICO consists of software entities, called delegents (intelligent delegates) and hardware entities, called camileuns (connected, adaptive, mobile, intelligent, learned, efficient, ubiquitous nodes). PICO encompasses computing, communication, and dynamic context-aware information processing by employing the concept of software communities. Communities of delegents are created and deployed dynamically and autonomously to carry out missions. PICO has application in such areas as telemedicine, military, crisis management and avoidance, office management, smart homes, entertainment, and education. The PICO architecture, its application to telemedicine, and the related research problems under investigation are presented.

Bug Riddance: Hard Core Software Engineering

Rex Page
Computer Science Department
The University of Oklahoma
Email:
page@ou.edu
April 8, 2002
4:10-5:00 PM, Rockwell 167
Jointly sponsored by the Computer Science and Electrical and Computer Engineering Departments.

Students, when asked how they know their programs work, almost always respond that their programs have been thoroughly tested. Software development professionals give the same answer, except that they're likely to add a disclaimer to make it clear that their software doesn't come with a guarantee. The professional knows that no amount of software testing can ensure correctness, yet few software developers have any significant experience with other approaches to ridding software of bugs. Why is that?

Software engineering education gives short shrift to reasoning about software. Mathematical reasoning, if it is discussed at all, usually appears in the context of number theory, difference equations, or some other purely mathematical setting. It is rarely applied directly to software. As a result, software developers, even those who have received a formal education in the subject, have no competence in applying mathematical logic to the problem of software correctness.

This seminar discusses some experience in teaching students to use formal logic to reason about software, cites some preliminary estimates of the impact on programming effectiveness, and outlines a core collection of courses in which students make serious use of logic, in addition to testing, to help them answer questions about how well their programs work.
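A small example of the kind of reasoning the talk advocates, alongside testing: an invariant argument covers all inputs, whereas tests only cover the cases actually run. The function and invariant below are illustrative and are not drawn from the course materials.

```python
def sum_to(n):
    """Return 0 + 1 + ... + n for n >= 0."""
    total = 0
    for i in range(n + 1):
        # Loop invariant: at the top of each iteration, total == sum of 0..i-1.
        total += i
    # By induction on the invariant, total == n*(n+1)//2 for every n >= 0 --
    # a guarantee that no finite test suite can give by itself.
    return total

assert sum_to(10) == 55                                        # a test checks one point ...
assert all(sum_to(n) == n * (n + 1) // 2 for n in range(100))  # ... or a few
```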

BIO: Rex Page has worked in academia and industry, in about equal measure, for just over three decades. Now a member of the Computer Science faculty at the University of Oklahoma, he is working to establish a BS in Software Engineering with a solid grounding in the application of mathematical logic to the practice of software development.

Algorithms for Computational Biology and Data Mining

Dr. Ross McConnell
University of Colorado at Denver
Friday, April 12, 2002
9:00 AM, 113 Natural Resources

Finding the places in a text where a string occurs is a problem familiar to everyone who has used a word processor or a search engine. In computational biology and in data mining, the text is sometimes quite large, and must be searched intensively by algorithms that generate large numbers of search strings. When the running time is dominated by these searches, it is worthwhile to pre-compute a tree-like representation of the text that speeds up the queries dramatically. A separate problem that is motivated by biology is the reconstruction of the genome of an organism, given a set of genetic fragments and information about which pairs of fragments intersect in the genome. Because of the linear topology of DNA, one must find a linear arrangement of the fragments that is consistent with the intersection data. The talk will review my recent work on data structures and algorithms for these problems and for related problems.
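As a small illustration of pre-computing an index so that repeated substring queries become fast, the sketch below builds a suffix array and answers queries by binary search; it stands in for the tree-like structures (such as suffix trees) mentioned in the abstract and is not the speaker's own data structure.

```python
# Suffix-array index: build once, then answer substring queries quickly.
def build_suffix_array(text):
    # O(n^2 log n) construction -- fine for a sketch, not for a whole genome.
    return sorted(range(len(text)), key=lambda i: text[i:])

def occurs(text, sa, pattern):
    """Binary search the sorted suffixes for one that starts with pattern."""
    lo, hi = 0, len(sa)
    while lo < hi:
        mid = (lo + hi) // 2
        if text[sa[mid]:sa[mid] + len(pattern)] < pattern:
            lo = mid + 1
        else:
            hi = mid
    return lo < len(sa) and text[sa[lo]:].startswith(pattern)

genome = "ACGTACGGTACGT"
sa = build_suffix_array(genome)
print(occurs(genome, sa, "GGTA"))   # True
print(occurs(genome, sa, "TTT"))    # False
```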

Software Archaeology

Grady Booch
Chief Scientist
Rational Software Corporation
Email:
egb@rational.com

April 15, 2002
4:10-5:00 PM, Room TBA

When approaching an existing system, software developers spend a considerable amount of time trying to grok the intent of its original designers. When approaching a new system, most developers plagiarize from the architecture of previous systems that worked. In either case, the developer is acting as a software archaeologist. This talk will examine the activities of the archaeologist and their relationship to tools, patterns, methods, and the emerging concepts of aspect-oriented programming.

Biography: Grady Booch is recognized internationally for his innovative work on software architecture, modeling, and software engineering. He has been with Rational Software Corporation as Chief Scientist since its founding in 1980. Grady is one of the original developers of the Unified Modeling Language (UML) and was also one of the original developers of several of Rational's products. Grady has served as architect and architectural mentor for numerous complex software systems around the world.

Grady is the author of six best-selling books, including the UML User Guide and the seminal Object-Oriented Analysis and Design with Applications. Grady has published several hundred technical articles on software engineering, including papers published in the early '80s that originated the term and practice of object-oriented design. He has lectured and consulted worldwide.

Grady is a member of the Association for Computing Machinery (ACM), the Institute of Electrical and Electronics Engineers (IEEE), the American Association for the Advancement of Science (AAAS), and Computer Professionals for Social Responsibility (CPSR). He is also an ACM Fellow, a Rational Fellow, and a board member of the Agile Alliance. Grady received his BS in engineering from the United States Air Force Academy in 1977 and his MSEE from the University of California at Santa Barbara in 1979.

A Cryptographic Solution to Implement Access Control in a Hierarchy and More

Dr. Indrakshi Ray
Computer Science Department
Colorado State University
Email:
iray@cs.colostate.edu
April 22, 2002
4:10-5:00 PM, Rockwell 167

The need for access control in a hierarchy arises in several different contexts. One such context is managing the information of an organization where the users are divided into different security classes depending on who has access to what. Several cryptographic solutions have been proposed to address this problem -- the solutions are based on generating cryptographic keys for each security class such that the key for a lower-level security class depends on the key for the security class that is higher up in the hierarchy. Most solutions use complex cryptographic techniques: integrating these into existing systems may not be trivial. Others have an impractical requirement: if a user at a security level wants to access data at lower levels, then all intermediate nodes must be traversed. Moreover, if there is an access control policy that does not conform to the hierarchical structure, such a policy cannot be handled by existing solutions. We propose a new solution that overcomes the above-mentioned shortcomings. Our solution not only addresses the problem of access control in a hierarchy but can also be used for more general cases. It is a scheme similar to the RSA cryptosystem and can be easily incorporated into existing systems.
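A generic illustration of hierarchical key assignment: each class's key is derived from its parent's key, so the holder of a higher-level key can compute the keys of everything below it but not vice versa. This hash-based sketch only shows the access pattern; it is not the RSA-like scheme proposed in the talk, and the hierarchy and key material are made up.

```python
# Top-down key derivation over a toy security hierarchy.
import hashlib

HIERARCHY = {                 # parent -> children
    "director": ["manager"],
    "manager": ["clerk"],
    "clerk": [],
}

def derive(parent_key, child_name):
    return hashlib.sha256(parent_key + child_name.encode()).digest()

keys = {"director": b"top-secret-master-key"}
pending = ["director"]
while pending:
    node = pending.pop()
    for child in HIERARCHY[node]:
        keys[child] = derive(keys[node], child)   # one-way: child from parent
        pending.append(child)

# The director can recompute the clerk's key from the director key alone,
# but the clerk cannot invert SHA-256 to recover the manager's or director's key.
print(keys["clerk"] == derive(derive(keys["director"], "manager"), "clerk"))  # True
```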

Three Tools to Help with Cluster and Grid Computing: ATLAS, PAPI, and NetSolve

Prof. Jack Dongarra
University Distinguished Professor of Computer Science
University of Tennessee
May 2, 2002
11 AM, Engineering E204

In this talk we will look at ATLAS, a method for automatically generating fast, robust kernels for numerical operations, and PAPI, a method for measuring performance on today's processors. ATLAS is a software package that will automatically generate highly optimized numerical kernels for commodity processors. As the underlying computing hardware doubles its speed every eighteen months, it often takes more than a year for software to be optimized or tuned for performance on a newly released CPU. Users tend to see only a fraction of the power available from any new processor until it is well on the way to obsolescence.
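A toy version of the empirical-tuning idea behind ATLAS: time a blocked matrix multiply for several block sizes on the machine at hand and keep the fastest. ATLAS itself generates and benchmarks C kernels; this NumPy loop is only a sketch of the search strategy, with made-up block sizes and problem size.

```python
# Empirical autotuning sketch: measure, compare, keep the best parameter.
import time
import numpy as np

N = 512
A = np.random.rand(N, N)
B = np.random.rand(N, N)

def blocked_matmul(A, B, bs):
    C = np.zeros((N, N))
    for i in range(0, N, bs):
        for j in range(0, N, bs):
            for k in range(0, N, bs):
                C[i:i+bs, j:j+bs] += A[i:i+bs, k:k+bs] @ B[k:k+bs, j:j+bs]
    return C

timings = {}
for bs in (32, 64, 128, 256):
    start = time.perf_counter()
    blocked_matmul(A, B, bs)
    timings[bs] = time.perf_counter() - start

best = min(timings, key=timings.get)
print("fastest block size on this machine:", best, timings)
```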

The Performance API (PAPI) project specifies a standard application programming interface (API) for accessing hardware performance counters available on most modern microprocessors. These counters exist as a small set of registers that count Events, occurrences of specific signals related to the processor's function. Monitoring these events facilitates correlation between the structure of source/object code and the efficiency of the mapping of that code to the underlying architecture.

Finally, we will look at a system called NetSolve that allows users to access computational resources, such as hardware and software, distributed across the network. This project has been motivated by the need for an easy-to-use, efficient mechanism for using computational resources remotely. Ease of use is obtained as a result of different interfaces, some of which do not require any programming effort from the user. Good performance is ensured by a load-balancing policy that enables NetSolve to use the computational resources available as efficiently as possible. NetSolve offers the ability to look for computational resources on a network, choose the best one available, solve a problem (with retry for fault tolerance), and return the answer to the user.
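A hedged sketch of the pattern just described: discover candidate servers, pick the least loaded, submit the problem, and retry elsewhere on failure. The Server class and its load/solve methods are hypothetical; the real NetSolve client API is not shown here.

```python
# Generic "choose the best resource, solve with retry" pattern.
import random

class Server:
    def __init__(self, name, load):
        self.name, self.load = name, load
    def solve(self, problem):
        if random.random() < 0.2:                  # simulate an occasional failure
            raise ConnectionError(self.name)
        return f"{problem} solved on {self.name}"

def solve_remotely(problem, servers, retries=3):
    for _ in range(retries):
        best = min(servers, key=lambda s: s.load)  # simple load-balancing policy
        try:
            return best.solve(problem)
        except ConnectionError:
            servers = [s for s in servers if s is not best]   # retry elsewhere
    raise RuntimeError("no server could solve the problem")

pool = [Server("fast", 0.2), Server("busy", 0.9), Server("spare", 0.4)]
print(solve_remotely("dense linear system", pool))
```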

Short Biography: Jack Dongarra holds an appointment as University Distinguished Professor of Computer Science in the Computer Science Department at the University of Tennessee. He specializes in numerical algorithms in linear algebra, parallel computing, the use of advanced-computer architectures, programming methodology, and tools for parallel computers. Other current research involves the development, testing and documentation of high quality mathematical software. He was involved in the design and implementation of the open source software packages EISPACK, LINPACK, the BLAS, LAPACK, ScaLAPACK, Netlib, PVM, MPI, NetSolve, ATLAS, PAPI, and Harness; and is currently involved in the design of algorithms and techniques for high performance computer architectures. He is a Fellow of the AAAS, ACM, and IEEE and a member of the National Academy of Engineering.

An Overview and Trends in High Performance Computing

Prof. Jack Dongarra
University Distinguished Professor of Computer Science
University of Tennessee
May 2, 2002
2 PM, Lory Student Center, Room 224-226

In this talk we will look at how high-performance computing has changed over the last 10 years and look toward the future in terms of trends. In addition, we advocate `Computational Grids' to support `large-scale' applications. These must provide transparent access to the complex mix of resources - computational, networking, and storage - that can be made available through the aggregation of resources. The vision is of uniform, location-independent, and transient access to the computational, catalogued data, instrument system, and human collaborator resources of contemporary research activity in order to facilitate the solution of large-scale, complex, multi-institutional/multidisciplinary data- and computation-based problems.


Refreshments will be served.


For questions, suggestions, or to schedule a BMAC talk, contact Jim Bieman at bieman@cs.colostate.edu or by telephone at 970-491-7096. BMAC stands for "Barney's Monday Afternoon Club", in honor of Barney Marchner, a past chair of the Department of Computer Science.