Research Interests

Keywords/Buzzwords: codesign, embedded systems, FPGA's, formal methods, functional programming, loop parallelization, parallel algorithms, reconfigurable computing, tiling, VLSI signal processing, high-performance computing.

My research concerns the foundations of mapping parallel computations to parallel architectures, using the polyhedral model which I helped develop in the eighties when I worked on systolic array synthesis. I still work in essentially the same area, and there is a resurgence of interest in these methods. My current research agenda is silicon compilation in the polyhedral model. I investigate methods, tools and techniques to transform/compile equations directly to silicon. Silicon systems increasingly use programmable processors, so the problems include many aspects of conventional compilation and parallelization.

The Silicon Compilation Problem

[Although this section is dated, I believe that it is still relevant] Given the advances in VLSI fabrication technology, billion-transistor chips are around the corner. The US Semiconductor Industry Association, in collaboration with semiconductor manufacturers worldwide, regularly puts out the International Technology Roadmap for Semiconductors (ITRS), which charts the industry's progress and also sets projections that serve to drive it (e.g., Moore's law is really a self-fulfilling prophecy). The 1999 edition of the Roadmap (updated 2000) chalked out Systems-on-a-Chip (SoC's) as an emerging area with unique and significant challenges, and stated (p. 5, Introduction) that

... With the growing importance of high-volume consumer markets and the ability to integrate almost all aspects of a system design on a single chip, the Roadmap has included an additional vehicle to capture the requirements of this important, emerging area. This vehicle is referred to as a System-On-a-Chip (SoC). There are a number of characteristics that distinguish a mainstream SoC, but the main consideration is that it is primarily defined by its performance and cost rather than by technological limits... Design productivity is a key requirement, particularly true for the SoC category, where the time-to-market for a particular application-specific capability is a key requirement of the designs. For primarily cost and time-to-market reasons, we expect that product families will be developed around specific SoC architectures and that many of these SoC designs will be customized for their target markets by programming the part (using software, FPGA, Flash and others). This category of SoC is referred to as a programmable platform. The design tools and technologies needed to assemble, verify and program such embedded SoC's will present a major challenge over the next decade.

Whenever any two parameters grow exponentially but with different growth rates, their ratio also grows exponentially, leading to an exponential "gap". Witness, for example, the memory wall effect: memory bandwidth is increasing "only" at a 7% rate per year, while density, speed, etc., are increasing much faster, and hence microprocessor designers seek ways to alleviate memory latencies by increasing the volume, speed and number of caches and on-chip memories. In the SoC arena, a similar gap exists in design productivity: the number of transistors that can be put on a chip is increasing much faster than designer productivity is growing through better tools. This is why SoC design tools will present a major challenge.

A SoC is defined by its performance (i.e., a weighted combination of many criteria: speed, power/energy consumption, silicon area, etc.) and its cost. Together, they define a very large multicriterion design space of valid/feasible solutions. Depending on the application, some criteria are constraints to be satisfied, while others are so-called benefits to be optimized over the different viable solutions. Hence the key design challenge for SoC's is systematic, efficient, rapid and (provably) correct exploration of this design space. This is the silicon compilation problem for SoC's. Viewed in the broadest sense of the term, it consists of methods, techniques and tools to transform an application (system) specification to silicon (a chip) rapidly, surely, optimally, and, as far as possible, automatically. Note that there may not be a single "best" solution, so the "compiler" must have methods for systematic design space exploration.

Need for Formal Methods

It has been my conviction for a number of years that formal methods are the sine qua non for this problem. The SIA Roadmap (produced by a very pragmatic segment of the computing milieu) reconfirms that this is far from being an ivory-tower academician's point of view. The chapter on design and its associated challenges for SoC's identifies System-Level Design as a critical need, and states [p. 42, "Difficult Challenges in System Design (below 100nm, beyond 2005)"] that,

Additional algebraic or other discrete mathematically based [sic] formalisms to cover the various application domains will need to be developed. It is only with more formal specification forms that automated tools can be developed to capture, verify and then optimize a design. Digital Signal Processing theory, when mapped to the various models of computation such as dynamic dataflow, or synchronous dataflow, is an example of a new algebra for specifying, and manipulating systems in their domain and at a higher level of abstraction.

The polyhedral model: restrictions and generality

As with any aspect of computing and engineering, there is a trade-off between efficiency and generality. I believe that the challenges of SoC silicon compilation can be addressed by curtailing generality. Many applications in the SoC domain are dominated by repetitive computations (e.g., do or for loops). These loops may be imperfectly nested; their bounds are affine functions of the surrounding loop indices, and their bodies are assignment statements accessing array and scalar variables through affine dependence/access functions. Such loops are called affine control loops (ACL's). Over a number of years, an elegant and powerful formalism called the polyhedral model has been developed for reasoning about ACL's and related classes of programs. It is a formal model that views programs as mathematical objects (specifically, as systems of equations) that obey certain closure properties and have well-defined semantics. The model enables us to (i) specify, (ii) reason about, (iii) analyze, (iv) transform, and (v) compile/parallelize ACL's or equations. Furthermore, the model provides a certain generality in terms of architecture independence. Thus, although we consider a restricted class of programs/computations, the tools for analysis, parallelization and formal transformation treat programs and hardware descriptions in a single, unified framework.
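As a concrete illustration (my own example, not one drawn from any particular paper), the following C99 loop nest is an ACL: it is imperfectly nested, every loop bound is an affine function of the outer index and the size parameter, and every array access uses affine index expressions:

```c
/* Forward substitution on a lower-triangular system L x = b.
 * The iteration domain { (i,j) : 0 <= i < n, 0 <= j < i } is a
 * polyhedron, which is exactly what lets the polyhedral model
 * represent, analyze and transform this computation. */
void forward_substitution(int n, double L[n][n], double b[n], double x[n]) {
    for (int i = 0; i < n; i++) {        /* affine bounds: 0 and n */
        x[i] = b[i];                     /* imperfect nesting: a statement
                                            between the two loops */
        for (int j = 0; j < i; j++)      /* affine bound: j < i */
            x[i] -= L[i][j] * x[j];      /* affine accesses: [i][j], [j] */
        x[i] /= L[i][i];
    }
}
```

Note the uniform flavor of the accesses: every subscript is an affine combination of i, j and n, so the exact dependences (x[i] depends on x[j] for all j < i) can be computed at compile time.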

Architecture independence

Since we envision billion-transistor chips, whose architecture we cannot predict today, our design tools, methods and formalisms must necessarily be independent of specific architectural features. In this context, silicon is abstracted as "merely" a resource which can be arbitrarily reconfigured depending on the application program. This view encompasses the currently popular notion of reconfigurability (that provided by FPGA's) and also includes architectures of larger granularity such as the RAW machine (MIT), the TTA architecture (Delft), the evolution of VLIW to multiple clusters (as in the PICO project at HP Labs), the increasing levels of SIMD parallelism available in microprocessors, and as yet unforeseen evolutions of computer architectures, which will most likely have reconfigurability at many different levels of granularity. The space of target architectures also includes full-custom ASIC's and programmable (instruction-set) processors (ISP's) with memory, interconnect, buses, etc. The ISP's themselves may be application-specific or general-purpose, and, in the future, increasingly parallel.

Instructions to prospective graduate students

If you are already a graduate student at CSU, please drop me an email and we can discuss topics of mutual interest.

If you are not at CSU, I'm sorry, but I don't have the time to respond to unsolicited mass mailings, unless you have some specific questions/issues to discuss. So please first take a look at the instructions that Vivek Pai at Princeton University eloquently provides on his page [recent update: Vivek Pai has left Princeton and his home page is no longer easily accessible. However, his colleague David August has similar instructions]. After that, remember that Poitiers is a French town not far from Rennes, where I spent a number of years before moving to Fort Collins. So "Dear M. Rennes" should replace "Sidney Poitier" in Pai's instructions [and the word "blue" in those of David August].