Innovations In Teaching Software Development Skills
Educators have been looking at various ways to improve the classroom experience, incorporating ideas
such as active learning, online lectures, and various communications technologies. However, the way in
which programming and software development are taught has not changed much at many schools.
I'll be talking about various approaches being developed at Maryland to improve the way we teach
programming and software development. Much of this has evolved through the Marmoset project, which is a
web-based framework for handling student project submission and evaluation. Marmoset, also known as the
submit server, accepts student project submissions and provides students with limited access to test
results before the project deadline. It provides various incentives for students to start work on
projects early and practice test-driven development. It also provides students with access to tools such
as static analysis and code coverage data, and supports web-based code reviews. Code reviews include
instructional code reviews (where TAs or faculty review student code), peer code reviews (where each
student reviews code by two other students), and canonical code reviews (where all students are asked to
review one specific code example, perhaps something from a standard library). Marmoset is open source
and is used in most CS programming courses at UMD and by several other universities.
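To make the test-driven workflow concrete, here is a minimal, hypothetical example of the kind of JUnit test a student might write before implementing the method it exercises; the class, methods, and assignment are invented for illustration and are not taken from an actual UMD project.

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Hypothetical sketch of the test-first style the Marmoset workflow encourages:
// the student writes tests before (or alongside) the implementation, then uses
// the submit server's limited release-test feedback before the deadline.
public class WordCountTest {
    // Minimal implementation under test (would normally live in its own file).
    static int count(String s) {
        String trimmed = s.trim();
        return trimmed.isEmpty() ? 0 : trimmed.split("\\s+").length;
    }

    @Test
    public void emptyStringHasZeroWords() {
        assertEquals(0, count(""));
    }

    @Test
    public void wordsAreSeparatedByArbitraryWhitespace() {
        assertEquals(3, count("to be  or"));
    }
}
```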
Speaker biography:
Bill Pugh received a Ph.D. in Computer Science (with a minor in Acting) from Cornell University. He was a
professor at the University of Maryland for 23.5 years, and in January 2012 became professor emeritus to start
a new adventure somewhere at the crossroads of software development and entrepreneurship.
Bill Pugh is a Packard Fellow, and invented Skip Lists, a randomized data structure that is widely taught in
undergraduate data structure courses. He has also made research contributions in techniques for analyzing
and transforming scientific codes for execution on supercomputers, and in a number of issues related to the
Java programming language, including the development of JSR 133 - Java Memory Model and Thread
Specification Revision. Professor Pugh's current research focus is on developing tools to improve software
productivity, reliability and education. Current research projects include FindBugs, a static analysis tool for
Java, and Marmoset, an innovative framework for improving the learning and feedback cycle for student
programming projects.
Professor Pugh has spoken at numerous developer conferences, including JavaOne, Goto/Jaoo in Aarhus, the
Devoxx conference in Antwerp, and CodeMash. At JavaOne, he received six JavaOne RockStar awards, given
to the speakers who receive the highest evaluations from attendees.
Professor Pugh spent the 2008-2009 school year on sabbatical at Google, where, among other activities, he
learned how to eat fire.
Research and Engineering Challenges in FindBugs
I'll talk about some of the research and engineering issues in FindBugs, a static analysis tool for
finding errors in Java programs (and other languages that compile to Java byte code). FindBugs has been
downloaded more than a million times, incorporated into all major commercial static analysis tools, and
is used tens of thousands of times a day worldwide. After a brief review of FindBugs, I'll talk about the
design of the type qualifier analysis built into FindBugs, the challenges of annotation-driven
frameworks, the null dereference analysis used by FindBugs, and new ideas about how FindBugs could be
made user extensible.
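As a concrete illustration of the null dereference analysis mentioned above, the following Java sketch shows the shape of code such an analysis flags. The @CheckForNull annotation is in the spirit of the JSR-305/FindBugs annotations; exact annotation names, bug patterns, and messages vary by FindBugs version, so treat this as illustrative rather than a statement of what FindBugs reports.

```java
import javax.annotation.CheckForNull;

// Illustrative only: the kind of code a null-dereference analysis can flag.
public class Directory {
    // The annotation tells the analysis that lookup() may legitimately return null.
    @CheckForNull
    static String lookup(String key) {
        return "host".equals(key) ? "example.org" : null;
    }

    static int unsafeLength() {
        // A FindBugs-style analysis can warn here: a value documented as
        // possibly null is dereferenced without a null check.
        return lookup("port").length();
    }

    static int safeLength() {
        String host = lookup("host");
        return host == null ? 0 : host.length(); // checked path: no warning expected
    }
}
```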
Bio-Inspired Supercomputing
The two major issues in the formulation and design of parallel multiprocessor
systems are algorithm design and architecture design. Parallel
multiprocessor systems should be designed to facilitate the design
and implementation of efficient parallel algorithms that optimally exploit
the capabilities of the system. From an architectural point of view, the
system should have low hardware complexity, be capable of being built from
components that can be easily replicated, exhibit desirable
cost-performance characteristics, be cost-effective, and exhibit good
scalability in terms of hardware complexity and cost with increasing problem
size. In distributed-memory systems, the processing elements can be considered
to be nodes that are connected together via an interconnection network. In
order to facilitate algorithm and architecture design, we require that the
interconnection network have a low diameter, the system be symmetric and
each node in the system have low degree of connectivity. For most symmetric
network topologies, however, the requirements of low degree of connectivity
for each node and low network diameter are often conflicting. Low network
diameter often entails that each node in the network have a high degree of
connectivity resulting in a drastic increase in the number of inter-processor
connection links. A low degree of connectivity on the other hand, results in
a high network diameter which in turn results in high inter-processor
communication overhead and reduced efficiency of parallelism. Reconfigurable
networks attempt to address this tradeoff. In this presentation, we discuss
our design of a reconfigurable network topology that is targeted at medical
applications; however, others have found a number of interesting properties
of the network that make it ideal for applications in computational
biology as well as information engineering. The design that will be presented
in this talk is a bio-inspired reconfigurable interconnection topology (the
work is based on an ongoing project).
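To make the degree/diameter tension concrete (this is a generic, textbook illustration and not the bio-inspired topology presented in the talk): a ring of N nodes keeps the degree fixed at 2 but has diameter N/2, while a hypercube on N = 2^k nodes achieves diameter log2 N at the cost of degree log2 N per node.

```java
// Generic illustration of the degree/diameter tradeoff for two classic static
// topologies; not the reconfigurable bio-inspired network discussed in the talk.
public class TopologyTradeoff {
    public static void main(String[] args) {
        for (int k = 3; k <= 10; k++) {
            int n = 1 << k;            // number of nodes, N = 2^k
            int ringDegree = 2;        // every node has exactly two neighbors
            int ringDiameter = n / 2;  // worst case: halfway around the ring
            int cubeDegree = k;        // one link per dimension
            int cubeDiameter = k;      // at most k address bits to flip
            System.out.printf("N=%4d   ring: degree=%d, diameter=%3d   hypercube: degree=%2d, diameter=%2d%n",
                    n, ringDegree, ringDiameter, cubeDegree, cubeDiameter);
        }
    }
}
```

Reconfigurable topologies such as the one presented attempt to sidestep this tradeoff rather than accept either extreme.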
Hamid R. Arabnia received a Ph.D. degree in Computer Science from the
University of Kent (Canterbury, England) in 1987. Arabnia is currently a
Professor of Computer Science at University of Georgia (Georgia, USA), where
he has been since October 1987. His research interests include parallel and
distributed processing techniques and algorithms, supercomputing,
interconnection networks, and applications (in particular, in image
processing, medical imaging, bioinformatics, knowledge engineering, and other
computationally intensive problems). Dr. Arabnia is Editor-in-Chief of The
Journal of Supercomputing published by Springer; Co-Editor of Journal of
Computational Science published by Elsevier; and serves on advisory/editorial
boards of 35 other journals.
How we teach impacts student learning, performance, and persistence: Results from three recent studies of Peer Instruction in Computer Science
What a course “is” and “does” can be viewed through the lens of instructional design. Any course should be based around the learning goals we have for students taking the course
– what it is we want them to know and be able to do when they finish the course. Describing how we go about supporting students in achieving those goals can be broken into two
parts: a) the content/materials we choose to cover and b) the methods/pedagogical approaches we employ. In this talk I review the results of three soon-to-be-published studies
looking at the impact of method or pedagogical approach in computing courses. Specifically, I'll review our experience using the Peer Instruction method (aka “clickers”) in
computer science courses at UC San Diego and discuss the following:
a) an observed 50% reduction in fail rate in four computing courses adopting PI,
b) an in-situ comparison study showing Peer Instruction students to perform 6% better than students in a standard “lecture” setting, and
c) a 30% increase in retention of majors after adopting a trio of best practices in our introductory programming course (Peer Instruction, Media Computation, and Pair
Programming).
Advances in High-Performance Computing: The Race to the Top
In recent years, the world has seen an unprecedented international race for leadership in supercomputing. Over the past decade or so, the title of top-ranked supercomputer has rotated every several months among the US, Japan, and China. In addition, over almost one decade, the speed of the top supercomputer has risen four orders of magnitude and is moving rapidly to be measured in exaflops, or a million trillion calculations per second, units that are not used today by industry. The secret behind this out-of-control international race is perhaps that leadership in supercomputing means technological, and eventually economic, leadership. This talk will provide a concise yet in-depth characterization of the advances in High-Performance Computing over the past 20 years. In light of this, it will also provide some projections and future trends, and identify some of the open research issues in High-Performance Computing.
Speaker biography:
Dr. Tarek El-Ghazawi (http://hpcl.seas.gwu.edu/tarek/) is director of the GW Institute for Massively Parallel Applications and Computing Technologies (IMPACT) and the NSF Industry University Center for High-Performance Reconfigurable Computing (CHREC), and oversees the GWU program in HPC. El-Ghazawi’s research interests include high-performance computing, computer architectures, reconfigurable and embedded computing, and computer vision. He is one of the principal co-authors of the UPC parallel programming language and the first author of the UPC book from John Wiley and Sons. El-Ghazawi has published well over 200 refereed research publications, and his research has been frequently supported by Federal agencies and industry, including NSF, DARPA, DoD, and NASA. He has served in many editorial roles, including as an Associate Editor for the IEEE Transactions on Computers, and has chaired many international technical symposia. He also serves on a number of advisory boards. Professor El-Ghazawi is a Fellow of the IEEE and a Research Faculty Fellow of the IBM Center for Advanced Studies, Toronto. He is a member of the Phi Kappa Phi national honor society, an elected member of the IFIP WG10.3, and a Fulbright Senior Scholar. Professor El-Ghazawi is a recipient of the 2011 Alexander Schwarzkopf prize for technical innovation.
Cameras as Computational Devices
As electronic sensors replaced film in cameras, not much appeared to change.
However, modern digital cameras contain computers. Instead of merely
simulating film, the camera can use the sensor and optics to intelligently
capture data for more sophisticated processing -- doing things no film
camera could. This talk will introduce two very different computational
photography concepts that we've been developing. The first is a method
by which a commodity camera can be used to capture scene depth data in
a single shot. Combining a better understanding of optics with appropriate
processing produces images for "3D" viewing, allows refocus after capture,
etc. The second concept involves a completely new way of thinking about
camera sensors -- in which the sensor itself is a massively-parallel
computer constructed using millions of nanocontrollers.
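As a rough sketch of the refocus-after-capture idea, here is a generic light-field style shift-and-average scheme; it is an assumption made for illustration, not necessarily the single-shot depth-capture method developed by the speaker's group. Each sub-aperture view is shifted in proportion to its offset from the aperture center and the shifted views are averaged; the per-unit shift selects which depth ends up in focus.

```java
// Minimal illustrative sketch (not the speaker's specific method): synthetic
// refocusing by shift-and-average of light-field sub-aperture images.
public class RefocusSketch {
    /**
     * @param views views[u][v][y][x] = grayscale sub-aperture images of one scene
     * @param shift pixels of shift per unit aperture offset; selects the focal plane
     * @return refocused image, same size as each input view
     */
    static double[][] refocus(double[][][][] views, double shift) {
        int nu = views.length, nv = views[0].length;
        int h = views[0][0].length, w = views[0][0][0].length;
        double[][] out = new double[h][w];
        int cu = nu / 2, cv = nv / 2;                           // aperture center
        for (int u = 0; u < nu; u++) {
            for (int v = 0; v < nv; v++) {
                int dy = (int) Math.round((u - cu) * shift);
                int dx = (int) Math.round((v - cv) * shift);
                for (int y = 0; y < h; y++) {
                    for (int x = 0; x < w; x++) {
                        int sy = Math.min(h - 1, Math.max(0, y + dy));  // clamp at borders
                        int sx = Math.min(w - 1, Math.max(0, x + dx));
                        out[y][x] += views[u][v][sy][sx];
                    }
                }
            }
        }
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                out[y][x] /= (double) (nu * nv);                // average over all views
        return out;
    }
}
```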
Speaker biography: Upon completing his Ph.D. at Polytechnic University (now NYU-Poly) in 1986, Henry G. (Hank) Dietz joined the Computer Engineering faculty at Purdue University's School of Electrical and Computer Engineering. In 1999, he moved to the University of Kentucky, where he is a Professor of Electrical and Computer Engineering and the James F. Hardymon Chair in Networking. Although he has authored approximately 200 scholarly publications, mostly in the fields of compilers and parallel processing, his group is best known for the open source research products it has created: PCCTS/Antlr compiler tools, Linux PC cluster supercomputing, SWAR (SIMD Within a Register), FNNs (computer-evolved Flat Neighborhood Networks), MOG (MIMD On GPU), etc. Most of his work is freely distributed via Aggregate.Org. Dietz is also an active teacher, and was one of the founders of the EPICS (Engineering Projects In Community Service) program. He is a member of ACM, IEEE, and SPIE.
Speakers:
William Pugh, University of Maryland (Innovations In Teaching Software Development Skills; Research and Engineering Challenges in FindBugs)
Hamid R. Arabnia, University of Georgia (Bio-Inspired Supercomputing)
Beth Simon, University of California, San Diego (How we teach impacts student learning, performance, and persistence)
Tarek El-Ghazawi, George Washington University (Advances in High-Performance Computing: The Race to the Top)
Hank Dietz, University of Kentucky (Cameras as Computational Devices)