Ian ANDERSON
Abstract In this talk I shall illustrate, by means of
two examples, an ongoing program at Utah State University to
create Maple software and Java applets for implementing the
solution to various classification problems in mathematics.
The first classification problem to be discussed is Petrov's
remarkable classification of all four-dimensional local group
actions which admit a Lorentz-invariant metric. (This is
different from Petrov's classification of spacetimes according to
the algebraic properties of the Weyl tensor.) Petrov's
classification contains over 100 distinct local group actions
and associated invariant metrics and thus it is a
non-trivial task to identify a given local group action or a
given invariant metric in Petrov's list. I shall demonstrate
a web-based Java applet which allows one to identify any
metric with symmetry in Petrov's list and also to search for
group actions or invariant metrics in Petrov's list with
prescribed properties. The results of some preliminary
efforts to automate the verification of Petrov's
classification will be shown.
The second classification problem will focus on the
classification of low dimensional Lie algebras. There are many
such tables of Lie algebras in the literature but once again
the problem remains of explicitly identifying a given Lie
algebra in these tables, a task which is further complicated by
the existence of families of Lie algebras containing arbitrary
parameters.
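As a toy illustration of the kind of invariants that can be used to
locate a given Lie algebra in such a table, the sketch below computes
the dimensions of the derived algebra and of the centre directly from
structure constants (the Heisenberg algebra is used as a worked
example; this is illustrative only and is not part of the Vessiot
package described below).

```python
import numpy as np

def derived_algebra_dim(c):
    """Dimension of [g, g], where c[i, j, k] are the structure constants
    in [e_i, e_j] = sum_k c[i, j, k] e_k (indices are 0-based)."""
    n = c.shape[0]
    brackets = [c[i, j, :] for i in range(n) for j in range(i + 1, n)]
    return int(np.linalg.matrix_rank(np.array(brackets))) if brackets else 0

def center_dim(c):
    """Dimension of the centre: all x with [x, e_j] = 0 for every j."""
    n = c.shape[0]
    # Each pair (j, k) gives one linear condition sum_i x_i c[i, j, k] = 0.
    M = np.array([[c[i, j, k] for i in range(n)]
                  for j in range(n) for k in range(n)])
    return n - int(np.linalg.matrix_rank(M))

# Heisenberg algebra: [e_1, e_2] = e_3, all other brackets zero
# (written with 0-based indices below).
c = np.zeros((3, 3, 3))
c[0, 1, 2], c[1, 0, 2] = 1.0, -1.0
print(derived_algebra_dim(c), center_dim(c))   # both equal 1
```

Crude invariants like these narrow the search to a few candidates in a
classification table; distinguishing members of parametric families
requires finer tools.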
The programs which support the implementation of these
classification problems are part of Vessiot, an integrated
suite of Maple packages for computations in differential
geometry, Lie algebras and the variational calculus on jet
spaces. The talk will conclude with a quick overview and
demonstration of these programs.
Albert COHEN
Abstract In image processing applications such as
compression and denoising, one is often led to select a
representation in which the images of interest are as sparse as
possible. As we shall explain, wavelet bases provide an optimal
answer to this problem when the images are modeled as functions
of bounded variation. The main limitation of this approach is
that the geometric smoothness of edges is neither taken into
account in the BV model nor exploited in the wavelet method. This
state of affairs has opened new lines of research aimed at finding
sparse image representations with anisotropic features, in order
to capture the geometric simplicity of edges with fewer
parameters. We shall present a multiresolution representation of
this new type which is based on nonlinear data processing, using
prediction operators inspired by the ENO interpolation introduced
by Harten and Osher in the context of shock computations. We
shall discuss the new theoretical difficulties which occur when
analyzing these methods and present their practical performance
on real and synthetic images.
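As a minimal illustration of the sparsity that wavelet bases provide
for signals with isolated edges (a one-dimensional orthonormal Haar
transform; this toy sketch is not the anisotropic, ENO-based
representation discussed in the talk):

```python
import numpy as np

def haar_forward(x):
    """Multilevel orthonormal Haar transform of a length-2^J signal."""
    coeffs, a = [], x.astype(float)
    while len(a) > 1:
        coeffs.append((a[0::2] - a[1::2]) / np.sqrt(2))   # detail coefficients
        a = (a[0::2] + a[1::2]) / np.sqrt(2)              # coarser approximation
    coeffs.append(a)                                      # final approximation
    return coeffs

def haar_inverse(coeffs):
    a = coeffs[-1]
    for det in reversed(coeffs[:-1]):
        out = np.empty(2 * len(a))
        out[0::2] = (a + det) / np.sqrt(2)
        out[1::2] = (a - det) / np.sqrt(2)
        a = out
    return a

# Piecewise constant signal with a single jump ("edge"): only the Haar
# basis functions whose support straddles the jump have nonzero detail
# coefficients, so at most about log2(N) of the N coefficients survive.
x = np.concatenate([np.zeros(300), np.ones(724)])
coeffs = haar_forward(x)
kept = sum(int(np.count_nonzero(np.abs(c) > 1e-10)) for c in coeffs)
err = np.max(np.abs(haar_inverse(coeffs) - x))
print(f"nonzero coefficients: {kept} of {x.size}, reconstruction error: {err:.1e}")
```

For images with smooth edges, the point of the anisotropic
constructions is to achieve comparable sparsity in two dimensions,
where isotropic wavelets need many coefficients along each edge.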
Tony DeROSE
Abstract Film making is undergoing a digital revolution
brought on by advances in areas such as computer technology,
computational physics and computer graphics. This talk will
provide a behind-the-scenes look at how fully digital films -
such as Pixar's "Toy Story 2" and "Monsters, Inc." - are made,
with particular emphasis on the role that geometry plays in the
revolution. I'll use our Academy Award-winning short film "Geri's
Game" as a running example, and I'll highlight the two main
technical innovations behind it: the use of subdivision surfaces
for geometric modeling, and the use of simulated cloth dynamics.
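As a toy, one-dimension-lower illustration of the subdivision idea
(Chaikin's corner-cutting scheme for closed curves; the production
tools described in the talk use subdivision surfaces, which refine
meshes rather than polygons):

```python
import numpy as np

def chaikin_step(points):
    """One round of Chaikin corner cutting on a closed polygon.
    Each edge P_i -> P_{i+1} is replaced by the two points
    3/4*P_i + 1/4*P_{i+1} and 1/4*P_i + 3/4*P_{i+1}."""
    nxt = np.roll(points, -1, axis=0)
    q = 0.75 * points + 0.25 * nxt
    r = 0.25 * points + 0.75 * nxt
    return np.stack([q, r], axis=1).reshape(-1, points.shape[1])

# Starting from a square, repeated subdivision converges to a smooth
# closed curve (the uniform quadratic B-spline defined by the polygon).
poly = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
for _ in range(4):
    poly = chaikin_step(poly)
print(poly.shape)   # (64, 2): 4 control points refined four times
```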
Ron DeVORE
Abstract I am frequently asked this question when talking
about FoCM. While I cannot answer it precisely, I can give some
examples of how mathematical reasoning can properly frame
computational problems and lead to a new and better understanding of
computational issues. I will give two examples of this: one in
data processing and the other in PDE solvers.
Herbert EDELSBRUNNER
Abstract We consider piecewise linear functions over
triangulated manifolds and algorithms that apply the theory of
smooth Morse functions to such data. Specifically, we address the
following questions:
All questions are motivated by concrete applications in scientific
visualization and the reconstruction of structure from density
data. The goal of this research is to develop the methods
necessary to give mathematically well-founded solutions to the
application problems.
Michael FREEDMAN
Abstract Anyons are particle-like excitations of two-dimensional
quantum systems which may possess exotic "statistics"
under the action of the braid group. It is known that universal
quantum computation can be founded on such systems. It is not
known if a suitable candidate can actually be manufactured in the
real world, although there is evidence (in both directions) on
this question. I will discuss a very simple family of anyonic
models and their relation with ongoing experiments.
Wolfgang HACKBUSCH
Abstract The hierarchical matrix technique applies to fully
populated matrices such as those arising from the discretisation
of non-local operators (as in the boundary element method) or as
the inverse of a sparse finite element matrix, provided that the
underlying problems are elliptic. Using the format of the
so-called hierarchical matrices (or H-matrices for short), we are
able to approximate the dense matrix by
a data-sparse one so that the required storage is almost linear
in the dimension, i.e., linear up to logarithmic factors.
Moreover, the essential operations for these matrices
(matrix-vector and matrix-matrix multiplication, addition and
inversion) can approximately be performed again in almost linear
time. This fact opens up new applications: e.g., the computation
of matrix functions like the exponential, or the solution of
matrix equations like the Riccati equation, can be performed with
almost optimal complexity.
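The data-sparsity exploited by the H-matrix format can be seen on a
single admissible off-diagonal block (a toy sketch: a 1/|x - y| kernel
with well-separated index sets, compressed by a truncated SVD; the
actual technique organizes many such low-rank blocks in a hierarchical
partition instead of using a global SVD):

```python
import numpy as np

# Discretized non-local kernel 1/|x - y| on two well-separated clusters.
x = np.linspace(0.0, 1.0, 200)
y = np.linspace(2.0, 3.0, 200)
block = 1.0 / np.abs(x[:, None] - y[None, :])

# Truncated SVD: keep only the k leading singular triplets.
U, s, Vt = np.linalg.svd(block)
k = 8
approx = (U[:, :k] * s[:k]) @ Vt[:k, :]

rel_err = np.linalg.norm(block - approx) / np.linalg.norm(block)
full_storage = block.size                    # 200 * 200 entries
lowrank_storage = k * sum(block.shape)       # k * (m + n) entries
print(f"rank {k}: relative error {rel_err:.1e}, "
      f"storage {lowrank_storage} vs {full_storage}")
```

The singular values of such a block decay rapidly because the kernel is
smooth on well-separated clusters, which is why a small rank already
gives high accuracy.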
Stefan HEINRICH
Abstract A challenging question in the overlap of
computer science, mathematics, and physics, is the exploration of
potential capabilities of quantum computers. Milestones were the
algorithm of Shor (1994), who showed that quantum computers could
factor large integers efficiently (which is widely believed to be
infeasible on classical computers) and the quantum search
algorithm of Grover (1996), which provides a quadratic speedup
over deterministic and randomized classical algorithms for
searching a database.
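As a small classical simulation of the quadratic speedup just mentioned
(Grover's amplitude amplification on an unstructured search space of
size N; about (pi/4)*sqrt(N) iterations drive the success probability
close to one):

```python
import numpy as np

N, marked = 1024, 137                       # search space size, marked item
state = np.full(N, 1.0 / np.sqrt(N))        # uniform superposition

iterations = int(round(np.pi / 4 * np.sqrt(N)))   # ~25 for N = 1024
for _ in range(iterations):
    state[marked] *= -1.0                   # oracle: flip the marked amplitude
    state = 2.0 * state.mean() - state      # diffusion: inversion about the mean

print(iterations, state[marked] ** 2)       # success probability close to 1
```

A classical search needs on the order of N queries, while the number of
Grover iterations grows only like sqrt(N).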
So far, research has mainly concentrated on discrete problems
like the above and many others one encounters in computer
science. Much less was known about computational problems of
analysis, including such a prominent example as high dimensional
numerical integration, which is well-studied in the classical
setting. We seek to understand how efficiently this and related
problems can be solved in the quantum model of computation (that
is, on a quantum computer) and how the outcome compares to the
efficiency of deterministic or randomized algorithms on a
classical (i.e. non-quantum) computer.
Complexity of numerical problems is studied in the general
framework of information-based complexity theory. By now for many
important problems of numerical analysis, including high
dimensional integration in various function spaces, matching
upper and lower complexity bounds are known in both the classical
deterministic and the randomized setting. It is a
challenging task to study these problems in the setting of
quantum computation. Once such results are obtained, one can
compare them to the deterministic and randomized classical ones
to understand the possible speedups by quantum algorithms.
In the talk we first give a short introduction to the basic ideas
of quantum computation and present the quantum setting of
information-based complexity theory, a framework suitable to
handle the complexity of numerical problems in the quantum model
of computation. Then we discuss recent results on quantum
algorithms, lower bounds, and complexity for various integration
problems, such as approximating the mean of p-summable sequences
and the integral of functions from L_p and Sobolev spaces. It
turns out that in many cases there is an exponential speed-up of
quantum algorithms over classical deterministic ones, and a
quadratic speed-up of quantum algorithms over classical
randomized ones.
Pascal KOIRAN
Abstract This talk will be devoted to definability
problems in first-order logic. For instance, is there a
first-order sentence F in the language of fields expanded
by a unary function symbol f which satisfies the following
two properties:
Note that the same problem for the field of real numbers has
an obvious solution. We shall see that questions of this kind
have intriguing connections with the model theory of complex
analytic functions.
Teresa KRICK
Abstract Solving polynomial equation systems symbolically when
the polynomials are represented in the usual dense encoding turns
out to be very inefficient: the sizes of the systems one can deal
with do not meet realistic needs. The straight-line program
encoding appeared in this setting a decade ago as an alternative
that singles out families of problems which behave better with
respect to complexity.
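A toy example of the size gap between the two encodings (illustrative
only): a straight-line program with n squaring instructions represents
the polynomial x^(2^n), whose dense encoding would require 2^n + 1
coefficients.

```python
def eval_slp(x, n):
    """Straight-line program for x^(2^n): n repeated-squaring instructions."""
    v = x
    for _ in range(n):
        v = v * v
    return v

n = 20
# The straight-line program has n = 20 instructions, while the dense
# representation of the same polynomial lists 2^20 + 1 = 1048577 coefficients.
print(eval_slp(3, n) == 3 ** (2 ** n))   # True
```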
The talk will present an overview of the most recent complexity
results for different polynomial problems when the input is
encoded by straight-line programs. It will also try to show how
surprising mathematical by-products, such as new mathematical
invariants and results, can appear as a consequence of the search
for good algorithms.
Dan LOZIER
Abstract Abramowitz and Stegun's Handbook of Mathematical
Functions, published in 1964 by the National Bureau of Standards,
is familiar to most mathematicians. It is even more familiar to
scientists who use special functions in their daily work. But
developments in mathematics and computing since 1964 make the old
handbook less and less useful. To meet the need for a modern
reference, NIST is developing a new handbook that will be
published in 2003. A Web site covering much the same territory
will also be released. This work is supported by the National
Science Foundation. Details of the project and its current
status will be described.
Volker MEHRMANN
Abstract Large scale polynomial eigenvalue problems arise
in many different applications. We are concerned with special
classes of polynomials, where the coefficients essentially
(except for small perturbations) alternate between symmetric and
skew-symmetric matrices. These problems arise in several classes
of applications, e.g., the computation of 3D vertex singularities
of anisotropic elastic fields, the solution of optimal control
problems for second or higher order ordinary differential
equations, problems from damped gyroscopic systems as well as
Maxwell eigenvalue problems for optical waveguide design.
We discuss shift-and-invert Lanczos as well as implicitly
restarted Arnoldi methods that are specifically adapted to the
symmetry structure of the polynomials. The new approaches extend
recent work of Mehrmann and Watkins for quadratic eigenvalue
problems with Hamiltonian eigensymmetry.
This is partly joint work with Thomas Apel, Tsung-Min Hwang,
Wen-Wei Lin, and David Watkins.
Andrew ODLYZKO
Abstract The Riemann Hypothesis is now the most famous
unsolved problem in mathematics. It has stimulated extensive
computations, including development of novel algorithms that are
similar to those used in astrophysical simulations and in signal
processing. These computations have so far confirmed the
predictions of the Riemann Hypothesis, as well as of other, even
more speculative conjectures. However, there is still a serious
question of what these results mean.
Steve SMALE
Abstract The idea is to discuss some joint work with
Felipe Cucker, emphasizing mathematical foundations of learning,
with motivation from centuries of work on laws of physics to
present-day problems of recognition of words and meanings. This
is learning by induction and learning from examples.
Edriss TITI
Abstract In this talk we will survey old results about
Approximate Inertial Manifolds and their implementation in
nonlinear Galerkin (NLG) methods. We will stress the benefits
and shortcomings of the NLG methods and present an alternative
postprocessing procedure for the Galerkin method. This
postprocessing Galerkin method, which is much cheaper to
implement computationally than the NLG method, possesses the same
rate of convergence (accuracy) as the simplest version of the NLG
method, which in turn is more accurate than the standard Galerkin
method. Our results are valid in the context of spectral and finite
element Galerkin methods and for many nonlinear parabolic
equations including the Navier-Stokes equations. We justify the
derivation of the postprocessing algorithm from a classical
truncation analysis point of view. More specifically, we will
show that, to leading order, the correct approximative scheme is
actually the postprocessed Galerkin method, and not the standard
Galerkin method as is commonly believed.
We will also present a computational study to support our
analytical results. Furthermore, we will present numerical
criteria for detecting the right dynamics for systems like the
Navier-Stokes equations.
This talk is based on joint work with Bosco Garcia-Archilla, Len
Margolin, Julia Novo and Shannon Wynne.
Michael TODD
Abstract Interior-point methods are both theoretically and
practically efficient algorithms for solving linear programming
problems and certain convex extensions. However, they seem to be
less flexible than the well-known simplex method: in particular,
they do not deal as gracefully with infeasible or unbounded (dual
infeasible) problems. One elegant way to produce either an
optimal solution or a certificate of infeasibility is to embed
the original problem in a larger homogeneous self-dual problem,
but this seems less efficient computationally. Most
implementations use so-called infeasible-interior-point methods,
which focus exclusively on attaining feasibility and
optimality. However, such methods are surprisingly successful in
detecting infeasibility. We show that, on infeasible problems,
they can be viewed as implicitly searching for an infeasibility
certificate.
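For concreteness, one standard form of the homogeneous self-dual
embedding of the primal-dual pair min{c^T x : Ax = b, x >= 0} and
max{b^T y : A^T y + s = c, s >= 0} reads as follows (a sketch of the
usual formulation; details vary between variants):

```latex
\begin{aligned}
  A x - b \tau &= 0,\\
  -A^{\mathsf T} y + c \tau - s &= 0,\\
  b^{\mathsf T} y - c^{\mathsf T} x - \kappa &= 0,\\
  x \ge 0, \quad s \ge 0, \quad \tau \ge 0, \quad \kappa \ge 0,
  \quad y \text{ free}.
\end{aligned}
```

At a strictly complementary solution, tau > 0 recovers an optimal pair
(x/tau, y/tau, s/tau), while kappa > 0 certifies that the original
problem is primal or dual infeasible.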
Vladimir VAPNIK
Abstract In the talk I would like to show that:
Grace WAHBA
Abstract We describe two modern methods for
classification, penalized likelihood methods for `soft'
classification, and support vector machines (SVM's) for `hard'
classification. A training set is given, and a classification
algorithm for future observations is built from it. The
k-category penalized likelihood method returns a vector of
probabilities (p_1(x), ..., p_k(x)) for each category, where x is
the attribute vector that will be observed. The multicategory
support vector machine (MSVM) returns a classifier vector
(f_1(x), ..., f_k(x)) satisfying sum_l f_l(x) = 0, where
max_l f_l(x) identifies the category. The two-category
SVM is very well known, while the multicategory SVM described
here is new. These procedures can be found as the solution to
optimization problems in a reproducing kernel Hilbert space
(RKHS). Numerical methods for large data sets and methods for
choosing the tuning parameter(s) will be noted. An example
estimating the probabilities of mortality from several causes or
survival, drawn from a demographic study, will be given, along
with an example classifying gene expression data as one of
several types of disease or `none of the above'.
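A minimal sketch of the MSVM decision rule described above (the
decision values below are hypothetical; computing them requires solving
the RKHS optimization problem, which is not shown here):

```python
import numpy as np

def msvm_predict(f):
    """Classify from an MSVM decision vector (f_1(x), ..., f_k(x)):
    the components sum to zero and the largest one names the category."""
    f = np.asarray(f, dtype=float)
    assert abs(f.sum()) < 1e-8, "MSVM outputs satisfy sum_l f_l(x) = 0"
    return int(np.argmax(f))

# Hypothetical decision values for a 4-category problem.
print(msvm_predict([0.9, -0.4, -0.3, -0.2]))   # predicts category 0
```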