Title: Random Walks
15:10 Fri 12 October, 2018 :: Napier 208 :: A/Prof Kais Hamza :: Monash University
A random walk is arguably the most basic stochastic process one can define. It is also among the most intuitive objects in the theory of probability and stochastic processes. For these and other reasons, it is one of the most studied families of processes, finding applications across science, technology and engineering. In this talk, I will start by recalling some of the classical results for random walks and then discuss some of my own recent explorations in this area of research, which has remained relevant for decades.
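As a concrete illustration of the object the abstract describes (not of any specific result from the talk), here is a minimal simulation of the simple symmetric random walk on the integers; the function name and parameters are illustrative.

```python
import random

def simple_random_walk(n_steps, seed=None):
    """Simulate a simple symmetric random walk on the integers:
    at each step, move +1 or -1 with equal probability."""
    rng = random.Random(seed)
    position = 0
    path = [0]
    for _ in range(n_steps):
        position += rng.choice((-1, 1))
        path.append(position)
    return path

# One sample path of length 1000, starting from the origin
path = simple_random_walk(1000, seed=42)
returns_to_origin = sum(1 for p in path[1:] if p == 0)
```

Counting returns to the origin, as above, connects to one of the classical results the abstract alludes to: the one-dimensional walk is recurrent, so it returns to 0 infinitely often with probability one.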
Title: Bayesian Synthetic Likelihood
15:10 Fri 26 October, 2018 :: Napier 208 :: A/Prof Chris Drovandi :: Queensland University of Technology
Complex stochastic processes are of interest in many applied disciplines. However, the likelihood function associated with such models is often computationally intractable, prohibiting standard statistical inference frameworks for estimating model parameters based on data. Currently, the most popular simulation-based parameter estimation method is approximate Bayesian computation (ABC). Despite the widespread applicability and success of ABC, it has some limitations. This talk will describe an alternative approach, called Bayesian synthetic likelihood (BSL), which overcomes some limitations of ABC and can be much more effective in certain classes of applications. The talk will also describe various extensions to the standard BSL approach. This project has been a joint effort with several academic collaborators, post-docs and PhD students.
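The core idea behind synthetic likelihood is simple enough to sketch: simulate the model many times at a candidate parameter, fit a normal distribution to the simulated summary statistics, and evaluate the observed summary under that fitted normal. The sketch below is a minimal univariate version with illustrative names and a toy model; practical BSL uses a multivariate normal over several summaries, embedded in an MCMC sampler.

```python
import math
import random
import statistics

def synthetic_loglik(theta, s_obs, simulate, n_sims=200, seed=0):
    """Estimate the synthetic log-likelihood of theta for a single
    scalar summary statistic: simulate n_sims summaries from the model,
    fit a normal by moment matching, and evaluate the observed summary
    under that fitted normal."""
    rng = random.Random(seed)
    sims = [simulate(theta, rng) for _ in range(n_sims)]
    mu = statistics.fmean(sims)
    sigma = statistics.stdev(sims)
    # Gaussian log-density of the observed summary under N(mu, sigma^2)
    return (-0.5 * math.log(2 * math.pi * sigma ** 2)
            - (s_obs - mu) ** 2 / (2 * sigma ** 2))

# Toy "intractable" model: the summary is the mean of 50 N(theta, 1) draws
def simulate(theta, rng):
    return statistics.fmean(rng.gauss(theta, 1.0) for _ in range(50))
```

Parameter values whose simulated summaries resemble the observed one receive a higher synthetic log-likelihood, which is what lets a standard Bayesian sampler run on top of this estimate even when the true likelihood is intractable.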
Title: Interactive theorem proving for mathematicians
15:10 Fri 5 October, 2018 :: Napier 208 :: A/Prof Scott Morrison :: Australian National University
Mathematicians use computers to write their proofs (LaTeX) and to do their calculations (Sage, Mathematica, Maple, Matlab, etc., as well as custom code for simulations or searches). However, today we rarely use computers to help us construct and understand proofs.
There is a long tradition in computer science of interactive and automatic theorem proving; today in particular, these are important tools in engineering correct software, as well as in optimisation and compilation. There have been some notable examples of formalisation of modern mathematics (e.g. the odd order theorem, the Kepler conjecture, and the four-colour theorem). Even in these cases, huge engineering efforts were required to translate the mathematics into a form a computer could understand. Moreover, in most areas of research there is a huge gap between the interests of human mathematicians and the abilities of computer provers.
Nevertheless, I think it’s time for mathematicians to start getting interested in interactive theorem provers! It’s now possible to write proofs, and write tools that help write proofs, in languages which are expressive enough to encompass most of modern mathematics, and ergonomic enough to use for general purpose programming.
I’ll give an informal introduction to dependent type theory (the logical foundation of many modern theorem provers), some examples of doing mathematics in such a system, and my experiences working with mathematics students in these systems.
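To give a flavour of what "doing mathematics in such a system" looks like, here is a toy proof in Lean 4 (one of the modern provers built on dependent type theory); this example is illustrative and not drawn from the talk, and it uses only the core lemma `Nat.mul_add`.

```lean
-- Toy theorem: the sum of two even natural numbers is even.
-- Evenness is stated explicitly as "equal to twice something".
theorem even_add_even (m n : Nat) :
    (∃ a, m = 2 * a) → (∃ b, n = 2 * b) → ∃ c, m + n = 2 * c
  | ⟨a, ha⟩, ⟨b, hb⟩ =>
    -- The witness is a + b; rewriting with the hypotheses and
    -- distributivity (2 * (a + b) = 2 * a + 2 * b) closes the goal.
    ⟨a + b, by rw [ha, hb, Nat.mul_add]⟩
```

Each step here is checked by the system: the pattern match destructs the existential hypotheses, and the `rw` tactic fails to compile unless the rewrites genuinely produce a true equation.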
Title: Mathematical modelling of the emergence and spread of antimalarial drug resistance
15:10 Fri 14 September, 2018 :: Napier 208 :: Dr Jennifer Flegg :: University of Melbourne
Malaria parasites have repeatedly evolved resistance to antimalarial drugs, thwarting efforts to eliminate the disease and contributing to an increase in mortality. In this talk, I will introduce several statistical and mathematical models for monitoring the emergence and spread of antimalarial drug resistance. For example, results will be presented from Bayesian geostatistical models that have quantified the space-time trends in drug resistance in Africa and Southeast Asia. I will discuss how the results of these models have been used to update public health policy.
Title: Topological Data Analysis
15:10 Fri 31 August, 2018 :: Napier 208 :: Dr Vanessa Robins :: Australian National University
Topological Data Analysis has grown out of work focussed on deriving qualitative and yet quantifiable information about the shape of data. The underlying assumption is that knowledge of shape – the way the data are distributed – permits high-level reasoning and modelling of the processes that created this data. The 0-th order aspect of shape is the number of pieces: “connected components” to a topologist; “clustering” to a statistician. Higher-order topological aspects of shape are holes, quantified as “non-bounding cycles” in homology theory. These signal the existence of some type of constraint on the data-generating process.
Homology lends itself naturally to computer implementation, but its naive application is not robust to noise. This inspired the development of persistent homology: an algebraic topological tool that measures changes in the topology of a growing sequence of spaces (a filtration). Persistent homology provides invariants, called barcodes or persistence diagrams, which are sets of intervals recording the birth and death parameter values of each homology class in the filtration. It captures information about the shape of data over a range of length scales, and enables the identification of “noisy” topological structure.
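The birth-death bookkeeping of a barcode can be sketched concretely in the simplest case: 0-dimensional persistence (connected components) of a graph filtration, computed with a union-find structure. This is an illustrative sketch, not the general persistence algorithm; all names are illustrative.

```python
def zeroth_persistence(n_vertices, edges):
    """Compute 0-dimensional persistence intervals for a filtration in
    which all vertices are born at time 0 and edges enter at given times.

    `edges` is a list of (time, u, v) triples. Returns (birth, death)
    intervals; components that never die get death = inf. Since every
    component is born at time 0 here, any merge simply kills one
    interval born at 0 (the elder rule is trivially satisfied).
    """
    parent = list(range(n_vertices))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    intervals = []
    for t, u, v in sorted(edges):          # process edges in filtration order
        ru, rv = find(u), find(v)
        if ru != rv:
            # Two components merge: one class dies at time t
            intervals.append((0.0, t))
            parent[ru] = rv
    # Each surviving component contributes an essential (never-dying) class
    roots = {find(x) for x in range(n_vertices)}
    intervals.extend((0.0, float("inf")) for _ in roots)
    return intervals

# A path on 4 vertices, with edges appearing at times 1, 2, 3
pairs = zeroth_persistence(4, [(1.0, 0, 1), (2.0, 1, 2), (3.0, 2, 3)])
```

In the example, four components are born at time 0; three die as the edges arrive, and one interval extends to infinity, exactly the kind of multi-scale record a barcode provides.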
Statistical analysis of persistent homology has been challenging because the raw information (the persistence diagrams) is provided as sets of intervals rather than functions. Various approaches to converting persistence diagrams to functional forms have been developed recently, and have found application to data ranging from the distribution of galaxies to porous materials and cancer detection.
Title: Tales of Multiple Regression: Informative missingness, Recommender Systems, and R2-D2
15:10 Fri 17 August, 2018 :: Napier 208 :: Prof Howard Bondell :: University of Melbourne
In this talk, we briefly discuss two projects tangentially related under the umbrella of high-dimensional regression. The first part of the talk investigates informative missingness in the framework of recommender systems. In this setting, we envision a potential rating for every object-user pair. The goal of a recommender system is to predict the unobserved ratings in order to recommend an object that the user is likely to rate highly. A typically overlooked point is that the observed combinations are not missing at random. For example, in movie ratings, a relationship between the user ratings and their viewing history is expected, as human nature dictates that users seek out movies they anticipate enjoying. We model this informative missingness and place the recommender system in a shared-variable regression framework, which can improve prediction quality.
The second part of the talk deals with a new class of prior distributions for shrinkage regularization in sparse linear regression, particularly in the high-dimensional case. Instead of placing a prior on the coefficients themselves, we place a prior on the regression R-squared. This is then distributed to the coefficients by decomposing it via a Dirichlet distribution. We call the new prior R2-D2, in light of its R-Squared Dirichlet Decomposition. Compared to existing shrinkage priors, we show that the R2-D2 prior can simultaneously achieve both high prior concentration at zero and heavier tails. These two properties combine to provide a higher degree of shrinkage on the irrelevant coefficients, along with less bias in the estimation of the larger signals.
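The construction named in the abstract – a Beta prior on R-squared, decomposed across coefficients via a Dirichlet – can be sketched as a prior simulation. The version below is a deliberate simplification (it draws each coefficient from a normal with the allocated variance, rather than the exact scale mixture of the published prior), and all default hyperparameters are illustrative.

```python
import random

def sample_r2d2_coefficients(p, a=1.0, b=1.0, alpha=0.5, sigma2=1.0, rng=None):
    """Draw p regression coefficients from a simplified R2-D2-style prior:
    put a Beta(a, b) prior on R-squared, convert it to a total prior
    signal-to-noise ratio W = R2 / (1 - R2), split W across coefficients
    with Dirichlet(alpha, ..., alpha) weights, and draw each beta_j from
    a normal with its allocated share of the variance."""
    rng = rng or random.Random()
    r2 = rng.betavariate(a, b)
    w = r2 / (1.0 - r2)                       # total prior signal-to-noise
    # Dirichlet(alpha, ..., alpha) via normalised Gamma draws
    gammas = [rng.gammavariate(alpha, 1.0) for _ in range(p)]
    total = sum(gammas)
    phi = [g / total for g in gammas]
    return [rng.gauss(0.0, (phi_j * w * sigma2) ** 0.5) for phi_j in phi]

betas = sample_r2d2_coefficients(5, rng=random.Random(1))
```

Because the Dirichlet weights must sum to one, a small alpha concentrates the shared signal-to-noise budget on a few coefficients and starves the rest – the mechanism behind the simultaneous concentration at zero and heavy tails described in the abstract.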