Department of Mathematics,
University of California San Diego

****************************

Colloquium Seminar

Dr. Robert Webber

Caltech

Randomized Matrix Decompositions for Faster Scientific Computing

Abstract:

Traditional numerical methods based on expensive matrix factorizations struggle with the scale of modern scientific applications. For example, kernel-based algorithms take a data set of size $n$, form the kernel matrix of size $n \times n$, and then perform an eigendecomposition or inversion at a cost of $O(n^3)$ operations. For data sets of size $n \geq 10^5$, kernel learning is too expensive, straining the limits of personal workstations and even dedicated computing clusters. Randomized iterative methods have emerged as a faster alternative to the classical approaches. These methods combine randomized exploration with information about which matrix structures are important, leading to significant speed gains.
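As a concrete illustration of this scaling, here is a minimal sketch (hypothetical; the Gaussian kernel, bandwidth, and problem sizes are illustrative, not taken from the talk) of the classical dense approach whose cost the abstract describes:

```python
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

# Illustrative only: a Gaussian (RBF) kernel on a small synthetic data set.
# At n >= 10^5 the n x n kernel matrix alone needs ~80 GB in double precision,
# and the O(n^3) dense eigendecomposition below becomes infeasible.
rng = np.random.default_rng(0)
n, d = 2_000, 10                          # small enough to run on a laptop
X = rng.standard_normal((n, d))

K = np.exp(-cdist(X, X, "sqeuclidean"))   # n x n kernel matrix: O(n^2) memory
evals, evecs = eigh(K)                    # dense eigendecomposition: O(n^3) time
```

Doubling $n$ roughly multiplies the factorization time by eight, which is what makes the randomized alternatives below attractive.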

In this talk, I will review recent developments concerning two randomized algorithms. The first is "randomized block Krylov iteration", which probes a large data matrix with a block of random Gaussian test vectors in order to produce a randomized principal component analysis. Remarkably, this approach works well even when the matrix of interest is not low-rank. The second algorithm is "randomly pivoted Cholesky decomposition", which iteratively samples columns from a positive semidefinite matrix using a novelty metric and reconstructs the matrix from the randomly selected columns. Ultimately, both algorithms furnish a randomized approximation of an $n \times n$ matrix with a reduced rank $k \ll n$, which enables fast inversion or singular value decomposition at a cost of $O(nk^2)$ operations. The speed-up factor from $n^3$ to $nk^2$ operations can be as large as 3 million. The newest algorithms achieve this speed-up while guaranteeing performance across a broad range of input matrices.
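A minimal sketch of randomized block Krylov iteration in the spirit described above (the function name, oversampling, and depth parameters are my own choices, and this need not be the exact variant covered in the talk): a block of Gaussian test vectors is multiplied repeatedly against the matrix, the resulting Krylov basis is orthonormalized, and a small projected SVD yields approximate principal components.

```python
import numpy as np

def randomized_block_krylov_pca(A, k, block=None, depth=3, seed=0):
    """Approximate rank-k SVD of A via a randomized block Krylov subspace.

    Builds K = [A*Om, (A A^T) A*Om, ..., (A A^T)^depth A*Om] from Gaussian
    test vectors Om, orthonormalizes it, and solves the small projected SVD.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    b = block or k + 5                      # slight oversampling over rank k
    Y = A @ rng.standard_normal((n, b))     # probe A with random test vectors
    blocks = []
    for _ in range(depth + 1):
        blocks.append(Y)
        Y = A @ (A.T @ Y)                   # next block of the Krylov basis
    Q, _ = np.linalg.qr(np.hstack(blocks))  # orthonormal basis for the subspace
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U_small)[:, :k], s[:k], Vt[:k]   # approximate top-k factors
```

The dominant cost is a handful of matrix-matrix products with $A$ rather than a full factorization, which is the source of the speed gains described above.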
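And a minimal sketch of randomly pivoted Cholesky in the same spirit (again with hypothetical naming; the novelty metric here is the standard choice of sampling each column with probability proportional to its residual diagonal entry):

```python
import numpy as np

def rp_cholesky(A, k, seed=0):
    """Rank-k approximation A ~ F @ F.T of a psd matrix A (sketch).

    At each step a pivot column is sampled with probability proportional
    to the residual diagonal -- columns the current approximation explains
    poorly ("novel" columns) are more likely to be chosen.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    F = np.zeros((n, k))
    d = np.diag(A).astype(float)            # residual diagonal of A
    for t in range(k):
        s = rng.choice(n, p=d / d.sum())    # sample a pivot by novelty
        g = A[:, s] - F[:, :t] @ F[s, :t]   # residual of the pivot column
        F[:, t] = g / np.sqrt(g[s])         # Cholesky-style rank-1 update
        d = np.maximum(d - F[:, t] ** 2, 0) # update residual diagonal
    return F                                # A ~ F @ F.T
```

With $F$ in hand, inversion or singular value decomposition of the rank-$k$ approximation costs $O(nk^2)$ operations, matching the complexity quoted above.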

Host: Rayan Saab

January 18, 2024

4:00 PM

APM 6402 (Halkin Room)

****************************