About Me.
I am an assistant professor in the Department of Management Science and Engineering and the Department of Computer Science at Stanford University. Aaron Sidford is an Assistant Professor of Management Science and Engineering at Stanford University, where he also has a courtesy appointment in Computer Science and an affiliation with the Institute for Computational and Mathematical Engineering (ICME). He received his PhD from the Electrical Engineering and Computer Science Department at the Massachusetts Institute of Technology, where he was advised by Jonathan Kelner.
Email: [name]@stanford.edu. Assistant Professor of Management Science and Engineering and of Computer Science. Full CV is available here. I often do not respond to emails about applications.
My research focuses on the design of efficient algorithms based on graph theory, convex optimization, and high dimensional geometry (CV). I am generally interested in algorithms and learning theory, particularly developing algorithms for machine learning with provable guarantees. Many of these algorithms are iterative and solve a sequence of smaller subproblems, whose solution can be maintained via the aforementioned dynamic algorithms. You interact with data structures even more often than with algorithms (think Google, your mail server, and even your network routers).
Group Resources. I have the great privilege and good fortune of advising the following PhD students. I have also had the great privilege and good fortune of advising the following PhD students who have now graduated: Kirankumar Shiragur (co-advised with Moses Charikar) - PhD 2022, AmirMahdi Ahmadinejad (co-advised with Amin Saberi) - PhD 2020, and Yair Carmon (co-advised with John Duchi) - PhD 2020. We organize regular talks and if you are interested and are Stanford affiliated, feel free to reach out (from a Stanford email).
Recent talks: Learning and Games Program, Simons Institute, 2022; Young Researcher Workshop, Cornell ORIE, Sept. 2021; ACO Student Seminar, Georgia Tech, Sept. 2021; NeurIPS Spotlight presentation, Dec. 2019.
Here are some lecture notes that I have written over the years.
Congratulations to Prof. Aaron Sidford for receiving the Best Paper Award at the 2022 Conference on Learning Theory (COLT 2022)! The paper, Efficient Convex Optimization Requires Superlinear Memory, was co-authored with Stanford professor Gregory Valiant as well as current Stanford student Annie Marsden and alumnus Vatsal Sharan.
Publications and Preprints.
Michael B. Cohen, Yin Tat Lee, Gary L. Miller, Jakub Pachocki, and Aaron Sidford.
Michael B. Cohen, Jonathan Kelner, Rasmus Kyng, John Peebles, Richard Peng, Anup B. Rao, Aaron Sidford. Abstract: We show how to solve directed Laplacian systems in nearly-linear time.
Cameron Musco, Praneeth Netrapalli, Aaron Sidford, Shashanka Ubaru, David P. Woodruff. Innovations in Theoretical Computer Science (ITCS) 2018. Selected for oral presentation.
Janardhan Kulkarni, Yang P. Liu, Ashwin Sah, Mehtaab Sawhney, Jakub Tarnawski. Fully Dynamic Electrical Flows: Sparse Maxflow Faster Than Goldberg-Rao. FOCS 2021.
Efficient Convex Optimization Requires Superlinear Memory.
Faster energy maximization for faster maximum flow.
Oral Presentation for Misspecification in Prediction Problems and Robustness via Improper Learning.
"Collection of new upper and lower sample complexity bounds for solving average-reward MDPs." CoRR abs/2101.05719 (2021).
"A new Catalyst framework with relaxed error condition for faster finite-sum and minimax solvers."
"An attempt to make Monteiro-Svaiter acceleration practical: no binary search and no need to know smoothness parameter!"
"Our method improves upon the convergence rate of previous state-of-the-art linear programming methods."
"Streaming matching (and optimal transport) in \(\tilde{O}(1/\epsilon)\) passes and \(O(n)\) space."
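As a point of reference for the streaming matching blurb above, the sketch below is only the classical one-pass greedy baseline that such results improve on (a maximal matching in a single pass over the edge stream using O(n) space, which is a 1/2-approximation to maximum matching); the function name and toy edge stream are illustrative, not from the papers.

```python
def streaming_maximal_matching(edge_stream):
    """One-pass greedy matching: keep an edge iff both endpoints are still free.
    Uses O(n) space and returns a maximal matching, which has at least half
    as many edges as a maximum matching."""
    matched = set()
    matching = []
    for u, v in edge_stream:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.add(u)
            matched.add(v)
    return matching

if __name__ == "__main__":
    stream = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)]
    print(streaming_maximal_matching(stream))  # e.g. [(0, 1), (2, 3)]
```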
Fall'22 8803 - Dynamic Algebraic Algorithms; a small tool to obtain upper bounds of such algebraic algorithms.
Conference on Learning Theory (COLT), 2021. Towards Tight Bounds on the Sample Complexity of Average-reward MDPs. International Conference on Machine Learning (ICML), 2020. Principal Component Projection and Regression in Nearly Linear Time through Asymmetric SVRG. sidford@stanford.edu.
Roy Frostig, Sida Wang, Percy Liang, Chris Manning.
Optimization Algorithms: I used variants of these notes to accompany the courses Introduction to Optimization Theory and Optimization Algorithms which I created.
"How many \(\epsilon\)-length segments do you need to look at for finding an \(\epsilon\)-optimal minimizer of a convex function on a line?"
In Sidford's dissertation, Iterative Methods, Combinatorial Optimization, and Linear Programming Beyond the Universal Barrier.
We provide a generic technique for constructing families of submodular functions to obtain lower bounds for submodular function minimization (SFM). [pdf] [slides]
Efficient accelerated coordinate descent methods and faster algorithms for solving linear systems, by Aaron Sidford. [pdf]
In Symposium on Discrete Algorithms (SODA 2018) (arXiv): Variance Reduced Value Iteration and Faster Algorithms for Solving Markov Decision Processes; Efficient \(\tilde{O}(n/\epsilon)\) Spectral Sketches for the Laplacian and its Pseudoinverse; Stability of the Lanczos Method for Matrix Function Approximation.
Faculty Spotlight: Aaron Sidford.
This work presents an accelerated gradient method for nonconvex optimization problems with Lipschitz continuous first and second derivatives that is Hessian free, i.e., it only requires gradient computations, and is therefore suitable for large-scale applications.
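To make the "Hessian free" point above concrete, here is a minimal sketch of a gradient-only accelerated scheme: the classical Nesterov accelerated gradient method for a smooth convex objective, not the nonconvex algorithm from the paper. The quadratic test problem and function name are assumptions for illustration only.

```python
import numpy as np

def accelerated_gradient(grad, x0, L, iters=200):
    """Nesterov's accelerated gradient method for an L-smooth convex objective.
    Only gradient evaluations are used -- no Hessians."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_next = y - grad(y) / L                         # gradient step from the extrapolated point
        t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_next + ((t - 1) / t_next) * (x_next - x)   # momentum / extrapolation
        x, t = x_next, t_next
    return x

if __name__ == "__main__":
    # Toy least-squares objective f(x) = 0.5 * ||A x - b||^2 with smoothness L = ||A||_2^2.
    rng = np.random.default_rng(0)
    A, b = rng.standard_normal((30, 10)), rng.standard_normal(30)
    L = np.linalg.norm(A, 2) ** 2
    x = accelerated_gradient(lambda z: A.T @ (A @ z - b), np.zeros(10), L)
    print(np.linalg.norm(A.T @ (A @ x - b)))  # gradient norm should be tiny
```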
We will start with a primer week to learn the very basics of continuous optimization (July 26 - July 30), followed by two weeks of talks by the speakers on more advanced topics.
MS&E welcomes new faculty member, Aaron Sidford!
Aaron Sidford is an assistant professor in the departments of Management Science and Engineering and Computer Science at Stanford University. Contact.
Stability of the Lanczos Method for Matrix Function Approximation. Cameron Musco, Christopher Musco, Aaron Sidford. ACM-SIAM Symposium on Discrete Algorithms (SODA) 2018. [pdf] [talk] [poster]
Abstract: We prove that deterministic first-order methods, even applied to arbitrarily smooth functions, cannot achieve convergence rates in \(\epsilon\) better than \(\epsilon^{-8/5}\), which is within \(\epsilon^{-1/15}\log\frac{1}{\epsilon}\) of the best known rate for such methods. Lower bounds for finding stationary points II: first-order methods.
I maintain a mailing list for my graduate students and the broader Stanford community that is interested in the work of my research group.
Papers may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission.
Given a linear program with n variables, m > n constraints, and bit complexity L, our algorithm runs in \(\tilde{O}(\sqrt{n}\,L)\) iterations, each consisting of solving \(\tilde{O}(1)\) linear systems and additional nearly linear time computation.
In particular, this work presents a sharp analysis of: (1) mini-batching, a method of averaging many stochastic gradients computed at the same iterate.
arXiv preprint arXiv:2301.00457, 2023. STOC 2023. ACM-SIAM Symposium on Discrete Algorithms (SODA), 2022. Stochastic Bias-Reduced Gradient Methods. International Conference on Machine Learning (ICML), 2021. Acceleration with a Ball Optimization Oracle. [pdf] COLT, 2022.
Neural Information Processing Systems (NeurIPS, Spotlight), 2019. Variance Reduction for Matrix Games.
My interests are in the intersection of algorithms, statistics, optimization, and machine learning.
A Faster Cutting Plane Method and its Implications for Combinatorial and Convex Optimization, In Symposium on Foundations of Computer Science (FOCS 2015), Machtey Award for Best Student Paper (arXiv).
Efficient Inverse Maintenance and Faster Algorithms for Linear Programming, In Symposium on Foundations of Computer Science (FOCS 2015) (arXiv).
Competing with the Empirical Risk Minimizer in a Single Pass, With Roy Frostig, Rong Ge, and Sham Kakade, In Conference on Learning Theory (COLT 2015) (arXiv).
Un-regularizing: approximate proximal point and faster stochastic algorithms for empirical risk minimization, In International Conference on Machine Learning (ICML 2015) (arXiv).
Uniform Sampling for Matrix Approximation, With Michael B. Cohen, Yin Tat Lee, Cameron Musco, Christopher Musco, and Richard Peng, In Innovations in Theoretical Computer Science (ITCS 2015) (arXiv).
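The mini-batching and averaging mentioned above can be illustrated on a toy least-squares problem: each step averages a batch of stochastic gradients, and the returned answer is a running average of the iterates. This is only a minimal sketch under assumed synthetic data, not the analysis or algorithm from the paper.

```python
import numpy as np

def minibatch_sgd_least_squares(X, y, step=0.1, batch=16, epochs=20, seed=0):
    """SGD for 0.5 * E[(x^T w - y)^2]: each step averages `batch` stochastic
    gradients (mini-batching) and the final answer averages the iterates."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    avg = np.zeros(d)
    steps = 0
    for _ in range(epochs):
        for _ in range(n // batch):
            idx = rng.integers(0, n, size=batch)
            g = X[idx].T @ (X[idx] @ w - y[idx]) / batch   # averaged stochastic gradient
            w -= step * g
            steps += 1
            avg += (w - avg) / steps                       # running average of iterates
    return avg

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.standard_normal((2000, 5))
    w_true = rng.standard_normal(5)
    y = X @ w_true + 0.1 * rng.standard_normal(2000)
    print(np.linalg.norm(minibatch_sgd_least_squares(X, y) - w_true))
```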
Path-Finding Methods for Linear Programming: Solving Linear Programs in \(\tilde{O}(\sqrt{\mathrm{rank}})\) Iterations and Faster Algorithms for Maximum Flow, In Symposium on Foundations of Computer Science (FOCS 2014), Best Paper Award and Machtey Award for Best Student Paper (arXiv).
Single Pass Spectral Sparsification in Dynamic Streams, With Michael Kapralov, Yin Tat Lee, Cameron Musco, and Christopher Musco.
An Almost-Linear-Time Algorithm for Approximate Max Flow in Undirected Graphs, and its Multicommodity Generalizations, With Jonathan A. Kelner, Yin Tat Lee, and Lorenzo Orecchia, In Symposium on Discrete Algorithms (SODA 2014).
Efficient Accelerated Coordinate Descent Methods and Faster Algorithms for Solving Linear Systems, In Symposium on Foundations of Computer Science (FOCS 2013) (arXiv).
A Simple, Combinatorial Algorithm for Solving SDD Systems in Nearly-Linear Time, With Jonathan A. Kelner, Lorenzo Orecchia, and Zeyuan Allen Zhu, In Symposium on the Theory of Computing (STOC 2013) (arXiv); SIAM Journal on Computing (arXiv before merge).
Derandomization beyond Connectivity: Undirected Laplacian Systems in Nearly Logarithmic Space, With Jack Murtagh, Omer Reingold, and Salil Vadhan, Book chapter in Building Bridges II: Mathematics of Laszlo Lovasz, 2020 (arXiv).
Lower Bounds for Finding Stationary Points II: First-Order Methods.
Unlike previous ADFOCS, this year the event will take place over the span of three weeks. July 8, 2022.
Aaron Sidford joins Stanford's Management Science & Engineering department, launching new winter class CS 269G / MS&E 313: "Almost Linear Time Graph Algorithms."
To appear as a contributed talk at QIP 2023; Quantum Pseudoentanglement.
Multicalibrated Partitions for Importance Weights, Parikshit Gopalan, Omer Reingold, Vatsal Sharan, Udi Wieder, ALT 2022 (arXiv).
Annie Marsden, R. Stephen Berry (arXiv pre-print). arXiv | pdf.
I am particularly interested in work at the intersection of continuous optimization, graph theory, numerical linear algebra, and data structures.
Sivakanth Gopi, Yin Tat Lee, Daogao Liu, Ruoqi Shen, Kevin Tian: Private Convex Optimization in General Norms. AISTATS, 2021.
This work characterizes the benefits of averaging techniques widely used in conjunction with stochastic gradient descent (SGD).
"Improved upper and lower bounds on first-order queries for solving \(\min_{x}\max_{i\in[n]}\ell_i(x)\)." In International Conference on Machine Learning (ICML 2016).
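For the min-max objective \(\min_{x}\max_{i\in[n]}\ell_i(x)\) mentioned above, the simplest first-order baseline queries one subgradient of the maximal loss per step. The sketch below uses made-up piecewise-linear losses and a projected subgradient method; it is an illustration of the query model, not the improved methods from the papers.

```python
import numpy as np

def subgradient_minimax(A, b, iters=2000, radius=1.0):
    """Minimize F(x) = max_i (a_i^T x + b_i) with the projected subgradient method.
    A subgradient of F at x is a_i for any index i attaining the maximum."""
    n, d = A.shape
    x = np.zeros(d)
    best_x, best_val = x.copy(), np.max(A @ x + b)
    for t in range(1, iters + 1):
        i = int(np.argmax(A @ x + b))                   # active loss supplies the subgradient
        x = x - (radius / np.sqrt(t)) * A[i]
        x *= min(1.0, radius / np.linalg.norm(x))       # project back onto the ball of given radius
        val = np.max(A @ x + b)
        if val < best_val:
            best_x, best_val = x.copy(), val
    return best_x, best_val

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A, b = rng.standard_normal((40, 8)), rng.standard_normal(40)
    print(subgradient_minimax(A, b)[1])
```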
Outdated CV [as of Dec'19].
In Innovations in Theoretical Computer Science (ITCS 2018) (arXiv), Derandomization Beyond Connectivity: Undirected Laplacian Systems in Nearly Logarithmic Space.
Instructor: Aaron Sidford. Winter 2018. Time: Tuesdays and Thursdays, 10:30 AM - 11:50 AM. Room: Education Building, Room 128. Here is the course syllabus.
Yin Tat Lee and Aaron Sidford; An almost-linear-time algorithm for approximate max flow in undirected graphs, and its multicommodity generalizations.
ReSQueing Parallel and Private Stochastic Convex Optimization.
I enjoy understanding the theoretical ground of many algorithms that are of practical importance.
Nima Anari, Yang P. Liu, Thuy-Duong Vuong, Maximum Flow and Minimum-Cost Flow in Almost Linear Time, FOCS 2022, Best Paper.
Applying this technique, we prove lower bounds on the number of queries required by any deterministic SFM algorithm.
With Yair Carmon, Aaron Sidford, and Kevin Tian: an accelerated gradient method for nonconvex optimization problems with Lipschitz continuous first and second derivatives.
With Yair Carmon, Arun Jambulapati, and Aaron Sidford, Neural Information Processing Systems (NeurIPS, Oral), 2019: A Near-Optimal Method for Minimizing the Maximum of N Convex Loss Functions.
Conference on Learning Theory (COLT), 2022: RECAPP: Crafting a More Efficient Catalyst for Convex Optimization.
To appear in Innovations in Theoretical Computer Science (ITCS), 2022. Optimal and Adaptive Monteiro-Svaiter Acceleration.
Emphasis will be on providing mathematical tools for combinatorial optimization.
Aaron Sidford, Gregory Valiant, Honglin Yuan, COLT 2022. arXiv | pdf.
Annie Marsden, Sergio Bacallado. arXiv | conference pdf.
Neural Information Processing Systems (NeurIPS, Oral), 2020: Coordinate Methods for Matrix Games.
172 Gates Computer Science Building, 353 Jane Stanford Way, Stanford University.
Annie Marsden, Vatsal Sharan, Aaron Sidford, and Gregory Valiant, Efficient Convex Optimization Requires Superlinear Memory.
Many of my results use fast matrix multiplication.
With Aaron Sidford, AISTATS 2021.
Overview: This class will introduce the theoretical foundations of discrete mathematics and algorithms.
I am broadly interested in mathematics and theoretical computer science.
Aaron Sidford, Introduction to Optimization Theory; Lap Chi Lau, Convexity and Optimization; Nisheeth Vishnoi, Algorithms for Convex Optimization.
One research focus is dynamic algorithms, i.e., algorithms that maintain their output as the underlying input changes.
Neural Information Processing Systems (NeurIPS), 2014.
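As a very small illustration of the dynamic-algorithm setting mentioned above (far simpler than the fully dynamic and decremental problems studied in the papers), the sketch below maintains graph connectivity under edge insertions only, using union-find with path compression. The class name and example are assumptions for illustration.

```python
class IncrementalConnectivity:
    """Maintain connectivity of a graph under edge insertions (union-find with
    path compression); the simplest instance of a dynamic graph algorithm."""

    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, u):
        while self.parent[u] != u:
            self.parent[u] = self.parent[self.parent[u]]  # path compression
            u = self.parent[u]
        return u

    def insert_edge(self, u, v):
        self.parent[self.find(u)] = self.find(v)

    def connected(self, u, v):
        return self.find(u) == self.find(v)

if __name__ == "__main__":
    g = IncrementalConnectivity(5)
    g.insert_edge(0, 1)
    g.insert_edge(3, 4)
    print(g.connected(0, 1), g.connected(1, 4))  # True False
    g.insert_edge(1, 4)
    print(g.connected(0, 3))                     # True
```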
With Jan van den Brand, Yin Tat Lee, Danupon Nanongkai, Richard Peng, Thatchaphol Saranurak, Zhao Song, and Di Wang.
Optimal Sublinear Sampling of Spanning Trees and Determinantal Point Processes via Average-Case Entropic Independence, FOCS 2022.
[pdf] [talk] Personal Website.
Jonathan A. Kelner, Yin Tat Lee, Lorenzo Orecchia, and Aaron Sidford; Computing maximum flows with augmenting electrical flows.
"A special case where variance reduction can be applied to nonconvex optimization (monotone operators)."
Sequential Matrix Completion. 2021.
Google Scholar.
Some I am still actively improving and all of them I am happy to continue polishing.
With Vidya Muthukumar and Aaron Sidford.
With Yair Carmon, Arun Jambulapati, Qijia Jiang, Yin Tat Lee, Aaron Sidford, and Kevin Tian.
With Yair Carmon, Aaron Sidford, and Kevin Tian.
With Yair Carmon, Danielle Hausler, Arun Jambulapati, and Aaron Sidford.
Deeparnab Chakrabarty, Andrei Graur, Haotian Jiang, Aaron Sidford.
Prof. Sidford's paper was chosen from more than 150 accepted papers at the conference.
"I am excited to push the theory of optimization and algorithm design to new heights!" Assistant Professor Aaron Sidford speaks at ICME's Xpo event.
Aaron Sidford (sidford@stanford.edu). Welcome! This page has information and lecture notes from the course "Introduction to Optimization Theory" (MS&E 213 / CS 269O) which I taught in Fall 2019.
I received my PhD from the department of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology, where I was advised by Professor Jonathan Kelner.
"A general continuous optimization framework for better dynamic (decremental) matching algorithms."
If you have been admitted to Stanford, please reach out to discuss the possibility of rotating or working together. Stanford, CA 94305.
I am broadly interested in optimization problems, sometimes in the intersection with machine learning theory and graph applications.
BayLearn, 2019. "Computing a stationary solution for multi-agent RL is hard: indeed, CCE for simultaneous games and NE for turn-based games are both PPAD-hard."
Research Interests: My research interests lie broadly in optimization, the theory of computation, and the design and analysis of algorithms.
Yair Carmon, Arun Jambulapati, Yujia Jin, Yin Tat Lee, Daogao Liu, Aaron Sidford, Kevin Tian: ReSQueing Parallel and Private Stochastic Convex Optimization.
Navajo Math Circles Instructor.
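The electrical-flow primitive behind the maximum-flow work cited above can be shown in a dense, illustrative form: solve a Laplacian system for vertex potentials and read the flow off potential differences. This sketch uses a pseudoinverse rather than a fast Laplacian solver and made-up edge resistances; it is not the augmenting-electrical-flow algorithm itself.

```python
import numpy as np

def electrical_flow(n, edges, s, t, value=1.0):
    """Route `value` units of electrical flow from s to t.
    edges: list of (u, v, resistance). Returns the flow on each edge (u -> v)."""
    L = np.zeros((n, n))
    for u, v, r in edges:
        w = 1.0 / r                        # conductance
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    demand = np.zeros(n)
    demand[s], demand[t] = value, -value
    phi = np.linalg.pinv(L) @ demand       # vertex potentials: L phi = demand
    return [(phi[u] - phi[v]) / r for u, v, r in edges]

if __name__ == "__main__":
    # Triangle graph: the direct edge 0-2 carries 2/3 of the flow, the path 0-1-2 carries 1/3.
    edges = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.0)]
    print(electrical_flow(3, edges, s=0, t=2))
```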
Aaron Sidford. Assistant Professor of Management Science and Engineering and of Computer Science.
CONTACT INFORMATION. Administrative Contact: Jackie Nguyen - Administrative Associate.
International Colloquium on Automata, Languages, and Programming (ICALP), 2022. Sharper Rates for Separable Minimax and Finite Sum Optimization via Primal-Dual Extragradient Methods.
We establish lower bounds on the complexity of finding \(\epsilon\)-stationary points of smooth, non-convex high-dimensional functions using first-order methods.
The Complexity of Infinite-Horizon General-Sum Stochastic Games, Yujia Jin, Vidya Muthukumar, Aaron Sidford, Innovations in Theoretical Computer Science (ITCS 2023).
Yair Carmon, Danielle Hausler, Arun Jambulapati, and Yujia Jin, Advances in Neural Information Processing Systems (NeurIPS 2022).
Moses Charikar, Zhihao Jiang, and Kirankumar Shiragur, Advances in Neural Information Processing Systems (NeurIPS 2022).
Publications indexed by venue: FOCS 2022; ICML 2022; COLT 2022; ICALP 2022; STOC 2022; SODA 2022; NeurIPS 2021; COLT 2021; ICML 2021; STOC 2021; SODA 2021; ITCS 2021; NeurIPS 2020; FOCS 2020; AISTATS 2020; ICML 2020; COLT 2020; STOC 2020; ALT 2020; SODA 2020; NeurIPS 2019; FOCS 2019; COLT 2019; STOC 2019; SODA 2019; NeurIPS 2018; FOCS 2018; COLT 2018; SODA 2018; ITCS 2018; FOCS 2017; ICML 2017; STOC 2017; FOCS 2016; STOC 2016; COLT 2016; ICML 2016.
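To make "finding an \(\epsilon\)-stationary point with a first-order method" (mentioned above) concrete, here is the standard gradient-descent baseline, which for L-smooth functions needs on the order of \(\epsilon^{-2}\) gradient evaluations; this is only an illustrative sketch on an assumed toy objective, not the constructions used in the lower-bound papers.

```python
import numpy as np

def find_stationary_point(grad, x0, L, eps=1e-3, max_iters=100000):
    """Gradient descent with step size 1/L until the gradient norm drops below eps,
    i.e. until an eps-stationary point is found."""
    x = x0.copy()
    for _ in range(max_iters):
        g = grad(x)
        if np.linalg.norm(g) <= eps:
            return x
        x -= g / L
    return x

if __name__ == "__main__":
    # Smooth non-convex toy objective f(x) = sum(cos(x_i)) + ||x||^2 / 20, so grad = -sin(x) + x/10.
    grad = lambda x: -np.sin(x) + x / 10.0
    x = find_stationary_point(grad, np.full(4, 2.0), L=1.1)
    print(np.linalg.norm(grad(x)))
```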
Additional venues include FOCS 2015, COLT 2015, ICML 2015, ITCS 2015, FOCS 2013, and STOC 2013, along with a book chapter in Building Bridges II: Mathematics of Laszlo Lovasz (2020) and an article in the Journal of Machine Learning Research (2017).
"Team-convex-optimization for solving discounted and average-reward MDPs!"
Given an independence oracle, we provide an exact \(O(nr \log r \cdot T_{\mathrm{ind}})\) time algorithm.
arXiv | conference pdf (alphabetical authorship). Jonathan Kelner, Annie Marsden, Vatsal Sharan, Aaron Sidford, Gregory Valiant, Honglin Yuan, Big-Step-Little-Step: Gradient Methods for Objectives with Multiple Scales.
Yang P. Liu, Aaron Sidford, Department of Mathematics.
Neural Information Processing Systems (NeurIPS), 2021. Thinking Inside the Ball: Near-Optimal Minimization of the Maximal Loss.
Szemerédi Regularity Lemma and Arithmetic Progressions, Annie Marsden, July 2015. [pdf]
With Jack Murtagh, Omer Reingold, and Salil P. Vadhan.
With Aaron Sidford, COLT 2022. arXiv | code | conference pdf (alphabetical authorship). Annie Marsden, John Duchi, and Gregory Valiant, Misspecification in Prediction Problems and Robustness via Improper Learning.
Yair Carmon, Arun Jambulapati, Yujia Jin, Yin Tat Lee, Daogao Liu, Aaron Sidford, Kevin Tian.
There will be a talk every day from 16:00-18:00 CEST from July 26 to August 13.
If you see any typos or issues, feel free to email me.
With Yair Carmon, Arun Jambulapati, and Aaron Sidford.
Eigenvalues of the Laplacian and their relationship to the connectedness of a graph. 2016.
"Sample complexity for average-reward MDPs?"
Email: sidford@stanford.edu.
The Complexity of Infinite-Horizon General-Sum Stochastic Games, With Yujia Jin, Vidya Muthukumar, Aaron Sidford, To appear in Innovations in Theoretical Computer Science (ITCS 2023) (arXiv).
Optimal and Adaptive Monteiro-Svaiter Acceleration, With Yair Carmon, Danielle Hausler, Arun Jambulapati, and Yujia Jin, To appear in Advances in Neural Information Processing Systems (NeurIPS 2022) (arXiv).
On the Efficient Implementation of High Accuracy Optimality of Profile Maximum Likelihood, With Moses Charikar, Zhihao Jiang, and Kirankumar Shiragur.
Improved Lower Bounds for Submodular Function Minimization, With Deeparnab Chakrabarty, Andrei Graur, and Haotian Jiang, In Symposium on Foundations of Computer Science (FOCS 2022) (arXiv).
RECAPP: Crafting a More Efficient Catalyst for Convex Optimization, With Yair Carmon, Arun Jambulapati, and Yujia Jin, International Conference on Machine Learning (ICML 2022) (arXiv).
Efficient Convex Optimization Requires Superlinear Memory, With Annie Marsden, Vatsal Sharan, and Gregory Valiant, Conference on Learning Theory (COLT 2022).
Sharper Rates for Separable Minimax and Finite Sum Optimization via Primal-Dual Extragradient Method, Conference on Learning Theory (COLT 2022) (arXiv).
Big-Step-Little-Step: Efficient Gradient Methods for Objectives with Multiple Scales, With Jonathan A. Kelner, Annie Marsden, Vatsal Sharan, Gregory Valiant, and Honglin Yuan.
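The lecture-note topic above on Laplacian eigenvalues and connectedness can be checked numerically: a graph is connected exactly when the second-smallest eigenvalue of its Laplacian (the algebraic connectivity) is positive. A minimal illustration on tiny assumed graphs:

```python
import numpy as np

def algebraic_connectivity(n, edges):
    """Second-smallest eigenvalue of the graph Laplacian; it is positive
    if and only if the graph on vertices 0..n-1 is connected."""
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1; L[v, v] += 1
        L[u, v] -= 1; L[v, u] -= 1
    return np.sort(np.linalg.eigvalsh(L))[1]

if __name__ == "__main__":
    path = [(0, 1), (1, 2), (2, 3)]
    disconnected = [(0, 1), (2, 3)]
    print(algebraic_connectivity(4, path) > 1e-9)          # True
    print(algebraic_connectivity(4, disconnected) > 1e-9)  # False
```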
Regularized Box-Simplex Games and Dynamic Decremental Bipartite Matching, With Arun Jambulapati, Yujia Jin, and Kevin Tian, International Colloquium on Automata, Languages and Programming (ICALP 2022) (arXiv).
Fully-Dynamic Graph Sparsifiers Against an Adaptive Adversary, With Aaron Bernstein, Jan van den Brand, Maximilian Probst, Danupon Nanongkai, Thatchaphol Saranurak, and He Sun.
Faster Maxflow via Improved Dynamic Spectral Vertex Sparsifiers, With Jan van den Brand, Yu Gao, Arun Jambulapati, Yin Tat Lee, Yang P. Liu, and Richard Peng, In Symposium on Theory of Computing (STOC 2022) (arXiv).
Semi-Streaming Bipartite Matching in Fewer Passes and Optimal Space, With Sepehr Assadi, Arun Jambulapati, Yujia Jin, and Kevin Tian, In Symposium on Discrete Algorithms (SODA 2022) (arXiv).
Algorithmic trade-offs for girth approximation in undirected graphs, With Avi Kadria, Liam Roditty, Virginia Vassilevska Williams, and Uri Zwick, In Symposium on Discrete Algorithms (SODA 2022).
Computing Lewis Weights to High Precision, With Maryam Fazel, Yin Tat Lee, and Swati Padmanabhan.
With Hilal Asi, Yair Carmon, Arun Jambulapati, and Yujia Jin, In Advances in Neural Information Processing Systems (NeurIPS 2021) (arXiv).
Thinking Inside the Ball: Near-Optimal Minimization of the Maximal Loss, In Conference on Learning Theory (COLT 2021) (arXiv).
The Bethe and Sinkhorn Permanents of Low Rank Matrices and Implications for Profile Maximum Likelihood, With Nima Anari, Moses Charikar, and Kirankumar Shiragur.
Towards Tight Bounds on the Sample Complexity of Average-reward MDPs, In International Conference on Machine Learning (ICML 2021) (arXiv).
Minimum cost flows, MDPs, and \(\ell_1\)-regression in nearly linear time for dense instances, With Jan van den Brand, Yin Tat Lee, Yang P. Liu, Thatchaphol Saranurak, Zhao Song, and Di Wang, In Symposium on Theory of Computing (STOC 2021) (arXiv).
Ultrasparse Ultrasparsifiers and Faster Laplacian System Solvers, In Symposium on Discrete Algorithms (SODA 2021) (arXiv).
Relative Lipschitzness in Extragradient Methods and a Direct Recipe for Acceleration, In Innovations in Theoretical Computer Science (ITCS 2021) (arXiv).
Acceleration with a Ball Optimization Oracle, With Yair Carmon, Arun Jambulapati, Qijia Jiang, Yujia Jin, Yin Tat Lee, and Kevin Tian, In Conference on Neural Information Processing Systems (NeurIPS 2020).
Instance Based Approximations to Profile Maximum Likelihood, In Conference on Neural Information Processing Systems (NeurIPS 2020) (arXiv).
Large-Scale Methods for Distributionally Robust Optimization, With Daniel Levy*, Yair Carmon*, and John C. Duchi (* denotes equal contribution).
High-precision Estimation of Random Walks in Small Space, With AmirMahdi Ahmadinejad, Jonathan A. Kelner, Jack Murtagh, John Peebles, and Salil P. Vadhan, In Symposium on Foundations of Computer Science (FOCS 2020) (arXiv).
Bipartite Matching in Nearly-linear Time on Moderately Dense Graphs, With Jan van den Brand, Yin Tat Lee, Danupon Nanongkai, Richard Peng, Thatchaphol Saranurak, Zhao Song, and Di Wang, In Symposium on Foundations of Computer Science (FOCS 2020).
Unit Capacity Maxflow in Almost \(O(m^{4/3})\) Time, With Yair Carmon, Yujia Jin, and Kevin Tian, Invited to the special issue (arXiv before merge).
Solving Discounted Stochastic Two-Player Games with Near-Optimal Time and Sample Complexity, In International Conference on Artificial Intelligence and Statistics (AISTATS 2020) (arXiv).
Efficiently Solving MDPs with Stochastic Mirror Descent, In International Conference on Machine Learning (ICML 2020) (arXiv).
Near-Optimal Methods for Minimizing Star-Convex Functions and Beyond, With Oliver Hinder and Nimit Sharad Sohoni, In Conference on Learning Theory (COLT 2020) (arXiv).
Solving Tall Dense Linear Programs in Nearly Linear Time, With Jan van den Brand, Yin Tat Lee, and Zhao Song, In Symposium on Theory of Computing (STOC 2020).
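Several of the MDP papers above concern discounted Markov decision processes; the textbook baseline they improve on is value iteration, sketched here on a tiny random instance. This is only an illustration of what "solving a discounted MDP" means, not the variance-reduced or mirror-descent methods from the papers, and the instance is made up.

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """P: transition probabilities with shape (actions, states, states),
    R: rewards with shape (actions, states). Returns optimal values and a greedy policy."""
    n_actions, n_states, _ = P.shape
    v = np.zeros(n_states)
    while True:
        q = R + gamma * P @ v          # Q(a, s) = R(a, s) + gamma * sum_s' P(a, s, s') v(s')
        v_new = q.max(axis=0)
        if np.max(np.abs(v_new - v)) < tol * (1 - gamma):
            return v_new, q.argmax(axis=0)
        v = v_new

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    P = rng.random((2, 4, 4)); P /= P.sum(axis=2, keepdims=True)
    R = rng.random((2, 4))
    values, policy = value_iteration(P, R)
    print(values, policy)
```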