
 
Budgeted Prediction With Expert Advice. 
2015 

With Kareem Amin, Gerald Tesauro, and Deepak Turaga. In 29^{th} AAAI, 2015.


Abstract
We consider a budgeted variant of the problem of learning from expert advice with \(N\) experts. Each queried expert incurs a cost and there is a given budget \(B\) on the total cost of experts that can be queried in any prediction round. We provide an online learning algorithm for this setting with regret after \(T\) prediction rounds bounded by \(O\!\!\left(\sqrt{\frac{C}{B}\log(N)T}\right)\), where \(C\) is the total cost of all experts. We complement this upper bound with a nearly matching lower bound \(\Omega\!\left(\sqrt{\frac{C}{B}T}\right)\) on the regret of any algorithm for this problem. We also provide experimental validation of our algorithm. 

Taming the Monster: A Fast and Simple Algorithm for Contextual Bandits. 
2014 

With Alekh Agarwal, Daniel Hsu, John Langford, Lihong Li, and Robert E. Schapire. In 31^{st} ICML, 2014.


Abstract
We present a new algorithm for the contextual bandit learning problem,
where the learner repeatedly takes one of \(K\) actions in response to the
observed context, and observes the reward only for that
chosen
action. Our method assumes access to an oracle for solving fully
supervised cost-sensitive classification problems and achieves the
statistically optimal regret guarantee with only \(\tilde{O}(\sqrt{KT})\)
oracle calls across all \(T\) rounds. By doing so, we obtain the most
practical contextual bandit learning algorithm amongst approaches that
work for general policy classes. We further conduct a
proof-of-concept experiment which demonstrates the excellent
computational and prediction performance of (an online variant of) our
algorithm relative to several baselines. 

Multi-armed Bandits With Limited Expert Advice. 
2014 

In 27^{th} COLT, 2014.


Abstract
We consider the problem of minimizing regret in the setting of advice-efficient multi-armed bandits with expert advice. We give an algorithm for the setting of \(K\) arms and \(N\) experts out of which we are allowed to query and use only \(M\) experts' advice in each round, which has a regret bound of \(\tilde{O}\left(\sqrt{\frac{\min\{K, M\} N}{M} T}\right)\) after \(T\) rounds. We also prove that any algorithm for this problem must have expected regret at least \(\tilde{\Omega}\left(\sqrt{\frac{\min\{K, M\} N}{M}T}\right)\), thus showing that our upper bound is nearly tight. This solves the COLT 2013 open problem of Seldin et al. 

Bargaining for Revenue Shares on Tree
Trading Networks. 
2013 

With Arpita Ghosh, Kevin Lang, and
Benjamin Moseley. In 23^{rd} IJCAI, 2013.


Abstract
We study trade networks with a tree structure, where a seller with a single indivisible
good is connected to buyers, each with some value for the good, via a unique path of intermediaries.
Agents in the tree make multiplicative revenue share offers to their parent nodes, who choose the
best offer and offer part of it to their parent, and so on; the winning path is determined by who
finally makes the highest offer to the seller. In this paper, we investigate how these revenue
shares might be set via a natural bargaining process between agents on the tree, specifically,
egalitarian bargaining between endpoints of each edge in the tree. We investigate the fixed point
of this system of bargaining equations and prove various desirable properties of this solution concept,
including (i) existence, (ii) uniqueness, (iii) efficiency, (iv) membership in the core, (v) strict
monotonicity, (vi) polynomial-time computability to any given accuracy. Finally, we present
numerical evidence that asynchronous dynamics with randomly ordered updates always converges to the
fixed point, indicating that the fixed point shares might arise from decentralized bargaining
amongst agents on the trade network. 

The Approximability of
the Binary Paintshop Problem 
2013 

With Anupam Gupta, Viswanath Nagarajan, Rishi Saket, and Baruch Schieber. In 16^{th} APPROX, 2013.


Abstract
In the Binary Paintshop problem,
there are $m$ cars appearing in a sequence of length $2m$, with each car occurring twice. Each car
needs to be colored with two colors. The goal is to choose for each car, which of its occurrences
receives either color, so as to minimize the total number of color changes in the sequence. We show
that the Binary Paintshop problem is equivalent (up to constant factors) to the Minimum Uncut
problem, under randomized reductions. By derandomizing this reduction for hard instances of the
Minimum Uncut problem arising from the Unique Games Conjecture, we show that the Binary Paintshop
problem is $\omega(1)$-hard to approximate (assuming the UGC). This answers an open question from
Eppinger et al (2006), Meunier and Sebo (2009) and Andres and Hochstattler (2011). 

Near-Optimal Algorithms for Online Matrix Prediction 
2012 

With Elad Hazan and Shai Shalev-Shwartz. In 25^{th} COLT, 2012.


Abstract
In several online prediction problems of recent interest the
comparison class is composed of matrices with bounded entries. For
example, in the online max-cut problem, the comparison class is
matrices which represent cuts of a given graph and in online
gambling the comparison class is matrices which represent
permutations over n teams. Another important example is online
collaborative filtering in which a widely used comparison class is
the set of matrices with a small trace norm. In this paper we
isolate a property of matrices, which we call
$(\beta,\tau)$-decomposability, and derive an efficient online
learning algorithm, that enjoys a regret bound of
$\tilde{O}(\sqrt{\beta \tau T})$ for all problems in which the
comparison class is composed of $(\beta,\tau)$-decomposable
matrices. By analyzing the decomposability of cut matrices,
triangular matrices, and low trace-norm matrices, we derive near-optimal
regret bounds for online max-cut, online gambling, and online
collaborative filtering. In particular, this resolves (in the
affirmative) an open problem posed by Abernethy (2010) and Kleinberg et al (2010). Finally, we
derive lower bounds for the three problems and show that our upper
bounds are optimal up to logarithmic factors. In particular, our lower bound
for the online collaborative filtering problem resolves another open
problem posed by Shamir and Srebro (2011). 

Projection-free Online Learning 
2012 

With Elad Hazan. In 29^{th} ICML, 2012.


Abstract
The computational bottleneck in applying online learning to massive data sets is usually the projection step. We present efficient online learning algorithms that eschew projections in favor of much more efficient linear optimization steps using the Frank-Wolfe technique. We obtain a range of regret bounds for online convex optimization, with better bounds for specific cases such as stochastic online smooth convex optimization.
Besides the computational advantage, other desirable features of our algorithms are that they are parameter-free in the stochastic case and produce sparse decisions. We apply our algorithms to computationally intensive applications of collaborative filtering, and show the theoretical improvements to be clearly visible on standard datasets. 
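
The core idea, replacing a projection with a linear optimization step, can be sketched in a hedged toy form. Below is a generic Frank-Wolfe iteration over the probability simplex (the quadratic objective and the 2/(t+2) step-size schedule are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np

def frank_wolfe_step(x, grad, t):
    """One Frank-Wolfe step over the probability simplex.

    Instead of a (potentially expensive) projection, we solve a linear
    problem min_s <grad, s> over the feasible set; for the simplex the
    minimizer is a vertex, i.e. a coordinate basis vector.
    """
    s = np.zeros_like(x)
    s[np.argmin(grad)] = 1.0          # linear optimization step
    gamma = 2.0 / (t + 2.0)           # standard step-size schedule
    return (1.0 - gamma) * x + gamma * s

# Toy usage: minimize f(x) = ||x - c||^2 over the simplex (c is interior).
c = np.array([0.2, 0.7, 0.1])
x = np.ones(3) / 3.0
for t in range(200):
    x = frank_wolfe_step(x, 2.0 * (x - c), t)
```

Note that every iterate is a convex combination of at most a few vertices, which is the source of the sparse decisions mentioned in the abstract.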

Efficient and Practical Stochastic Subgradient Descent for Nuclear Norm Regularization 
2012 

With Haim Avron, Shiva Prasad Kasiviswanathan, and Vikas Sindhwani. In 29^{th} ICML, 2012.


Abstract
We describe novel subgradient methods for a broad class of matrix optimization problems involving nuclear norm regularization. Unlike existing approaches, our method executes very cheap iterations by combining low-rank stochastic subgradients with efficient incremental SVD updates, made possible by highly optimized and parallelizable dense linear algebra operations on small matrices. Our practical algorithms always maintain a low-rank factorization of iterates that can be conveniently held in memory and efficiently multiplied to generate predictions in matrix completion settings. Empirical comparisons confirm that our approach is highly competitive with several recently proposed state-of-the-art solvers for such problems. 

Contextual Bandit Learning with Predictable Rewards 
2012 

With Alekh Agarwal, Miroslav Dudik, John Langford and Robert E. Schapire. In 15^{th} AISTATS, 2012.


Abstract
Contextual bandit learning is a reinforcement learning problem where
the learner repeatedly receives a set of features (context), takes
an action and receives a reward based on the action and context. We
consider this problem under a realizability assumption: there exists
a function in a (known) function class, always capable of predicting
the expected reward, given the action and context. Under this
assumption, we show three things. We present a new
algorithm, Regressor Elimination, with a regret similar to the
agnostic setting (i.e., in the absence of the realizability
assumption). We prove a new lower bound showing no algorithm can
achieve superior performance in the worst case even with the
realizability assumption. However, we do show that for any
set of policies (mapping contexts to actions), there is a
distribution over rewards (given context) such that our new
algorithm has constant regret unlike the previous approaches. 

Newtron: an Efficient Bandit algorithm for Online Multiclass Prediction. 
2011 

With Elad Hazan. In 25^{th} NIPS, 2011.


Abstract
We present an efficient algorithm for the problem of online multiclass prediction with bandit feedback in the fully adversarial setting. We measure its regret with respect to the log-loss defined in (Abernethy and Rakhlin, 2009), which is parameterized by a scalar $\alpha$. We prove that the regret of Newtron is O(log T) when $\alpha$ is a constant that does not vary with horizon T, and at most O(T^{2/3}) if $\alpha$ is allowed to increase to infinity with T. For $\alpha$ = O(log T), the regret is bounded by O(√T), thus solving the open problem of Kakade et al (2008) and Abernethy and Rakhlin (2009). Our algorithm is based on a novel application of the online Newton method (Hazan et al, 2007). We test our algorithm and show it to perform well in experiments, even when $\alpha$ is a small constant. 

Efficient optimal learning for contextual bandits. 
2011 

With Miroslav Dudik, Daniel Hsu, Nikos Karampatziakis, John Langford, Lev Reyzin and Tong Zhang. In 27^{th} UAI,
2011.


Abstract
We address the problem of learning in an online setting where the learner repeatedly observes features, selects among a set of actions,
and receives reward for the action taken. We
provide the first efficient algorithm with an
optimal regret. Our algorithm uses a cost
sensitive classification learner as an oracle
and has a running time polylog(N), where N
is the number of classification rules among
which the oracle might choose. This is exponentially faster than all previous algorithms
that achieve optimal regret in this setting.
Our formulation also enables us to create an
algorithm with regret that is additive rather
than multiplicative in feedback delay as in all
previous work. 

Beyond the regret minimization barrier: an optimal algorithm for stochastic strongly-convex optimization. 
2011 

With Elad Hazan. In 24^{th} COLT,
2011.


Abstract
We give a novel algorithm for stochastic strongly-convex optimization in the gradient oracle model which returns an O(1/T)-approximate solution after T gradient updates. This rate of convergence is optimal in the gradient oracle model. This improves upon the previously known best rate of O(log(T)/T), which was obtained by applying an online strongly-convex optimization algorithm with regret O(log(T)) to the batch setting. We complement this result by proving that any algorithm has expected regret of Ω(log(T)) in the online stochastic strongly-convex optimization setting. This lower bound holds even in the full-information setting, which reveals more information to the algorithm than just gradients. This shows that any online-to-batch conversion is inherently suboptimal for stochastic strongly-convex optimization. This is the first formal evidence that online convex optimization is strictly more difficult than batch stochastic convex optimization. 

Who Moderates the Moderators?
Crowdsourcing Abuse Detection in
User-generated Content 
2011 

With Arpita Ghosh and R. Preston McAfee. In 12^{th} EC,
2011.


Abstract
A large fraction of usergenerated content on the Web, such as posts or comments on popular online forums, consists of abuse or spam. Due to the volume of contributions on popular sites, a few trusted moderators cannot identify all such abusive content, so viewer ratings of contributions must be used for moderation. But not all viewers who rate content are trustworthy and accurate. What is a principled approach to assigning trust and aggregating user ratings, in order to accurately identify abusive content?
In this paper, we introduce a framework to address the
problem of moderating online content using crowdsourced
ratings. Our framework encompasses users who are untrustworthy
or inaccurate to an unknown extent; that is, both the content
and the raters are of unknown quality. With no knowledge
whatsoever about the raters, it is impossible to do better than
a random estimate. We present efficient algorithms to
accurately detect abuse that only require knowledge about the
identity of a single `good' agent, who rates
contributions accurately more than half the time. We prove that
our algorithm can infer the quality of contributions with error
that rapidly converges to zero as the number of observations
increases; we also numerically demonstrate that the algorithm
achieves very high accuracy with far fewer observations. Finally, we
analyze the robustness of our algorithms to manipulation by
adversarial or strategic raters, an important issue in
moderating online content, and quantify how the performance of
the algorithm degrades with the number of manipulating
agents. 

CrossValidation and MeanSquare Stability 
2011 

With
Ravi Kumar and Sergei
Vassilvitskii. In 2^{nd} ICS
2011.


Abstract
k-fold cross-validation is a popular
practical method to get a good estimate of the
error rate of a learning algorithm. Here, the
set of examples is first partitioned into k
equal-sized folds. Each fold acts as a test
set for evaluating the hypothesis learned on
the other k-1 folds. The average error
across the k hypotheses is used as an
estimate of the error rate. Although widely
used, especially with small values of k
(such as 10), the technique has heretofore
resisted theoretical analysis. With only
sanity-check bounds known, there is not a
compelling reason to use the k-fold
cross-validation estimate over a simpler
holdout estimate. The complications stem from
the fact that the k distinct estimates have
intricate correlations between
them. 
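
The estimator described above can be sketched as follows (the mean-predictor learner below is a hypothetical stand-in used only to make the sketch runnable; it is not from the paper):

```python
import numpy as np

def k_fold_estimate(X, y, train, predict, k=10, seed=0):
    """k-fold cross-validation estimate of a learner's error.

    The examples are partitioned into k equal-sized folds; each fold
    serves once as the test set for the hypothesis trained on the other
    k-1 folds, and the k test errors are averaged.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    errors = []
    for i in range(k):
        test = folds[i]
        tr = np.concatenate([folds[j] for j in range(k) if j != i])
        model = train(X[tr], y[tr])
        errors.append(np.mean((predict(model, X[test]) - y[test]) ** 2))
    return float(np.mean(errors))

# Toy learner: always predict the training mean (illustrative only).
train = lambda X, y: y.mean()
predict = lambda m, X: np.full(len(X), m)

X = np.zeros((100, 1))
y = np.random.default_rng(1).normal(size=100)
err = k_fold_estimate(X, y, train, predict, k=10)
```

The intricate correlations the abstract mentions arise because the k training sets overlap heavily: each pair of folds shares k-2 of its k-1 training folds.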

Combinatorial Approximation Algorithms for
MaxCut using Random Walks 
2011 

With C.
Seshadhri. In 2^{nd} ICS
2011. arXiv
version.


Abstract
We give the first combinatorial approximation
algorithm for MaxCut that beats the trivial
0.5 factor by a constant. The main
partitioning procedure is very intuitive,
natural, and easily described. It essentially
performs a number of random walks and
aggregates the information to provide the
partition. We can control the running time to
get a trade-off between approximation factor and
running time. We show that for any constant b >
1.5, there is an O(n^{b}) algorithm that
outputs a (0.5+δ) approximation for
MaxCut, where δ = δ(b) is some
positive constant.
One of the components of our algorithm is a
weak local graph partitioning procedure that
may be of independent interest. Given a
starting vertex i and a conductance
parameter φ, unless a random walk of length
l = O(log n) starting from i mixes rapidly
(in terms of φ and l), we can find a cut
of conductance at most φ close to the
vertex. The work done per vertex found in the
cut is sublinear in n. 

Non-Stochastic Bandit Slate Problems 
2010 

With Lev
Reyzin and Robert
E. Schapire. In 24^{th} NIPS
2010.


Abstract
We consider bandit problems, motivated
by applications in
online advertising and news story
selection, in which the learner must
repeatedly select a slate, that is,
a subset of size s from K
possible actions, and then receives
rewards for just the selected
actions.
The goal is to minimize the regret
with respect to total reward of the best
slate computed in hindsight. We
consider unordered and ordered versions of
the problem, and give efficient
algorithms which have regret
O(√T), where the constant depends
on the
specific nature of the problem. We also
consider versions of the problem where
we have access to a number of policies
which make recommendations for slates in
every round, and give algorithms with
O(√T) regret for competing with
the best such policy as well. We make use
of the technique of relative
entropy projections combined with the
usual multiplicative weight update
algorithm to obtain our algorithms. 

Learning rotations with little regret 
2010 

With Elad
Hazan and Manfred
K. Warmuth. In 23^{rd} COLT
2010. Corrigendum.


Abstract
We describe online algorithms for learning a
rotation from
pairs of unit vectors in R^{n}.
We show that the expected regret of our online
algorithm
compared to the best fixed rotation chosen
offline
is O(√(nL)), where L is the loss
of the best rotation. We also give a lower
bound that proves
that this expected regret bound is optimal
within a constant factor.
This resolves an open problem posed in COLT
2008.
Our online algorithm for choosing a rotation
matrix
in each trial is based on the
Follow-The-Perturbed-Leader paradigm. It adds
a random spectral perturbation to the matrix
characterizing
the loss incurred so far and then chooses the
best rotation
matrix for that loss.
We also show that
any deterministic algorithm for learning
rotations has Ω(T)
regret in the worst case.
Corrigendum: There is an unfortunate
error in our paper "Learning
rotations with little regret"
which appeared in COLT 2010. A fix is posted
here as a corrigendum to our paper. 

Online Submodular Minimization 
2009 

With Elad
Hazan. In 23^{rd} NIPS
2009.


Abstract
We consider an online decision problem
over a discrete space in which the loss
function is submodular. We give algorithms
which are computationally efficient
and are Hannan-consistent in both the full
information and bandit settings. 

On Stochastic and Worstcase Models for
Investing 
2009 

With Elad
Hazan. In 23^{rd} NIPS
2009.


Abstract
In practice, most investing is done assuming
a probabilistic model of stock
price returns known as the Geometric Brownian
Motion (GBM). While it is often
an acceptable approximation, the GBM model is
not always valid empirically. This motivates a
worstcase approach to investing, called
universal portfolio management, where the
objective is to maximize wealth relative to
the wealth earned by the best fixed portfolio
in hindsight.
In this paper we tie the two approaches, and
design an investment strategy which is
universal in the worstcase, and yet capable
of exploiting the mostly valid GBM model. Our
method is based on new and improved regret
bounds for
online convex optimization with exp-concave
loss functions. 

Better Algorithms for Benign Bandits 
2009 

With Elad
Hazan. In 20^{th} SODA
2009. Accepted to Journal of Machine
Learning Research.


Abstract
The online multi-armed bandit problem and its
generalizations are repeated
decision making problems, where the goal is to
select one of several possible
decisions in every round, and incur a cost
associated with the decision, in
such a way that the total cost incurred over
all iterations is close to the
cost of the best fixed decision in hindsight.
The difference in these costs is
known as the regret of the algorithm.
The term bandit refers to the
setting where one only obtains the cost of the
decision used in a given
iteration and no other information.
Perhaps the most general form of this problem
is the non-stochastic bandit
linear optimization problem, where the set of
decisions is a convex set in some
Euclidean space, and the cost functions are
linear. Only recently an efficient
algorithm attaining Õ(√T)
regret was discovered in this setting.
In this paper we propose a new algorithm for
the bandit linear optimization
problem which obtains a regret bound of
Õ(√Q), where Q is the
total variation in the cost functions. This
regret bound, previously
conjectured to hold in the full information
case, shows that it is possible to
incur much less regret in a slowly changing
environment even in the bandit
setting. Our algorithm is efficient and
applies several new ideas to bandit
optimization such as reservoir sampling. 
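
The reservoir sampling primitive mentioned above, in its classic textbook form (this is the generic technique, not the paper's bandit algorithm), keeps a uniform sample from a stream of unknown length:

```python
import random

def reservoir_sample(stream, k, seed=0):
    """Maintain a uniform random sample of k items from a stream of
    unknown length using O(k) memory (classic reservoir sampling)."""
    rng = random.Random(seed)
    reservoir = []
    for t, item in enumerate(stream):
        if t < k:
            reservoir.append(item)
        else:
            # Item t replaces a reservoir slot with probability k/(t+1),
            # which keeps every prefix item equally likely to survive.
            j = rng.randint(0, t)
            if j < k:
                reservoir[j] = item
    return reservoir

sample = reservoir_sample(range(10_000), 5)
```

In a bandit setting the appeal is the same as in streaming: the sample is maintained online with constant memory, without revisiting past observations.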

Noise Tolerance of Expanders and Sublinear
Expander Reconstruction 
2008 

With
Yuval Peres and C.
Seshadhri. In 49^{th}
FOCS, 2008.


Abstract
We consider the problem of online
sublinear expander reconstruction and its
relation to random walks in "noisy"
expanders. Given access to an adjacency list
representation of a boundeddegree graph G, we
want to convert this graph into a
boundeddegree expander G' changing G as
little as possible. The graph G' will be
output by a distributed filter: this is
a sublinear time procedure that given a query
vertex, outputs all its neighbors in G', and
can do so even in a distributed manner,
ensuring consistency in all the answers.
One of the main tools in our analysis is a
result on the behavior of random walks in
graphs that are almost expanders: graphs that
are formed by arbitrarily connecting a small
unknown graph (the noise) to a large expander.
We show that a random walk from almost any
vertex in the expander part will have fast
mixing properties, in the general setting of
irreducible finite Markov chains. We also
design sublinear time procedures to
distinguish vertices of the expander part from
those in the noise part, and use this
procedure in the reconstruction
algorithm. 

Extracting Certainty from Uncertainty:
Regret Bounded by Variation in Costs 
2008 

With Elad Hazan. In 21^{st} COLT
2008. Invited to Machine Learning Journal
special issue for COLT 2008 (80(2-3), 165-188,
2010).


Abstract
Prediction from expert advice is a
fundamental problem in machine learning. A
major pillar of the field is the existence of
learning algorithms whose average loss
approaches that of the best expert in
hindsight (in other words, whose average
regret approaches zero). Traditionally the
regret of online algorithms was bounded in
terms of the number of prediction rounds.
Cesa-Bianchi, Mansour and Stoltz posed the
question whether it is possible to bound
the regret of an online algorithm by the
variation of the observed costs. In
this paper we resolve this question, and prove
such bounds in the fully adversarial setting,
in two important online learning scenarios:
prediction from expert advice, and online
linear optimization. 

An Expansion Tester for Bounded Degree Graphs 
2008 

With C.
Seshadhri. In 35^{th} ICALP
2008. Earlier version: ECCC Report
TR07-076. Accepted to SIAM Journal on Computing.


Abstract
We consider the problem of testing graph expansion (either
vertex or edge) in the bounded degree model [GR]. We give
a property tester that given a graph with degree bound
$d$, an expansion bound $\alpha$, and a parameter
$\varepsilon > 0$, accepts the graph with high probability if its expansion is
more than $\alpha$, and rejects it with high probability if it
is $\varepsilon$-far from any graph with expansion $\alpha'$ with
degree bound $d$, where $\alpha' < \alpha$ is a function of
$\alpha$. For edge expansion, we obtain $\alpha' = \Omega(\alpha^2/d)$. In either
case, the algorithm runs in time $O(n^{(1+\mu)/2}d^2/\varepsilon\alpha^2)$ for any
given constant $\mu > 0$. 

The Uniform Hardcore Lemma via Approximate
Bregman Projections 
2007 

With Moritz
Hardt and Boaz
Barak. In 20^{th} SODA
2009. Earlier version: ECCC Report
TR07-131.


Abstract
We give a simple, more efficient and uniform
proof of the hardcore lemma, a fundamental
result in complexity theory with applications
in machine learning and cryptography. Our
result follows from the connection between
boosting algorithms and hardcore set
constructions discovered by Klivans and
Servedio. Informally stated, our result is
the following: suppose we fix a family of
boolean functions. Assume there is an
efficient algorithm which for every input
length and every smooth distribution (i.e.
one that doesn't assign too much weight to any
single input) over the inputs produces a
circuit such that the circuit computes the
boolean function noticeably better than
random. Then, there is an efficient algorithm
which for every input length produces a
circuit that computes the function correctly
on almost all inputs.
Our algorithm significantly simplifies
previous proofs of the uniform and the
nonuniform hardcore lemma, while matching or
improving the previously best known
parameters. The algorithm uses a generalized
multiplicative update rule combined with a
natural notion of approximate Bregman
projection. Bregman projections are widely
used in convex optimization and machine
learning. We present an algorithm which
efficiently approximates the Bregman
projection onto the set of high density
measures when the Kullback-Leibler divergence
is used as a distance function. Our algorithm
has a logarithmic runtime over any domain
provided that we can efficiently sample from
this domain. High density measures correspond
to smooth distributions which arise naturally,
for instance, in the context of online
learning. Hence, our technique may be of
independent interest. 

Computational Equivalence of Fixed Points and No Regret
Algorithms, and Convergence to Equilibria 
2007 

With Elad
Hazan. In 21^{st}
NIPS, 2007.


Abstract
We study the relation between notions of gametheoretic equilibria which are
based on stability under a set of deviations, and empirical equilibria which
are reached by rational players. Rational players are modeled by players using
no regret algorithms, which guarantee that their payoff in the long run is
close to the maximum they could hope to achieve by consistently deviating
from the algorithm's suggested action.
We show that for a given set of deviations over the strategy set of a player,
it is possible to efficiently approximate fixed points of a given deviation if
and only if there exist efficient no regret algorithms resistant to the
deviations. Further, we show that if all players use a no regret algorithm,
then the empirical distribution of their plays converges to an
equilibrium. 

Efficient Algorithms using the Multiplicative Weights Update
Method 
2007 

Ph.D. thesis. Princeton
Tech Report TR-804-07.


Abstract
Algorithms based on convex optimization, especially linear and
semidefinite programming, are ubiquitous in Computer Science.
While there are polynomial time algorithms known to solve such
problems, quite often the running time of these algorithms is
very high. Designing simpler and more efficient algorithms is
important for practical impact.
In this thesis, we explore applications of the Multiplicative
Weights method in the design of efficient algorithms for
various optimization problems. This method, which was
repeatedly discovered in quite diverse fields, is an
algorithmic technique which maintains a distribution on a
certain set of interest, and updates it iteratively by
multiplying the probability mass of elements by suitably
chosen factors based on feedback obtained by running another
algorithm on the distribution.
We present a single meta-algorithm which unifies all known
applications of this method in a common framework. Next, we
generalize the method to the setting of symmetric matrices
rather than real numbers. We derive the following applications
of the resulting Matrix Multiplicative Weights algorithm:
- The first truly general, combinatorial, primal-dual method
for algorithms for semidefinite programming. Using these
techniques, we obtain significantly faster algorithms for
obtaining O(√log(n)) approximations to various graph
partitioning problems, such as Sparsest Cut, Balanced
Separator in both directed and undirected weighted graphs, and
constraint satisfaction problems such as Min UnCut and Min
2CNF Deletion.
- An Õ(n^{3}) time derandomization of the Alon-Roichman
construction of expanders using Cayley graphs. The algorithm
yields a set of O(log n) elements which generates an expanding
Cayley graph in any group of n elements.
- An Õ(n^{3}) time deterministic O(log n) approximation
algorithm for the quantum hypergraph covering problem.
- An alternative proof of a result of Aaronson that the
gamma-fat-shattering dimension of quantum states on n qubits
is O(n/gamma^{2}).
Using our framework for the classical Multiplicative Weights
Update method, we derive the following algorithmic
applications:
- Fast algorithms for approximately solving several families
of semidefinite programs which beat interior point methods.
Our algorithms rely on eigenvector computations, which are
very efficient in practice compared to the Cholesky
decompositions needed by interior point methods. We also give
a matrix sparsification algorithm to speed up the eigenvector
computation using the Lanczos iteration.
- O(√log(n)) approximation to the Sparsest Cut and the
Balanced Separator problems in undirected weighted graphs in
Õ(n^{2}) time by embedding expander flows in the graph. This
improves upon the previous Õ(n^{4.5}) time algorithm of
Arora, Rao, and Vazirani, which was based on semidefinite
programming.
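
The multiplicative update at the heart of the thesis, maintaining a distribution and multiplying probability masses by loss-dependent factors, can be sketched in a toy form (the loss matrix and parameter below are illustrative assumptions, not taken from the thesis):

```python
import numpy as np

def multiplicative_weights(losses, eta=0.1):
    """Generic Multiplicative Weights meta-algorithm.

    Maintains weights over n elements; after each round, the weight of
    element i is multiplied by (1 - eta)^{loss_i}, so elements that
    perform badly lose probability mass over time.
    """
    n = losses.shape[1]
    w = np.ones(n)
    total_loss = 0.0
    for loss in losses:                 # per-round costs in [0, 1]
        p = w / w.sum()                 # current distribution
        total_loss += p @ loss          # expected loss this round
        w *= (1.0 - eta) ** loss        # multiplicative update
    return total_loss, w / w.sum()

# Toy run: element 0 is consistently best (loss 0.1 every round).
rng = np.random.default_rng(0)
L = rng.uniform(0.4, 1.0, size=(500, 4))
L[:, 0] = 0.1
alg_loss, p = multiplicative_weights(L)
```

The standard analysis bounds the algorithm's total expected loss by roughly the best element's loss plus ln(n)/eta, which is the guarantee driving the applications listed above.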


A Combinatorial, PrimalDual approach to Semidefinite
Programs 
2007 

With Sanjeev
Arora. In 39^{th}
STOC, 2007.


Abstract
Semidefinite programs (SDP) have been used in many recent
approximation algorithms. We develop a general primal-dual
approach to solve SDPs using a generalization of the
well-known multiplicative weights update rule to symmetric
matrices. For a number of problems, such as Sparsest Cut
and Balanced Separator in undirected and directed
weighted graphs, and the Min UnCut problem, this yields
combinatorial approximation algorithms that are significantly
more efficient than interior point methods. The design of our
primal-dual algorithms is guided by a robust analysis of
rounding algorithms used to obtain integer solutions from
fractional ones. 

Privacy, Accuracy, and Consistency Too:
A Holistic Solution to Contingency Table Release 
2007 

With Boaz
Barak, Kamalika
Chaudhuri, Cynthia
Dwork,
Frank
McSherry and
Kunal
Talwar. In 26^{th} PODS,
2007.


Abstract
The contingency table is a work horse of official statistics,
the format of reported data for the US Census, Bureau of Labor
Statistics, and the Internal Revenue Service. In many settings
such as these privacy is not only ethically mandated, but
frequently legally as well. Consequently there is an extensive
and diverse literature dedicated to the problems of
statistical disclosure control in contingency table release.
However, all current techniques for reporting contingency
tables fall short on at least one of privacy, accuracy, and
consistency (among multiple released tables). We propose a
solution that provides strong guarantees for all three
desiderata simultaneously.
Our approach can be viewed as a special case of a more general
approach for producing synthetic data:
Any privacy-preserving mechanism for contingency table release
begins with raw data and produces a (possibly inconsistent)
privacy-preserving set of marginals. From these tables alone,
and hence without weakening privacy, we will find and
output the "nearest" consistent set of marginals.
Interestingly, this set is no farther than the tables of the
raw data, and consequently the additional error introduced by
the imposition of consistency is no more than the error
introduced by the privacy mechanism itself.
The privacy mechanism of Dwork, McSherry, Nissim, Smith '06 gives the strongest
known privacy guarantees, with very little error. Combined
with the techniques of the current paper, we therefore obtain
excellent privacy, accuracy, and consistency among the tables.
Moreover, our techniques are surprisingly efficient.
Our techniques apply equally well to the logical cousin of
the contingency table, the OLAP cube. 

A Fast Random Sampling Algorithm for Sparsifying Matrices 
2006 

With Sanjeev
Arora and Elad
Hazan. In 10^{th} RANDOM
2006.


Abstract
We describe a simple random-sampling based procedure
for producing sparse matrix approximations. Our procedure
and analysis are extremely simple: the analysis uses
nothing more than the Chernoff-Hoeffding bounds. Despite
the simplicity, the approximation is comparable to, and
sometimes better than, that of previous work.
Our algorithm computes the sparse matrix approximation
in a single pass over the data. Further, most of the
entries in the output matrix are quantized, and can be
succinctly represented by a bit vector, thus leading to
considerable savings in space. 
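This style of entrywise sampling can be sketched in a few lines. The following is a minimal illustration, not the paper's exact procedure: the function name `sparsify` and the single threshold `eps` are choices made here. Large entries are kept exactly; each small entry is rounded to a quantized value ±eps with probability proportional to its magnitude, so the output is unbiased entry by entry.

```python
import numpy as np

def sparsify(A, eps, rng=None):
    """Entrywise random-sampling sparsifier (illustrative sketch).

    Entries with |A_ij| > eps are kept exactly. Each smaller entry is
    rounded to sign(A_ij) * eps with probability |A_ij| / eps and to 0
    otherwise, so E[S_ij] = A_ij for every entry, and the retained
    small entries take only the quantized values +/- eps.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    big = np.abs(A) > eps                # entries kept exactly
    p = np.abs(A) / eps                  # keep-probability for small entries
    keep = rng.random(A.shape) < p
    return np.where(big, A, np.where(keep, np.sign(A) * eps, 0.0))
```

Since the quantized entries all share the magnitude eps, their positions and signs can indeed be stored as a bit pattern, which is the source of the space savings the abstract mentions.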

Efficient Aggregation Algorithms for Probabilistic Data 
2006 

With T. S.
Jayram and Erik Vee. In 18^{th} SODA 2007.


Abstract
We study the problem of computing aggregation operators on
probabilistic data in an I/O efficient manner. Algorithms for
aggregation operators such as SUM, COUNT, AVG, and MIN/MAX are
crucial to applications on probabilistic databases. We give a
generalization of the classical data stream model to handle
probabilistic data, called probabilistic streams, in
order to analyze the I/O requirements of our algorithms.
Whereas the algorithms for SUM and COUNT turn out to be
simple, the problem is harder for both AVG and MIN/MAX.
Although data stream algorithms typically use randomness, all
of the algorithms we present are deterministic.
For MIN and MAX, we obtain efficient one-pass data stream
algorithms for estimating each of these quantities with
relative accuracy (1 + ε), using constant update time
per element and O((log R)/ε) space, where each element
has a value between 1 and R. For AVG, we present a new data
stream algorithm for estimating its value to a relative
accuracy (1 + ε) in O(log n) passes over the data with
O((log^{2} n)/ε) space and update time O((log
n)/ε) per element. On the other hand, we prove a space
lower bound of Ω(n) for any exact one-pass
deterministic data stream algorithm. Complementing this
result, we also present an O(n log^{2} n)-time exact
deterministic algorithm which uses O(n) space (thus removing
the data-streaming restriction), improving dramatically on the
previous O(n^{3})-time algorithm. Our algorithms for
AVG involve a novel technique based on generating functions
and numerical integration, which may be of independent
interest.
Finally, we provide an experimental analysis and show that
our algorithms, coupled with additional heuristics, have
excellent performance over large data sets. 
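To illustrate why SUM and COUNT are the easy cases, here is a minimal one-pass sketch under a simplified model in which each stream element takes a given value with a given probability, independently. The `(value, probability)` encoding and the function name are assumptions of this example, not the paper's formalism.

```python
def stream_sum_count(stream):
    """One-pass expected SUM and COUNT over a probabilistic stream.

    `stream` yields (value, probability) pairs: each element appears
    with the given probability, independently of the others. By
    linearity of expectation, the expected SUM and COUNT are simply
    probability-weighted running totals, so constant space suffices.
    """
    exp_sum = 0.0
    exp_count = 0.0
    for value, prob in stream:
        exp_sum += prob * value
        exp_count += prob
    return exp_sum, exp_count
```

Note that the expected AVG is not exp_sum / exp_count in general, since the expectation of a ratio is not the ratio of expectations; this is precisely why AVG requires the heavier machinery of generating functions described in the abstract.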

Logarithmic Regret Algorithms for Online Convex
Optimization 
2006 

With Elad
Hazan
and Amit
Agarwal. In 19^{th}
COLT 2006 (this version in collaboration with Adam
Kalai also). Invited to Machine Learning Journal special
issue for COLT 2006 (69(2-3), 169-192, 2007).


Abstract
In an online convex optimization problem a
decision-maker makes a sequence of decisions, i.e.,
chooses a sequence of points in Euclidean space, from a
fixed feasible set. After each point is chosen, it
encounters a sequence of (possibly unrelated) convex cost
functions. Zinkevich (2003) introduced this framework,
which models many natural repeated decision-making
problems and generalizes many existing problems such as
Prediction from Expert Advice and Cover's Universal
Portfolios. Zinkevich showed that a simple online gradient
descent algorithm achieves additive regret
O(√T), for an arbitrary sequence of T convex cost
functions (of bounded gradients), with respect to the best
single decision in hindsight.
In this paper, we give algorithms that achieve regret
O(log(T)) for an arbitrary sequence of strictly convex
functions (with bounded first and second derivatives).
This mirrors what has been done for the special cases of
prediction from expert advice by Kivinen and Warmuth
(1999), and Universal Portfolios by Cover (1991). We
propose several algorithms achieving logarithmic regret,
which besides being more general are also much more
efficient to implement.
The main new ideas give rise to an efficient algorithm
based on the Newton method for optimization, a new tool in
the field. Our analysis shows a surprising connection to
follow-the-leader method, and builds on the recent work of
Agarwal and Hazan (2005). We also analyze other
algorithms, which tie together several different previous
approaches including follow-the-leader, exponential
weighting, Cover's algorithm and gradient descent. 
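Zinkevich's online gradient descent, the O(√T) baseline this paper improves on for strictly convex losses, can be sketched as follows. This is an illustration only: the helper names and the 1/√t step-size schedule are standard textbook choices, not code from the paper.

```python
import numpy as np

def online_gradient_descent(grads, project, x0, eta):
    """Zinkevich-style online gradient descent (illustrative sketch).

    `grads` is a sequence of gradient callables, one per round;
    `project` maps a point back onto the fixed feasible set; `eta(t)`
    is the step size at round t (eta ~ 1/sqrt(t) gives O(sqrt(T))
    regret for convex losses with bounded gradients). Returns the
    sequence of points actually played.
    """
    x = np.asarray(x0, dtype=float)
    played = []
    for t, grad in enumerate(grads, start=1):
        played.append(x.copy())                 # commit to x, then see cost
        x = project(x - eta(t) * grad(x))       # gradient step + projection
    return played
```

For strictly convex losses, replacing the 1/√t schedule with a 1/t schedule (or the Newton-based update the paper introduces) is what drives the regret down from O(√T) to O(log T).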

Algorithms for Portfolio Management based on the Newton
Method 
2006 

With Amit
Agarwal,
Elad
Hazan, and
Robert
Schapire. In 23^{rd}
ICML, 2006.


Abstract
We experimentally study online investment algorithms
first proposed by Agarwal and Hazan and extended by Hazan
et al. which achieve almost the same wealth as the best
constant-rebalanced portfolio determined in hindsight.
These algorithms are the first to combine optimal
logarithmic regret bounds with efficient deterministic
computability. They are based on the Newton method for
offline optimization which, unlike previous approaches,
exploits second order information. After analyzing the
algorithm using the potential function introduced by
Agarwal and Hazan, we present extensive experiments on
actual financial data. These experiments confirm the
theoretical advantage of our algorithms, which yield
higher returns and run considerably faster than previous
algorithms with optimal regret. Additionally, we perform
financial analysis using mean-variance calculations and
the Sharpe ratio. 
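For reference, the hindsight benchmark these algorithms compete with is the constant-rebalanced portfolio (CRP), whose wealth is straightforward to compute. The function name and the (T, n) price-relative encoding below are choices made for this sketch, not the paper's code.

```python
import numpy as np

def crp_wealth(price_relatives, b):
    """Wealth achieved by a constant-rebalanced portfolio.

    `price_relatives` is a (T, n) array whose row t holds each asset's
    price ratio close_t / close_{t-1}; `b` is a fixed distribution over
    the n assets, rebalanced to at the start of every period. Starting
    wealth 1 grows by the factor b . x_t each period, so final wealth
    is the product of those per-period returns.
    """
    X = np.asarray(price_relatives, dtype=float)
    return float(np.prod(X @ np.asarray(b, dtype=float)))
```

The classic motivating example: with two assets whose price relatives alternate between (2, 0.5) and (0.5, 2), either single asset ends flat, while the uniform CRP earns a factor 1.25 every period.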

A Variation on SVD Based Image Compression 
2006 

With Abhiram
Ranade and Srikanth S. M. In
WCVGIP, 2006. Journal paper in Image
and Vision Computing 25(6): 771-777 (2007).


Abstract
We present a variation on the well-studied SVD-based image
compression technique. Our variation can be viewed as a
preprocessing step in which the input image is permuted as
per a fixed, data-independent permutation, after which it
is fed to the standard SVD algorithm. Likewise, our
decompression algorithm can be viewed as the standard SVD
algorithm followed by a postprocessing step which applies
the inverse permutation.
On experimenting with standard images we show that our
method performs substantially better than the standard
method. Typically, for any given compression quality, our
method needs about 30% fewer singular values and vectors to
be retained. We also present a bit allocation scheme and
show that our method also performs better than the more
familiar Discrete Cosine Transform (DCT).
We show that the original SVD algorithm, as well as our
variation, can be viewed as instances of the Karhunen-Loeve
transform (KLT). In fact, we observe that there is a
whole family of variations possible by choosing different
parameter values while applying the KLT. We present
heuristic arguments to show that our variation is likely
to yield the best compression of all these. We also
present experimental evidence which appears to justify our
analysis. 
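The pre/post-processing idea can be sketched in a few lines. This is an illustration only: `perm` stands in for any fixed, data-independent permutation of the flattened pixel indices, and the function names are invented for this sketch.

```python
import numpy as np

def svd_compress(img, k):
    """Rank-k SVD approximation of a 2-D image array."""
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]

def permuted_svd_compress(img, k, perm):
    """Permute pixels by a fixed, data-independent permutation,
    compress with the standard rank-k SVD, then apply the inverse
    permutation on decompression (a sketch of the pre/post-processing
    step described in the abstract)."""
    flat = img.ravel()[perm]                       # pre-process: permute
    approx = svd_compress(flat.reshape(img.shape), k).ravel()
    out = np.empty_like(approx)
    out[perm] = approx                             # post-process: invert
    return out.reshape(img.shape)
```

Since the permutation is fixed in advance, it costs nothing to transmit; only the retained singular values and vectors of the permuted image need to be stored.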

Fast Algorithms for Approximate Semidefinite Programming
using the Multiplicative Weights Update method 
2005 

With Sanjeev
Arora and Elad
Hazan. In
46^{th} FOCS, 339-348, 2005.


Abstract
Semidefinite programming (SDP) relaxations appear in
many recent approximation algorithms but the only general
technique for solving such SDP relaxations is via interior
point methods. We use a Lagrangian-relaxation based
technique (modified from the papers of Plotkin, Shmoys,
and Tardos (PST), and Klein and Lu) to derive faster
algorithms for approximately solving several families of
SDP relaxations. The algorithms are based upon
improvements to the PST ideas that yield new results
even within their framework, as well as improvements in
approximate eigenvalue computation via random
sampling. 

The Multiplicative Weights Update Method: A
Meta-Algorithm and Applications 
2012 

With Sanjeev
Arora and Elad
Hazan. In Theory of Computing Volume 8, pages 121164.


Abstract
Algorithms in varied fields use the idea of maintaining a distribution over a certain set and use the multiplicative update rule to iteratively change these weights. Their analyses are usually very similar and rely on an exponential potential function.
In this survey we present a simple meta-algorithm that unifies many of these disparate algorithms and derives them as simple instantiations of the meta-algorithm. We feel that since this meta-algorithm and its analysis are so simple, and its applications so broad, it should be a standard part of algorithms courses, like "divide and conquer". 
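The meta-algorithm itself fits in a few lines. The following is a minimal sketch with costs in [-1, 1]; the function name and the fixed learning rate eta are choices made here, not notation from the survey.

```python
import numpy as np

def multiplicative_weights(cost_rounds, n, eta=0.1):
    """The Multiplicative Weights update rule (illustrative sketch).

    Maintain one weight per expert, initially 1. Each round, play the
    normalized weight vector as a distribution, observe a cost vector
    in [-1, 1]^n, and shrink each weight by the multiplicative factor
    (1 - eta * cost_i). Returns the distribution played each round.
    """
    w = np.ones(n)
    dists = []
    for costs in cost_rounds:
        p = w / w.sum()                               # play this distribution
        dists.append(p)
        w = w * (1.0 - eta * np.asarray(costs, dtype=float))
    return dists
```

Experts that repeatedly incur high cost decay exponentially, so the distribution concentrates on the best experts; the standard analysis bounds the algorithm's total cost by the best expert's cost plus O(log(n)/eta + eta*T).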

Analysis and Algorithms for Content-based Event Matching 
2005 

With
Elad Hazan,
Fengyun Cao and
Jaswinder
Pal Singh. In 4^{th}
DEBS, 2005.


Abstract
Content-based event matching is an important
problem in large-scale event-based publish/subscribe
systems. However, open questions remain in the analysis
of its difficulty and the evaluation of its solutions. This
paper makes a few contributions toward the analysis,
evaluation and development of matching algorithms.
First, based on a simplified yet generic model, we give
a formal proof of the hardness of the matching problem by
showing its equivalence to the notoriously hard Partial
Match problem. Second, we compare two major
existing matching approaches and show that counting-based
algorithms are likely to be more computationally
expensive than tree-based algorithms in this model.
Third, we observe an important, prevalent
characteristic of real-world publish/subscribe events,
and develop a new matching algorithm called
RAPIDMatch to exploit it. Finally, we propose a new
metric for evaluation of matching algorithms. We
analyze and evaluate RAPIDMatch using both the
traditional and new metrics proposed. Results show
that RAPIDMatch achieves large performance
improvement over the tree-based algorithm under
certain publish/subscribe scenarios. 

O(√log n) approximation to SPARSEST CUT
in Õ(n²) time 
2004 

With Sanjeev
Arora and Elad
Hazan. In
45^{th} FOCS, 238247, 2004.


Abstract
We show how to compute O(√log
n)-approximations to the SPARSEST CUT and BALANCED
SEPARATOR problems in Õ(n²) time,
thus improving upon the recent algorithm of Arora, Rao and
Vazirani (2004). Their algorithm uses semidefinite programming
and requires Õ(n^{4.5}) time. Our
algorithm relies on efficiently finding expander flows in the
graph and does not solve semidefinite programs. The existence
of expander flows was also established by Arora, Rao, and
Vazirani. 

Approximating Quadratic Programs with Positive Semidefinite
Constraints 
2004 

With Elad
Hazan.
Princeton Tech Report TR-746-06.


Abstract
We describe a polynomial time approximation algorithm to the
problem of maximizing a quadratic form subject to quadratic
constraints specified by PSD matrices. A special case, that has
applications for clustering, is optimizing quadratic forms over
the unit cube.
Approximation algorithms with similar guarantees are known, and
there is evidence that the approximation factor is optimal. Our
analysis, however, is particularly simple. 


