Di Wang's Homepage
Chinese: 王帝
PhD candidate
Department of Computer Science and Engineering
State University of New York at Buffalo
Email:
dwang45 "at" buffalo.edu
I am a sixth-year (final-year) PhD student in the
Department of Computer Science and Engineering at
the State University of New York (SUNY) at Buffalo, under the supervision of Dr. Jinhui Xu. Before that, I received
my Master's degree in Mathematics from the
University of Western Ontario in 2015, and my Bachelor's degree in Mathematics and Applied
Mathematics from Shandong University in 2014.
My most recent resume (last updated in June 2020) can be found here.
Dissertation: Some Fundamental Machine Learning Problems in the Differential Privacy Model.
I will be joining the Division of Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) at King Abdullah University of Science and Technology (KAUST) as an Assistant Professor, where I will direct the Theoretical, Responsible and trUSTworthy Computing (TRUST) Laboratory.
Current Openings: I am looking for 1 postdoc, 3-4 PhD students, and several interns and visiting students (all fully funded). If you are interested
in working with me, feel free to send me your CV and transcripts.
Private Data Analytics: Differential privacy, privacy-preserving machine learning, privacy-preserving data mining, privacy attacks in machine learning
Trustworthy Machine Learning: Robust statistics/estimation, interpretable machine learning, security in machine learning, adversarial machine learning, fairness in machine learning, and other trustworthiness issues
Statistical Learning Theory: Quantum machine learning, large-scale optimization, high-dimensional optimization, statistical estimation, learning theory, compressed sensing
Machine Learning: Data-driven machine learning
Healthcare: Trustworthiness issues in digital healthcare, biomedical imaging, and bioinformatics

Statistical Guarantees of Differentially Private (Gradient) Expectation Maximization Algorithm. Abstract:
As a popular technique for maximum likelihood estimation in mixture models and incomplete-data problems, the (Gradient) Expectation Maximization (EM) algorithm presents a challenge for preserving the privacy of sensitive data. Although there are already some Differentially Private (DP) variants of the (Gradient) EM algorithm, unlike in the non-private case, none of them comes with finite-sample statistical guarantees. To address this issue, in this paper we propose the first DP variant of the (Gradient) EM algorithm with statistical guarantees. Moreover, we apply our general framework to three canonical models: Gaussian Mixture Model (GMM), Mixture of Regressions Model (MRM), and Linear Regression with Missing Covariates (RMC). Specifically, for GMM in the DP model, our estimation error is near optimal in some cases; for the other two models, we provide the first finite-sample statistical guarantees. Our theory is supported by thorough numerical results.
Di Wang*, Jiahao Ding*, Zejun Xie, Miao Pan and Jinhui Xu (* equal contribution).
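For intuition, here is a minimal sketch of how one (Gradient) EM update can be privatized: clip each sample's gradient of the E-step surrogate, average, and perturb the mean with Gaussian noise. This is generic background under standard Gaussian-mechanism calibration, not the exact algorithm analyzed in the paper; `grad_q`, `clip`, and the noise multiplier `sigma` are illustrative placeholders.

```python
import numpy as np

def dp_gradient_em_step(theta, data, grad_q, eta, clip, sigma, rng):
    """One noisy gradient EM update (illustrative sketch).

    grad_q(theta, x) stands in for the per-sample gradient of the
    E-step surrogate Q(.; theta), e.g. posterior-weighted gradients
    for a Gaussian mixture model.
    """
    grads = np.stack([grad_q(theta, x) for x in data])
    # Clip per-sample gradients so the mean has bounded L2 sensitivity.
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
    # Gaussian mechanism: replacing one record moves the mean by <= 2*clip/n.
    n = len(data)
    noise = rng.normal(0.0, sigma * 2 * clip / n, size=theta.shape)
    return theta + eta * (grads.mean(axis=0) + noise)
```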

Global Interpretation for Pairwise Learning. Abstract:
As an important family of learning problems, pairwise learning has received much attention in recent years. Since pairwise learning involves pairs of instances in its loss function, it is more capable of modeling the relative relationships between instances than traditional pointwise learning (e.g., classification). In practice, many machine learning and data mining tasks can be categorized as pairwise learning. Although pairwise learning has achieved tremendous success in many real-world applications, the lack of transparency behind the behavior of the learned pairwise model impedes users from trusting the predicted results, which hampers its further applications in the real world. To tackle this problem, in this paper we investigate how to enable interpretation in pairwise learning and propose a global interpretation method. Based on the proposed method, we can identify a minimal subset of data features that is sufficient by itself to justify the global predictions made by the pairwise model. The identified minimal sufficient feature subset can help us better understand the overall behavior of the learned pairwise model across different subpopulations of instance pairs. To the best of our knowledge, this is the first work that provides global interpretation for pairwise learning. We also conduct extensive experiments on real-world datasets to evaluate the performance of the proposed method.
Mengdi Huai, Di Wang, Jiayi Chen, Jinduo Liu and Aidong Zhang.
Towards Assessment of Randomized Mechanisms for Certifying Adversarial Robustness. Abstract:
As a certified defensive technique, randomized smoothing has received considerable attention due to its scalability to large datasets and neural networks. However, several important questions remain unanswered, such as (i) whether the Gaussian mechanism is an appropriate option for certifying $\ell_2$-norm robustness, and (ii) whether there is an appropriate randomized mechanism to certify $\ell_\infty$-norm robustness for high-dimensional datasets. To shed light on these questions, the main difficulty is how to assess each randomized mechanism. In this paper, we propose a generic framework, which connects the existing frameworks of (Lecuyer et al., 2018) and (Li et al., 2019), to assess randomized mechanisms. Under our framework, for a mechanism that can certify a certain extent of robustness, we define the magnitude ({\em i.e.,} the expected $\ell_\infty$-norm) of the randomized noise it adds as the metric for assessing its appropriateness. We also derive lower bounds on this metric for the $\ell_2$-norm and $\ell_\infty$-norm cases as the criteria for assessment. Based on our framework, we assess the Gaussian and Exponential mechanisms by comparing the magnitude of the noise they add with the corresponding criteria. We first conclude that the Gaussian mechanism is indeed an appropriate option for certifying $\ell_2$-norm robustness. Somewhat surprisingly, we also show that the Gaussian mechanism, rather than the Exponential mechanism, is an appropriate option for certifying $\ell_\infty$-norm robustness as well. Finally, we verify our theoretical results with evaluations on CIFAR-10 and ImageNet.
Tianhang Zheng*, Di Wang*, Baochun Li and Jinhui Xu (* equal contribution).
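For background on what a Gaussian-mechanism certificate computes, the sketch below estimates the standard $\ell_2$ certified radius of randomized smoothing, $R = \sigma\,\Phi^{-1}(p_A)$ (Cohen et al., 2019), by plain Monte Carlo. It is not the assessment framework of the paper, and a real certifier would replace `p_a` with a high-confidence lower bound.

```python
import numpy as np
from scipy.stats import norm

def smoothed_prediction_and_radius(classify, x, sigma, n_samples, rng):
    """Monte-Carlo sketch of Gaussian randomized smoothing.

    classify(z) -> integer class label of the base classifier.
    Returns the majority class under Gaussian noise and the certified
    L2 radius sigma * Phi^{-1}(p_A) (0 if there is no majority class).
    """
    noisy = x + rng.normal(0.0, sigma, size=(n_samples,) + x.shape)
    preds = np.array([classify(z) for z in noisy])
    top = int(np.bincount(preds).argmax())
    p_a = float((preds == top).mean())
    radius = sigma * norm.ppf(p_a) if p_a > 0.5 else 0.0
    return top, radius
```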
Estimating Smooth GLM in the Non-interactive Local Differential Privacy Model with Public Unlabeled Data. Abstract:
In this paper, we study the problem of estimating smooth Generalized Linear Models (GLMs) in the Non-interactive Local Differential Privacy (NLDP) model. Unlike its classical setting, our model allows the server to access some additional public but unlabeled data. We first show that there is an $(\epsilon, \delta)$-NLDP algorithm for GLMs (under some mild assumptions) if each data record is i.i.d. sampled from some sub-Gaussian distribution with bounded $\ell_1$-norm. The sample complexity of both public and private data, for the algorithm to achieve an $\alpha$ estimation error (in $\ell_\infty$-norm), is $\tilde{O}(p^2\alpha^{-2}\epsilon^{-2})$ if $\alpha$ is not too small (i.e., $\alpha\geq \Omega(\frac{1}{\sqrt{p}})$), where $p$ is the dimensionality of the data. This is a significant improvement over the previously known quasi-polynomial (in $\alpha$) or exponential (in $p$) complexity for convex GLMs with no public data. We then extend our idea to the non-linear regression problem and show a similar phenomenon. Finally, we demonstrate the practicality of our algorithms through experiments on both synthetic and real-world datasets. To the best of our knowledge, this is the first paper showing the existence of efficient and practical algorithms for GLMs and non-linear regression in the NLDP model with public unlabeled data.
Di Wang*, Huanyu Zhang*, Marco Gaboardi and Jinhui Xu (* equal contribution).
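As background, the one-shot local randomizer below is the kind of primitive a non-interactive LDP protocol is built from: each user clips a scalar local statistic and adds Gaussian noise once, with no further interaction with the server. It is a generic sketch with textbook calibration, not the estimator developed in the paper.

```python
import numpy as np

def nldp_release(stat, bound, epsilon, delta, rng):
    """Privatize one bounded scalar statistic under (epsilon, delta)-LDP.

    Clipping to [-bound, bound] gives sensitivity 2*bound; the noise
    scale follows the standard Gaussian-mechanism calibration.
    """
    sigma = 2 * bound * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    return float(np.clip(stat, -bound, bound)) + rng.normal(0.0, sigma)
```

The server then simply averages the $n$ noisy releases, so the injected noise shrinks at rate $1/\sqrt{n}$.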
Inferring Ground Truth From Crowdsourced Data Under Local Attribute Differential Privacy. Abstract:
Recently, the problem of ground truth inference under the local differential privacy (LDP) model has been studied. However, this problem is still not well understood, and even some basic questions remain open. First, it is still unknown what the average error of the private estimators with respect to the underlying ground truth is. Second, we do not know whether we can infer the ability of each user under the LDP model, and what the estimation error w.r.t. the underlying users' abilities is. Finally, previous work only shows through experiments that their methods outperform the private majority voting algorithm; there is still no theoretical result that establishes this superiority formally. In this paper, we partially address these problems by studying the ground truth inference problem under the local attribute differential privacy (LADP) model, and propose a new algorithm, called the private Dawid-Skene method, which is motivated by the classical Dawid-Skene method. Specifically, we first provide estimation errors for both the users' abilities and the ground truth, under some assumptions on the problem, when the algorithm starts from an appropriate initial vector. Moreover, we construct an explicit instance and show that the estimation error of the ground truth achieved by the private majority voting algorithm is always greater than the error achieved by our method. To the best of our knowledge, this is the first result giving explicit estimation errors for both the users' abilities and the ground truth for this problem, and the first to theoretically compare against the private majority voting algorithm.
Di Wang and Jinhui Xu.
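For readers unfamiliar with local privatization of categorical answers, here is the standard $k$-ary randomized response mechanism, a common way a worker's label can be reported under $\epsilon$-LDP. It is illustrative background rather than the exact mechanism of the LADP model in the paper.

```python
import numpy as np

def randomized_response(label, k, epsilon, rng):
    """k-ary randomized response: report the true label in {0,...,k-1}
    with probability e^eps / (e^eps + k - 1), otherwise a uniformly
    random *other* label."""
    p_true = np.exp(epsilon) / (np.exp(epsilon) + k - 1)
    if rng.random() < p_true:
        return label
    other = int(rng.integers(k - 1))   # uniform over the k-1 other labels
    return other if other < label else other + 1
```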
On Sparse Linear Regression in the Local Differential Privacy Model. Abstract:
In this paper, we study the sparse linear regression problem in the Local Differential Privacy (LDP) model. We first show that a polynomial dependency on the dimensionality $p$ of the space is unavoidable for the estimation error in both the non-interactive and the sequentially interactive local models if the privacy of the whole dataset needs to be preserved. Similar limitations also exist for other types of error measurements and in the relaxed local models. This indicates that differential privacy in high-dimensional space is unlikely achievable for the problem. With this limitation understood, we then present two algorithmic results. The first is a sequentially interactive LDP algorithm for the low-dimensional sparse case, called Locally Differentially Private Iterative Hard Thresholding (LDP-IHT), which achieves a near-optimal upper bound. This algorithm is actually rather general and can be used to solve quite a few other problems, such as (local) DP-ERM with sparsity constraints and sparse regression with non-linear measurements. The second is for the restricted (high-dimensional) case, where only the privacy of the responses (labels) needs to be preserved. For this case, we show that the optimal rate of the estimation error can be made to depend only logarithmically on $p$ (i.e., $\log p$) in the local model, where an upper bound is obtained by a label-private version of LDP-IHT. Experiments on real-world and synthetic datasets confirm our theoretical analysis.
Di Wang and Jinhui Xu.
Minor Revision at IEEE Transactions on Information Theory.
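A single iteration in the spirit of LDP-IHT can be sketched as follows: users send noisily perturbed gradients of their squared losses, and the server averages them, takes a gradient step, and hard-thresholds to the $s$ largest-magnitude coordinates. This is a simplified illustration; the clipping, noise calibration, and step-size choices of the paper's algorithm are elided.

```python
import numpy as np

def ldp_iht_step(theta, X, y, s, eta, sigma, rng):
    """One sketched LDP-IHT iteration for sparse linear regression.

    X has one row per user; each row's noisy gradient is what that
    user would send (bounding steps are omitted for brevity).
    """
    grads = (X @ theta - y)[:, None] * X                 # shape (n, p)
    noisy = grads + rng.normal(0.0, sigma, size=grads.shape)
    theta = theta - eta * noisy.mean(axis=0)
    # Hard thresholding: zero out everything but the top-s coordinates.
    small = np.argsort(np.abs(theta))[:-s]
    theta[small] = 0.0
    return theta
```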
Empirical Risk Minimization in the Non-interactive Local Model of Differential Privacy. Abstract:
In this paper, we study the Empirical Risk Minimization (ERM) problem in the non-interactive Local Differential Privacy (LDP) model. We first show that if the loss function is $(\infty, T)$-smooth, then, by using Bernstein polynomial approximation, we can avoid a dependency of the sample complexity, to achieve error $\alpha$, on the exponential of the dimensionality $p$ with base $1/\alpha$ (i.e., $\alpha^{-p}$). This answers a question from (Smith et al., 2017). Then, we propose player-efficient algorithms with $1$-bit communication complexity and $O(1)$ computation cost for each player; the error bound of these algorithms is asymptotically the same as the original one. With some additional assumptions, we also give an algorithm that is more efficient for the server. Based on different types of polynomial approximations, we propose (efficient) non-interactive locally differentially private algorithms for learning the set of $k$-way marginal queries and the set of smooth queries. Moreover, we study the case of $1$-Lipschitz generalized linear convex loss functions and show that there is an $(\epsilon, \delta)$-LDP algorithm whose sample complexity for achieving error $\alpha$ is only linear in the dimensionality $p$ and quasi-polynomial in other terms. To prove this, we first show that the conclusion holds for the hinge loss function, and then extend the result to any $1$-Lipschitz generalized linear convex loss function by showing that every such function can be approximated by a linear combination of hinge loss functions and some linear functions. Our results use a polynomial-of-inner-product approximation technique. Applying this technique to the Euclidean median problem, we show that its sample complexity needs only to be quasi-polynomial in $p$, which is the first result with a sub-exponential sample complexity in $p$ for non-generalized-linear loss functions.
Di Wang, Marco Gaboardi, Adam Smith and Jinhui Xu.
Minor Revision at Journal of Machine Learning Research.
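The Bernstein approximation that drives the low-dimensional result is easy to state in one dimension: a degree-$T$ Bernstein polynomial reconstructs a smooth function from its values on a grid. The sketch below is that textbook operator; the paper works with a multivariate version whose grid evaluations are released under LDP noise.

```python
from math import comb

def bernstein_approx(f, T, x):
    """Degree-T Bernstein polynomial of f on [0, 1]:
    B_T(f)(x) = sum_k f(k/T) * C(T, k) * x^k * (1-x)^(T-k)."""
    return sum(f(k / T) * comb(T, k) * x**k * (1 - x)**(T - k)
               for k in range(T + 1))

# For a suitably smooth f, B_T(f) converges quickly, so even noisy
# versions of the grid values f(k/T) still yield a good approximation.
```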
On Differentially Private Stochastic Convex Optimization with Heavy-tailed Data. Abstract:
In this paper, we consider the problem of designing Differentially Private (DP) algorithms for Stochastic Convex Optimization (SCO) on heavy-tailed data. The irregularity of such data violates some key assumptions used in almost all existing DP-SCO and DP-ERM methods, resulting in failure to provide the DP guarantees. To better understand these challenges, we provide in this paper a comprehensive study of DP-SCO under various settings. First, we consider the case where the loss function is strongly convex and smooth. For this case, we propose a method based on the sample-and-aggregate framework, which has an excess population risk of $\tilde{O}(\frac{d^3}{n\epsilon^4})$ (after omitting other factors), where $n$ is the sample size and $d$ is the dimensionality of the data. Then, we show that with some additional assumptions on the loss functions, it is possible to reduce the \textit{expected} excess population risk to $\tilde{O}(\frac{ d^2}{ n\epsilon^2 })$. To lift these additional conditions, we also provide a gradient smoothing and trimming based scheme that achieves excess population risks of $\tilde{O}(\frac{ d^2}{n\epsilon^2})$ and $\tilde{O}(\frac{d^\frac{2}{3}}{(n\epsilon^2)^\frac{1}{3}})$ for strongly convex and general convex loss functions, respectively, \textit{with high probability}. Experiments on both synthetic and real-world datasets suggest that our algorithms can effectively deal with the challenges caused by data irregularity.
Di Wang*, Hanshen Xiao*, Srini Devadas and Jinhui Xu (* equal contribution).
The 37th International Conference on Machine Learning (ICML 2020).
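The trimming component can be sketched in a few lines: per-sample gradients are shrunk coordinate-wise into a bounded range so their average has bounded sensitivity, after which standard Gaussian-mechanism calibration applies. The threshold `clip` trades bias against noise; this is a simplified illustration, not the paper's exact estimator or constants.

```python
import numpy as np

def trimmed_private_gradient(grads, clip, epsilon, delta, rng):
    """Privatized mean of heavy-tailed per-sample gradients (sketch).

    grads: array of shape (n, d). Coordinate-wise trimming bounds the
    L2 sensitivity of the mean by 2 * clip * sqrt(d) / n.
    """
    n, d = grads.shape
    mean = np.clip(grads, -clip, clip).mean(axis=0)
    sens = 2 * clip * np.sqrt(d) / n
    sigma = sens * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    return mean + rng.normal(0.0, sigma, size=d)
```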
Facility Location Problem in the Differential Privacy Model Revisited. [Link] Abstract:
In this paper, we study the uncapacitated facility location problem in the model of differential privacy (DP) with uniform facility cost. Specifically, we first show that, under the \emph{hierarchically well-separated tree (HST) metrics} and the superset output setting introduced in (Gupta et al. 2010), there is an $\epsilon$-DP algorithm that achieves an $O(\frac{1}{\epsilon})$ (expected multiplicative) approximation ratio; this implies an $O(\frac{\log n}{\epsilon})$ approximation ratio for the general metric case, where $n$ is the size of the input metric. These bounds improve the best-known results of (Gupta et al. 2010). In particular, our approximation ratio for HST metrics is independent of $n$, and the ratio for general metrics is independent of the aspect ratio of the input metric. On the negative side, we show that the approximation ratio of any $\epsilon$-DP algorithm is lower bounded by $\Omega(\frac{1}{\sqrt{\epsilon}})$, even for instances on HST metrics with uniform facility cost, under the superset output setting. This lower bound shows that the dependence of the approximation ratio for HST metrics on $\epsilon$ cannot be removed or greatly improved. Our novel methods and techniques for both the upper and lower bounds may find additional applications.
[alphabetical order] Yunus Esencayi, Marco Gaboardi, Shi Li and Di Wang.
Conference on Neural Information Processing Systems (NIPS/NeurIPS), 2019.
Differentially Private Empirical Risk Minimization with Non-convex Loss Functions. [Link] Abstract:
We study the problem of Empirical Risk Minimization (ERM) with (smooth) non-convex loss functions under the differential privacy (DP) model. Existing approaches for this problem mainly adopt gradient norms to measure the error, which in general cannot guarantee the quality of the solution. To address this issue, we first study the expected excess empirical (or population) risk, which has primarily been used as the utility measure for convex loss functions. Specifically, we show that the excess empirical (or population) risk can be upper bounded by $\tilde{O}(\frac{d\log (1/\delta)}{\log n\,\epsilon^2})$ in the $(\epsilon, \delta)$-DP setting, where $n$ is the data size and $d$ is the dimensionality of the space. The $\frac{1}{\log n}$ term in the empirical risk bound can be further improved to $\frac{1}{n^{\Omega(1)}}$ (when $d$ is a constant) by a highly non-trivial analysis of the time-average error. To obtain more efficient solutions, we also consider the connection between achieving differential privacy and finding an approximate local minimum. In particular, we show that when the sample size $n$ is large enough, there are $(\epsilon, \delta)$-DP algorithms that can find an approximate local minimum of the empirical risk with high probability, in both the constrained and unconstrained settings. These results indicate that one can escape saddle points privately.
Di Wang, Changyou Chen and Jinhui Xu.
The 36th International Conference on Machine Learning (ICML 2019).
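The saddle-point claim has a simple mechanical reading: the per-iteration Gaussian noise that yields $(\epsilon, \delta)$-DP (via composition) is itself an isotropic perturbation of the kind used by perturbed gradient methods to escape strict saddle points. The sketch below shows this shape only; the privacy accounting and parameter choices from the paper are omitted.

```python
import numpy as np

def noisy_gradient_descent(theta, per_sample_grad, data, steps, eta,
                           clip, sigma, rng):
    """DP-style noisy gradient descent (sketch): clip, average, perturb.

    The added Gaussian noise serves privacy and doubles as the random
    perturbation that prevents convergence to strict saddle points.
    """
    for _ in range(steps):
        g = np.stack([per_sample_grad(theta, z) for z in data])
        norms = np.linalg.norm(g, axis=1, keepdims=True)
        g = g * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
        noise = rng.normal(0.0, sigma * clip / len(data), size=theta.shape)
        theta = theta - eta * (g.mean(axis=0) + noise)
    return theta
```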
On Sparse Linear Regression in the Local Differential Privacy Model. [Link] Abstract:
In this paper, we study the sparse linear regression problem under the Local Differential Privacy (LDP) model. We first show that a polynomial dependency on the dimensionality $p$ of the space is unavoidable for the estimation error in both the non-interactive and the sequentially interactive local models if the privacy of the whole dataset needs to be preserved. Similar limitations also exist for other types of error measurements and in the relaxed local models. This indicates that differential privacy in high-dimensional space is unlikely achievable for the problem. With this limitation understood, we then present two algorithmic results. The first is a sequentially interactive LDP algorithm for the low-dimensional sparse case, called Locally Differentially Private Iterative Hard Thresholding (LDP-IHT), which achieves a near-optimal upper bound. This algorithm is actually rather general and can be used to solve quite a few other problems, such as (local) DP-ERM with sparsity constraints and sparse regression with non-linear measurements. The second is for the restricted (high-dimensional) case, where only the privacy of the responses (labels) needs to be preserved. For this case, we show that the optimal rate of the estimation error can be made to depend only logarithmically on $p$ (i.e., $\log p$) in the local model, where an upper bound is obtained by a label-private version of LDP-IHT. Experiments on real-world and synthetic datasets confirm our theoretical analysis.
Di Wang and Jinhui Xu.
The 36th International Conference on Machine Learning (ICML 2019).
Selected as a Long Talk (acceptance rate: 140/3424 ≈ 4.1%).
Non-interactive Locally Private Learning of Linear Models via Polynomial Approximations. [Link] Abstract:
In this paper, we study the Empirical Risk Minimization problem in the non-interactive Local Differential Privacy (LDP) model. First, we show that for the hinge loss function there is an $(\epsilon, \delta)$-LDP algorithm whose sample complexity for achieving an error of $\alpha$ is only linear in the dimensionality $p$ and quasi-polynomial in other terms. Then, we extend the result to any $1$-Lipschitz generalized linear convex loss function by showing that every such function can be approximated by a linear combination of hinge loss functions and some linear functions. Finally, we apply our technique to the Euclidean median problem and show that its sample complexity needs only to be quasi-polynomial in $p$, which is the first result with a sub-exponential sample complexity in $p$ for non-generalized-linear loss functions. Our results are based on a technique called polynomial-of-inner-product approximation, which may be applicable to other problems.
Di Wang, Adam Smith and Jinhui Xu.
The 30th International Conference on Algorithmic Learning Theory (ALT 2019).
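The decomposition step has a concrete one-dimensional analogue: the piecewise-linear interpolant of any convex function is an affine term plus a nonnegative combination of hinge functions $\max(0, x - t_j)$, with weights given by slope increments. The toy code below computes this decomposition; it illustrates the idea only, not the paper's quantitative approximation bounds.

```python
def hinge_decomposition(f, knots):
    """Decompose the piecewise-linear interpolant of a convex f on a
    sorted knot grid as f(t0) + s0*(x - t0) + sum_j w_j * max(0, x - t_j),
    where the weights w_j >= 0 are slope increments at interior knots."""
    vals = [f(t) for t in knots]
    slopes = [(vals[i + 1] - vals[i]) / (knots[i + 1] - knots[i])
              for i in range(len(knots) - 1)]
    weights = [slopes[i + 1] - slopes[i] for i in range(len(slopes) - 1)]
    return vals[0], slopes[0], list(zip(knots[1:-1], weights))

def evaluate_decomposition(knots, v0, s0, hinges, x):
    """Evaluate the decomposition returned above at a point x."""
    return v0 + s0 * (x - knots[0]) + sum(w * max(0.0, x - t)
                                          for t, w in hinges)
```

For example, $f(x) = x^2$ on knots $[0, 0.5, 1]$ yields the affine part $0 + 0.5\,x$ plus a single hinge of weight $1.0$ at $t = 0.5$, which reproduces $f$ exactly at all three knots.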
Empirical Risk Minimization in Non-interactive Local Differential Privacy Revisited. [Link] Abstract:
In this paper, we revisit the Empirical Risk Minimization problem in the non-interactive local model of differential privacy. In the case of constant or low dimension ($p\ll n$), we first show that if the loss function is $(\infty, T)$-smooth, we can avoid a dependence of the sample complexity, to achieve error $\alpha$, on the exponential of the dimensionality $p$ with base $1/\alpha$ (i.e., $\alpha^{-p}$), which answers a question in (Smith et al., 2017). Our approach is based on polynomial approximation. Then, we propose player-efficient algorithms with $1$-bit communication complexity and $O(1)$ computation cost for each player; the error bound is asymptotically the same as the original one. With some additional assumptions, we also give an efficient algorithm for the server. In the case of high dimension ($n\ll p$), we show that if the loss function is a convex generalized linear function, the error can be bounded in terms of the Gaussian width of the constrained set instead of $p$, which improves the bound in (Smith et al., 2017). Our techniques can be extended to related problems, such as $k$-way marginal queries and smooth queries.
Di Wang, Marco Gaboardi and Jinhui Xu.
Conference on Neural Information Processing Systems (NIPS/NeurIPS), 2018.
Differentially Private Empirical Risk Minimization Revisited: Faster and More General. [Link] Abstract:
In this paper, we study the differentially private Empirical Risk Minimization (ERM) problem in different settings. For smooth (strongly) convex loss functions with or without (non-)smooth regularization, we give algorithms that achieve either optimal or near-optimal utility bounds with less gradient complexity than previous work. For ERM with a smooth convex loss function in the high-dimensional ($p\gg n$) setting, we give an algorithm that achieves the upper bound with less gradient complexity than previous ones. Finally, we generalize the expected excess empirical risk from convex loss functions to non-convex ones satisfying the Polyak-Łojasiewicz condition and give a tighter upper bound on the utility than the one in (Zhang et al., 2017).
Di Wang, Minwei Ye and Jinhui Xu.
Conference on Neural Information Processing Systems (NIPS/NeurIPS), 2017.

Instructor
 CSE 474/574: Introduction to Machine Learning, Summer 2019 @SUNY at Buffalo.
 Teaching Assistant:
 CSE 474/574 Introduction to Machine Learning, Spring 2018 @SUNY at Buffalo.
 CSE 431/531 Analysis of Algorithms, Fall 2017, Spring 2017, Fall 2016, Spring 2016 @SUNY at Buffalo.
 CSE 115 Introduction to Computer Science for Majors I, Fall 2015 @SUNY at Buffalo.
 MATH 1229A Methods of Matrix Algebra, Summer 2015, Spring 2015 @ UWO.
 MATH 1225B Methods of Calculus, Fall 2014 @ UWO.
 Program Committee
 WACV 2020
 ECML-PKDD 2020
 IJCAI-PRICAI 2020
 IEEE Symposium on Security and Privacy 2020 (Shadow PC)
 AAAI 2020
 Reviewer
ACCV 2020, NeurIPS 2020, STOC 2020, TAMC 2020, SoCG 2020, ECCV 2020, CVPR 2020, NeurIPS 2019, ICDCS 2019, ICCV 2019, CVPR 2019, ICML 2019, AISTATS 2019, KDD 2018, AAAI 2018, CompIMAGE 2018, IWCIA 2017
Patterns, Information Sciences, Neurocomputing, IEEE Transactions on Big Data, ACM Computing Surveys, IEEE Transactions on Information Forensics and Security, IEEE Transactions on Pattern Analysis and Machine Intelligence, Theoretical Computer Science, Information Processing Letters
 School of Computing and Information Systems, University of Melbourne
 Department of Computer Science and Engineering, Chinese University of Hong Kong
 Department of Computer Science, Dalhousie University
 CISPA Helmholtz Center for Information Security
 Department of Computing, Hong Kong Polytechnic University
 Department of Computer Science, University of Memphis
 School of Computer Science, University of Sydney
 Department of Computing, Imperial College London
 Department of Computer Science, University College London
 King Abdullah University of Science and Technology
 Department of Computing and Software, McMaster University
 School of Computer Science, University of Birmingham
 Department of Computer Science, University of Warwick
 Department of Computer Science, City University of Hong Kong
 School of Information Systems, Singapore Management University
 Department of Computer Science and Engineering, Hong Kong University of Science and Technology
 Department of Computer Science, University of Surrey
 Department of Computer Science, McGill University
 School of Computer Science, University of Science and Technology of China
 School of Computer Science, Nanjing University
 Department of Computer Science, University of Alberta
 SEAS Dean’s Graduate Achievement Award in 2019, SUNY at Buffalo
 Best CSE Graduate Research Award in 2018, SUNY at Buffalo
 ICML Travel Award, 2019
 NIPS Travel Award, 2019, 2018, 2017
 Western Graduate Research Scholarship, Western University, 2014-2015
 Algebraic Geometry Summer School Scholarship, ECNU, Shanghai, 2013