Today, we have a guest post by Sivaraman Balakrishnan. Siva is a graduate student in the School of Computer Science. The topic of his post is an algorithm for clustering. The algorithm finds the connected components of the level sets of an estimate of the density. The algorithm, due to Kamalika Chaudhuri and Sanjoy Dasgupta, is very simple and comes armed with strong theoretical guarantees.
Aside: Siva’s post mentions John Hartigan. John is a living legend in the field of statistics. He’s done fundamental work on clustering, Bayesian inference, large sample theory, subsampling and many other topics. It’s well worth doing a Google search and reading some of his papers.
Before getting to Siva’s post, here is a picture of a density and some of its level set clusters. The algorithm Siva will describe finds the clusters at all levels (which then form a tree).
THE DENSITY CLUSTER TREE
by
SIVARAMAN BALAKRISHNAN
1. Introduction
Clustering is widely considered challenging, both practically and theoretically. One of the main reasons is that the true goals of clustering are often unclear, and this makes clustering seem poorly defined.
One of the most concrete and intuitive ways to define clusters when data are drawn from a density $f$ is on the basis of level sets of $f$, i.e. for any $\lambda > 0$ the connected components of $\{x : f(x) \geq \lambda\}$ form the clusters at level $\lambda$. This leaves the question of how to select the "correct" $\lambda$, and typically we simply sweep over $\lambda$ and present what is called the density cluster tree.
However, we usually do not have access to $f$ and would like to estimate the cluster tree of $f$ given samples $X_1, \ldots, X_n$ drawn from $f$. Recently, Kamalika Chaudhuri and Sanjoy Dasgupta (henceforth CD) presented a simple estimator for the cluster tree in a really nice paper: Rates of convergence for the cluster tree (NIPS, 2010), and showed that it is consistent in a certain sense.
This post is about the notion of consistency, the CD estimator, and its analysis.
2. Evaluating an estimator: Hartigan’s consistency
The first notion of consistency for an estimated cluster tree was introduced by J.A. Hartigan in his paper: Consistency of single linkage for high-density clusters (JASA, 1981).
Given some estimator $\hat{C}_n$ of the cluster tree of $f$ (i.e. a collection of hierarchically nested sets), we say it is consistent if:

For any sets $A, A' \subset \mathcal{X}$, let $A_n$ (respectively $A'_n$) denote the smallest cluster of $\hat{C}_n$ containing the samples in $A$ (respectively $A'$). $\hat{C}_n$ is consistent if, whenever $A$ and $A'$ are different connected components of $\{x : f(x) \geq \lambda\}$ (for some $\lambda > 0$), $\mathbb{P}(A_n \text{ is disjoint from } A'_n) \to 1$ as $n \to \infty$.
Essentially, we want that if there are two separated clusters at some level $\lambda$, then the cluster tree must reflect this, i.e. the smallest clusters containing the samples from each of these clusters must (eventually) be disjoint from each other.
To give finite-sample bounds, CD introduced a notion of saliently separated clusters and showed that these clusters can be identified using a small number of samples (as a by-product, their results also imply Hartigan consistency for their estimator). Informally, clusters are saliently separated if they satisfy two conditions.
- Separation in the space $\mathcal{X}$ (i.e. in distance): we would expect that clusters that are too close together cannot be identified from a finite sample.
- Separation in the density $f$: there should be a sufficiently big region of low density separating the clusters. Again, we would expect that if the "bridge" between the clusters does not dip low enough, then from a finite sample we might (incorrectly) conclude that they are the same cluster. (A paraphrase of CD's formal definition is sketched just below.)
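To make this slightly more precise, CD formalize salient separation via a notion of $(\sigma, \epsilon)$-separation; what follows is a paraphrase of their definition (see the paper for the exact conditions). Sets $A, A' \subset \mathcal{X}$ are $(\sigma, \epsilon)$-separated if there is a separator set $S$ such that any path from $A$ to $A'$ passes through $S$, and
$$\sup_{x \in S_\sigma} f(x) \;<\; (1 - \epsilon)\, \inf_{x \in A_\sigma \cup A'_\sigma} f(x),$$
where $Z_\sigma$ denotes the set of all points within distance $\sigma$ of $Z$. The parameter $\sigma$ captures the separation in space and $\epsilon$ the dip in density.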
3. An algorithm
The CD estimator is based on the following algorithm:
- INPUT: samples $X_1, \ldots, X_n$, a parameter $k$, and a connection parameter $\alpha$.
- For each $r > 0$, discard all points with $r_k(X_i) > r$, where $r_k(x)$ is the distance to the $k$-th nearest neighbor of $x$. Connect the remaining points $X_i, X_j$ if $\|X_i - X_j\| \leq \alpha r$, to form a graph $G_r$.
- OUTPUT: Return the connected components of $G_r$.
CD show that their estimator is consistent (and give finite sample rates for saliently separated clusters) if we select $\alpha = \sqrt{2}$ and $k \sim d \log n$.
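For concreteness, here is a minimal Python sketch of a single level of the estimator (a fixed $r$), assuming Euclidean data and the convention that a point is not its own nearest neighbor; it is only meant to make the two steps explicit, not to be a reference implementation. Sweeping $r$ over the observed $k$-NN radii and pairwise distances produces every distinct graph $G_r$, and hence the whole tree.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def cd_clusters_at_radius(X, r, k, alpha=np.sqrt(2)):
    """One level (a fixed r) of the Chaudhuri-Dasgupta estimator (a sketch).

    Step 1: discard points whose k-NN radius r_k exceeds r.
    Step 2: connect the surviving points that are within distance alpha * r.
    Returns a label for every point; discarded points get the label -1.
    """
    n = len(X)
    tree = cKDTree(X)
    # Distance to the k-th nearest neighbor; we ask for k + 1 neighbors
    # because the query returns the point itself at distance zero.
    knn_dist, _ = tree.query(X, k=k + 1)
    r_k = knn_dist[:, -1]

    keep = np.where(r_k <= r)[0]          # step 1: the cleaning step
    labels = -np.ones(n, dtype=int)
    if len(keep) == 0:
        return labels

    # Step 2: build the graph G_r on the kept points and take its components.
    sub_tree = cKDTree(X[keep])
    pairs = sub_tree.query_pairs(alpha * r, output_type="ndarray")
    m = len(keep)
    adj = csr_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])),
                     shape=(m, m))
    _, comp = connected_components(adj, directed=False)
    labels[keep] = comp
    return labels
```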
It is actually true that any density estimate that is uniformly close to the true density can be used to construct a Hartigan consistent estimator. However, this involves finding the connected components of the level sets of the estimator, which can be hard. The nice thing about the CD estimator is that it is completely algorithmic.
3.1. Detour 1: Single linkage
Single linkage is a popular linkage clustering algorithm, and essentially corresponds to running the algorithm above with the smallest possible $k$ (so that no points are discarded in the cleaning step) and $\alpha = 1$. Given its popularity, an important question to ask is whether the single linkage tree is Hartigan consistent. This was answered by Hartigan in his original paper: affirmatively for $d = 1$ but negatively for $d \geq 2$.
The main issue is an effect called "chaining", which causes single linkage to merge two clusters before either cluster is fully connected internally. The reason is that single linkage is not sufficiently sensitive to the density separation: even if there is a region of low density between two clusters, single linkage may form a "chain" of sample points across it, because it is mostly oblivious to the density of the sample.
Returning to the CD estimator: one intuitive way to understand the estimator is to observe that for a fixed $r$, the first step discards points on the basis of their distance to their $k$-th NN. This is essentially cleaning the sample to remove points in regions of low density (as measured by a $k$-NN density estimate). This step makes the algorithm density sensitive and prevents chaining.
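One way to see the connection to density estimation: the standard $k$-NN density estimate (a textbook formula, not specific to CD) is
$$\hat{f}_k(x) \;=\; \frac{k}{n \, v_d \, r_k(x)^d},$$
where $v_d$ is the volume of the unit ball in $\mathbb{R}^d$. A large $k$-NN radius $r_k(x)$ therefore corresponds to a small estimated density at $x$, so discarding points with $r_k(x) > r$ amounts to thresholding a density estimate.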
4. Analysis
So how does one analyze the CD estimator? We essentially need to show that for any two saliently separated clusters at a level $\lambda$, there is some radius $r$ at which:
- We do not clean out any points in the clusters.
- The clusters are internally connected at the radius $r$ (i.e. within the graph $G_r$).
- The clusters are mutually separated at this radius.
To establish each of these, we will first need to understand how the $k$-NN distances of the sample points behave given a finite sample.
4.1. Detour 2: Uniform large deviation inequalities
As before, we are given random samples $X_1, \ldots, X_n$ from a density $f$ on $\mathcal{X} \subseteq \mathbb{R}^d$. Let's say we are interested in a measurable subset $A$ of the space $\mathcal{X}$. A fundamental question is: how close is the empirical mass of $A$ to the true mass of $A$? That is, we would like to relate the quantities $P_n(A) = \frac{1}{n}\sum_{i=1}^n \mathbf{1}(X_i \in A)$ and $P(A) = \int_A f(x)\,dx$. Notice that this is essentially the same as the question: if I toss a coin with bias $P(A)$, $n$ times, on average how many heads will I see?
A standard way to answer these questions quantitatively is using a large deviation inequality like Hoeffding's inequality. Often we have multiple sets and we'd like to relate $P_n(A)$ to $P(A)$ for each of these sets. One quantity we might be interested in is $\sup_{A \in \mathcal{A}} |P_n(A) - P(A)|$, where $\mathcal{A}$ is the collection of sets of interest, and quantitative estimates of this quantity are called uniform convergence results.
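To be concrete about the single-set case: the indicators $\mathbf{1}(X_i \in A)$ are i.i.d. Bernoulli random variables with mean $P(A)$, so Hoeffding's inequality gives, for any fixed set $A$ and any $\epsilon > 0$,
$$\mathbb{P}\big(|P_n(A) - P(A)| > \epsilon\big) \;\leq\; 2 e^{-2 n \epsilon^2}.$$
Uniform convergence results replace the fixed $A$ with a supremum over a whole collection of sets.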
One surprising fact is that even for infinite collections of sets we can sometimes still get good bounds on $\sup_{A \in \mathcal{A}} |P_n(A) - P(A)|$ if we can control the complexity of $\mathcal{A}$. Typical ways to quantify this complexity are things like covering numbers, VC dimension, etc.
To conclude this detour, here is an example of this in action. Let $\mathcal{B}$ be the collection of all balls (with any center and radius) in $\mathbb{R}^d$. If $k \geq d \log n$, then with probability at least $1 - \delta$ (for any fixed $\delta > 0$), for any ball $B \in \mathcal{B}$ (this is essentially a lemma from the CD paper):
$$P(B) \geq \frac{C_\delta \, d \log n}{n} \;\Rightarrow\; P_n(B) > 0,$$
$$P(B) \geq \frac{k}{n} + \frac{C_\delta}{n}\sqrt{k\, d \log n} \;\Rightarrow\; P_n(B) \geq \frac{k}{n},$$
$$P(B) \leq \frac{k}{n} - \frac{C_\delta}{n}\sqrt{k\, d \log n} \;\Rightarrow\; P_n(B) < \frac{k}{n},$$
where $C_\delta$ is a constant depending only on $\delta$. The inequalities above are uniform convergence inequalities over the set of all balls in $\mathbb{R}^d$. Although this is an infinite collection of sets, it has a small VC dimension of $d + 1$, and this lets us uniformly relate the true and empirical mass of each of these sets.
In the context of the CD estimator, what this detour assures us is that the $k$-NN distance of every sample point is close to the "true" $k$-NN distance (roughly, the radius at which the ball around the point has probability mass $k/n$). This lets us show that for an appropriate radius we will not remove any point from a high-density cluster (because such points will have small $k$-NN distances) and that we will remove all points in the low-density region that separates salient clusters (because they will have large $k$-NN distances). Results like this let us establish consistency of the CD estimator.
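As a small illustration (my own simulation, not part of the CD analysis): for points drawn uniformly on $[0,1]^2$, the ball of probability mass $k/n$ around an interior point has radius $\sqrt{k/(\pi n)}$, and the observed $k$-NN radii of the sample points concentrate around this value.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
n, k, d = 20000, 100, 2

# Samples from the uniform density on [0, 1]^2.
X = rng.random((n, d))

# Observed k-NN radius of every sample point (k + 1 accounts for the point itself).
tree = cKDTree(X)
r_k = tree.query(X, k=k + 1)[0][:, -1]

# "True" k-NN radius for an interior point of the unit square: the radius r
# with P(B(x, r)) = pi * r^2 = k / n (boundary effects are ignored here).
r_true = np.sqrt(k / (np.pi * n))

# Restrict to points away from the boundary so the ball stays inside the square.
interior = np.all((X > 0.2) & (X < 0.8), axis=1)
ratio = r_k[interior] / r_true
print(f"true radius: {r_true:.4f}")
print(f"observed/true ratio: median {np.median(ratio):.3f}, "
      f"5th-95th pct [{np.percentile(ratio, 5):.3f}, {np.percentile(ratio, 95):.3f}]")
```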
References.
Chaudhuri, K. and Dasgupta, S. (2010). Rates of convergence for the cluster tree. Advances in Neural Information Processing Systems, 23, 343-351.
Hartigan, J. A. (1981). Consistency of single linkage for high-density clusters. Journal of the American Statistical Association, 76, 388-394.