Let’s investigate the properties of histograms and kernel density estimates with simulated data. When simulating data, we know the true distribution that generated the data and can compare the true distribution with our histogram and KDE estimates.

We’ll begin by selecting an interesting distribution to generate our data. To highlight the properties of our density estimates, we’d like something that is multimodal. To achieve this, we will simulate from a mixture of Normal densities. Specifically, suppose our random variable of interest has probability \(p = 0.7\) of being drawn from a \(N(5,7)\) distribution (where 7 is the standard deviation, not the variance) and probability \(1-p = 0.3\) of being drawn from a \(N(32,5)\) distribution. Mathematically, we can express this as \[\begin{align} Y & = \mathbb{1}_{\{U < 0.7\}} \, X_1 + (1-\mathbb{1}_{\{U < 0.7\}}) \, X_2 \\ X_1 & \sim N(5,7) \\ X_2 & \sim N(32,5) \\ U & \sim Uniform(0,1). \end{align}\]

We can plot the true density of \(Y\) in the following manner.

p = 0.7      # mixture weight on the first Normal component
mu1 = 5
sig1 = 7
mu2 = 32
sig2 = 5
# evaluate the mixture density on a grid extending 3 SDs beyond each component mean
xLower = min(mu1-3*sig1,mu2-3*sig2)
xUpper = max(mu1+3*sig1,mu2+3*sig2)
xGrid = seq(xLower,xUpper,length=10000)
yDens = p*dnorm(xGrid,mu1,sig1)+(1-p)*dnorm(xGrid,mu2,sig2)
plot(xGrid,yDens,type='l')
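As a quick sanity check (a sketch using the grid above, not part of the original exercise), we can verify that the mixture density integrates to approximately one over the plotting range, since the grid extends three standard deviations beyond each component mean.

# Trapezoidal approximation of the integral of the mixture density over xGrid;
# the result should be very close to 1
dx = diff(xGrid)
sum(dx * (yDens[-1] + yDens[-length(yDens)]) / 2)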

We can simulate \(N = \{100,1000,10000,100000\}\) observations from this distribution in the following manner.

NVec = c(100,1000,10000,100000)
YSamp = list()
for(i in 1:length(NVec)){
  ySamp = rep(0,NVec[i])
  # draw candidate values from both components and a uniform for component selection
  x1Samp = rnorm(NVec[i],mu1,sig1)
  x2Samp = rnorm(NVec[i],mu2,sig2)
  uSamp = runif(NVec[i])
  # keep the component-1 draw when U < p, otherwise the component-2 draw
  ySamp[uSamp < p] = x1Samp[uSamp < p]
  ySamp[uSamp >= p] = x2Samp[uSamp >= p]
  YSamp[[i]] = ySamp
}

Notice that I have written a loop over the values of \(N\), but I avoided writing an inner loop for the simulation of each observation. The vectorized simulation code inside the loop is not necessarily the most efficient method of sampling (think about how you could improve it; one possibility is sketched below), but it is far better than writing an inner loop. Note also that YSamp is a list object that stores several vectors (one for each value of \(N\)) of simulations from the distribution above. Since the vectors have different lengths (length \(N\)), we cannot store them as columns of a matrix.
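One possible improvement, sketched here for a single sample size (the name ySampAlt and the binomial-count approach are my own illustration, not part of the original code): draw how many observations come from the first component, then sample exactly that many values from each Normal, so no draws are discarded.

# Sketch: sample the component count first, then draw exactly that many
# values from each Normal, avoiding the wasted draws in the loop above.
N = NVec[1]
n1 = rbinom(1, N, p)                      # number of component-1 observations
ySampAlt = sample(c(rnorm(n1, mu1, sig1), # shuffle so ordering carries no information
                    rnorm(N - n1, mu2, sig2)))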

With a sample of \(N\) observations from the distribution in hand, we can compute histograms and kernel density estimates.

nBreaks = 50
# histograms for each sample size, arranged in a 2x2 grid
par(mfrow=c(2,2))
hist(YSamp[[1]],breaks=nBreaks)
hist(YSamp[[2]],breaks=nBreaks)
hist(YSamp[[3]],breaks=nBreaks)
hist(YSamp[[4]],breaks=nBreaks)

adj = 1
# kernel density estimates for each sample size, arranged in a 2x2 grid
par(mfrow=c(2,2))
plot(density(YSamp[[1]],adjust=adj))
plot(density(YSamp[[2]],adjust=adj))
plot(density(YSamp[[3]],adjust=adj))
plot(density(YSamp[[4]],adjust=adj))
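To make the comparison with the true density concrete, we can overlay the mixture density on one of the KDE plots. The sketch below does this for the largest sample, reusing xGrid and yDens from above (the title, colors, and legend placement are arbitrary choices).

# Overlay the true mixture density (red, dashed) on the KDE of the largest sample
par(mfrow=c(1,1))
plot(density(YSamp[[4]],adjust=adj),main="KDE vs. true density, N = 100000")
lines(xGrid,yDens,col="red",lty=2)
legend("topright",legend=c("KDE","true density"),col=c("black","red"),lty=c(1,2))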

The histograms and KDEs demonstrate that the true density is better estimated when you have more data (higher \(N\)). This is almost always true in estimation: more data is better. If you tinker with the breaks and adjust arguments to the hist and density functions, you can further see how your choice of smoothing can affect the estimates. Evidence of oversmoothing and undersmoothing in both the histograms and KDEs will be especially apparent for the simulations with large \(N\); it will be less apparent for small \(N\) because there is less data to smooth.
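For example, here is a sketch of that experiment for the largest sample (the particular adjust values are arbitrary choices for illustration).

# Undersmoothed, default, and oversmoothed KDEs of the largest sample
par(mfrow=c(1,3))
plot(density(YSamp[[4]],adjust=0.1),main="adjust = 0.1")
plot(density(YSamp[[4]],adjust=1),main="adjust = 1")
plot(density(YSamp[[4]],adjust=10),main="adjust = 10")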
