
Bayesian nonparametric regression using complex wavelets

Norbert Reményi^a and Brani Vidakovic^b

^a Research Group, Sabre Holdings, Southlake, Texas
^b School of Industrial and Systems Engineering, Georgia Institute of Technology

Abstract. In this paper we propose a new adaptive wavelet denoising methodology using complex wavelets. The method is based on a fully Bayesian hierarchical model in the complex wavelet domain that uses a bivariate mixture prior on the wavelet coefficients. The heart of the procedure is computational: the posterior mean is computed through Markov chain Monte Carlo (MCMC) simulations. We show that the method has good performance, as demonstrated by simulations on well-known test functions and by comparison to a well-established complex wavelet-based denoising procedure. An application to a real-life data set is also considered.

1 Introduction

In the present paper we consider a novel Bayesian model as a solution to the classical nonparametric regression problem

    y_i = f(x_i) + ε_i,    i = 1, ..., n,    (1)

where x_i, i = 1, ..., n, are equispaced sampling points, and the errors ε_i are i.i.d. normal random variables with zero mean and variance σ². The interest is to estimate the function f using the observations y_i. After applying a linear and orthogonal wavelet transform, the equation in (1) becomes

    d_jk = θ_jk + ε_jk,    (2)

where d_jk, θ_jk and ε_jk are the wavelet coefficients (at resolution j and position k) corresponding to y, f and ε, respectively. Note that ε_i and ε_jk are equal in distribution due to the orthogonality of wavelet transforms. Due to the whitening property of wavelet transforms (Flandrin, 1992), many existing methods assume independence of the coefficients and omit the double indices jk to model a generic wavelet coefficient as

    d = θ + ε,    ε ~ N(0, σ²).    (3)

Keywords and phrases. Bayesian estimation, Hierarchical Bayes model, Markov Chain Monte Carlo, Kotz distribution, Wavelet shrinkage, Complex wavelets, Nonparametric regression


When indices are needed for clarity of exposition they will be used. To estimate θ in model (3), Bayesian shrinkage rules have been proposed in the literature by many authors. By a shrinkage rule the observed wavelet coefficients d are replaced with their shrunk version θ̂ = δ(d). Then f is estimated as the inverse wavelet transform of θ̂. Empirical distributions of detail wavelet coefficients for signals encountered in practical applications are (at each resolution level) centered around and peaked at zero (Mallat, 1989). A range of models for which the unconditional distribution of the wavelet coefficients mimics these properties has been considered in the literature. The traditional Bayesian models place a prior distribution on the wavelet coefficient θ of the form

    π(θ) = ǫδ_0 + (1 − ǫ)ξ(θ),    (4)

where δ_0 is a point mass at zero, ξ is a symmetric and unimodal distribution, and ǫ is a fixed parameter in [0, 1], usually level dependent, that controls the extent of shrinkage for values of d close to 0. This type of model was considered by Abramovich et al. (1998), Vidakovic (1998), Vidakovic and Ruggeri (2001) and Johnstone and Silverman (2005), among others. A recent overview of Bayesian strategies in wavelet shrinkage can be found in Reményi and Vidakovic (2012).

Wavelet shrinkage methods using complex-valued wavelets provide additional insight into the shrinkage process due to the information contained in the phase. After taking a complex wavelet transform of a real-valued signal, the model remains as in (2); however, the observed wavelet coefficients d_jk become complex numbers at resolution j and location k. Several papers considering Bayesian wavelet shrinkage with complex wavelets are available. Lina and Mayrand (1995) describe the complex-valued Daubechies wavelets in detail; Lina and Macgibbon (1997), Lina (1997), and Lina et al. (1999) focus on image denoising, in which the phase of the observed wavelet coefficients is preserved, but the modulus of the coefficients is shrunk by the Bayes rule. Barber and Nason (2004) develop the complex empirical Bayes (CEB) procedure, which modifies both the phase and the modulus of the wavelet coefficients by a bivariate shrinkage rule. The wavelet coefficients are represented as bivariate real-valued random variables and the authors formulate a bivariate model in the complex wavelet domain. The resulting estimators are in closed form and the hyperparameters of the model are estimated by the empirical Bayes procedure.

We build on the results above and formulate a fully Bayesian hierarchical model which accounts for the uncertainty of the prior parameters by placing hyperpriors on them. Since a closed-form solution for the Bayes estimator


does not exist, Markov chain Monte Carlo (MCMC) methodology is applied and an approximate estimator (the posterior mean) is computed from the output of the simulation runs. Although the simplicity of a closed-form solution is lost, the procedure is fully Bayesian, adaptive to the underlying signal, and the estimation of the hyperparameters is automatic via the MCMC sampling algorithm. The estimation is governed by the data and the hyperprior distributions on the parameters, and this adaptivity ensures good denoising performance.

The paper is organized as follows. Section 2 formalizes the model and presents some results related to it. Section 3 details the MCMC sampling scheme. Section 4 discusses the selection of hyperparameters and presents simulation results and comparisons to existing methods. In Section 5 we apply the proposed shrinkage to a real data set, and in Section 6 conclusions are provided.
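As a univariate illustration of the mixture priors reviewed above, the sketch below computes the posterior-mean shrinkage rule when the spread distribution ξ in (4) is taken to be N(0, τ²). This is a hypothetical toy case with a closed form, not the bivariate heavy-tailed rule developed in this paper; the function name and parameter values are illustrative.

```python
import numpy as np
from scipy.stats import norm

def shrink(d, eps=0.9, sigma=1.0, tau=3.0):
    """Posterior mean of theta under pi(theta) = eps*delta_0 + (1-eps)*N(0, tau^2)
    and d | theta ~ N(theta, sigma^2): a univariate toy version of prior (4)."""
    m0 = norm.pdf(d, 0.0, sigma)                          # marginal of d if theta = 0
    m1 = norm.pdf(d, 0.0, np.sqrt(sigma**2 + tau**2))     # marginal if theta ~ N(0, tau^2)
    p1 = (1 - eps) * m1 / (eps * m0 + (1 - eps) * m1)     # P(theta != 0 | d)
    return p1 * tau**2 / (sigma**2 + tau**2) * d          # mixing weight times linear shrinkage
```

Small coefficients are pulled strongly toward zero (the point mass dominates), while large coefficients are only mildly shrunk, which is the qualitative behavior all the Bayesian rules cited above share.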

2 Fully Bayesian Model

In this section we describe a novel, fully Bayesian hierarchical model in the complex wavelet domain. After applying the complex wavelet transform to a real-valued signal, the observed wavelet coefficients d_jk at resolution j and location k become complex numbers. Building on the approach taken by Barber and Nason (2004), we represent the complex-valued wavelet coefficients as bivariate real-valued random variables. We consider the following Bayesian model

    d_jk | θ_jk, σ² ~ N_2(θ_jk, σ²Σ_j)
    θ_jk | ǫ_j, C_j ~ (1 − ǫ_j)δ_0 + ǫ_j EP_2(µ, C_j, β),    (5)

where EP_2 stands for the bivariate exponential power distribution. The multivariate exponential power distribution is an extension of the class of normal distributions in which the heaviness of the tails can be controlled. Its definition and properties can be found in Gomez et al. (1998). The prior on the location θ_jk is a bivariate extension of the standard mixture prior in the Bayesian wavelet shrinkage literature, consisting of a point mass at zero and a heavy-tailed distribution. As a prior, Barber and Nason (2004) considered a mixture of a point mass and a bivariate normal distribution. The heavy-tailed mixture prior in our proposal better captures the sparsity of wavelet coefficients common in most applications; however, a closed-form solution is lost, and we need to rely on MCMC simulations to compute the estimators.

To calibrate the exponential power prior in (5) for wavelet shrinkage, we use µ = 0, because the detail wavelet coefficients are centered around zero


by their definition. We also fix β = 1/2, which gives the prior on θ_jk the following form:

    π(θ_jk | C_j) = 1/(8π|C_j|^{1/2}) exp{ −(1/2) (θ_jk′ C_j^{−1} θ_jk)^{1/2} }.    (6)

The prior specified above is equivalent to the bivariate double exponential distribution. The univariate double exponential prior was found to have desirable properties in the real-valued wavelet denoising context (Vidakovic and Ruggeri, 2001; Johnstone and Silverman, 2005); hence it is natural to extend it to the bivariate case.

From model (5) it is apparent that the mixture prior on θ_jk is set depending on the dyadic level j, which ensures scale-adaptivity of the method. The quantity σ²Σ_j represents the scaled covariance matrix of the noise at each decomposition level, and C_j represents the levelwise scale matrix in the exponential power prior. An explicit expression for the covariance Σ_j induced by white noise in complex wavelet shrinkage was derived in Barber and Nason (2004). We adopt the approach described in their paper to model the covariance structure of the noise and compute Σ_j for each dyadic level j from the expressions

    Cov{Re(ε), Re(ε)} = σ²{I_n + Re(WW′)}/2,
    Cov{Im(ε), Im(ε)} = σ²{I_n − Re(WW′)}/2,    (7)
    Cov{Re(ε), Im(ε)} = −σ² Im(WW′)/2.

We assume a common i.i.d. normal noise model e ~ N_n(0, σ²I_n), as in (1). After taking the complex wavelet transform, the real and imaginary parts of the transformed noise ε = We become correlated, which is expressed by (7) above. Note that W is a complex-valued unitary matrix representing the wavelet transform, therefore W̄′W = WW̄′ = I, where W̄ denotes the complex conjugate of W.

Instead of estimating the hyperparameters σ², ǫ_j, and C_j in (5) in an empirical Bayes fashion, we elicit hyperprior distributions on them in a fully Bayesian manner. We specify a conjugate inverse gamma prior on the noise variance σ², and an inverse Wishart prior on the matrix C_j describing the covariance structure of the spread prior of θ_jk. The mixing weight ǫ_j regulates the strength of shrinkage of a wavelet coefficient to zero. We specify a "noninformative" uniform prior on this parameter, allowing the estimation to be governed mostly by the data.

For computational purposes, we represent the exponential power prior in (6) as a scale mixture of multivariate normal distributions, which is an


essential step for efficient Monte Carlo simulation. From Gomez et al. (2008), the bivariate exponential power distribution with µ = 0 and β = 1/2 can be represented as

    EP_2(µ = 0, C_j, β = 1/2) = ∫_0^∞ N_2(0, vC_j) · 1/(Γ(3/2)8^{3/2}) v^{1/2} e^{−v/8} dv,    (8)

which is a scale mixture of bivariate normal distributions with a gamma mixing distribution. Using the specified hyperpriors and the mixture representation, the basic model in (5) extends to

    d_jk | θ_jk, σ² ~ N_2(θ_jk, σ²Σ_j)
    σ² ~ IG(a, b)
    θ_jk | z_jk, v_jk, C_j ~ (1 − z_jk)δ_0 + z_jk N_2(0, v_jk C_j)
    z_jk | ǫ_j ~ Ber(ǫ_j)    (9)
    ǫ_j ~ U(0, 1)
    v_jk ~ Ga(3/2, 8)
    C_j ~ IW(A_j, w).

For computational purposes, we introduce a latent variable z_jk. Variable z_jk is a Bernoulli variable indicating whether the parameter θ_jk comes from a point mass at zero (z_jk = 0) or from a bivariate normal distribution (z_jk = 1), with prior probability 1 − ǫ_j or ǫ_j, respectively. The uniform U(0, 1) prior on ǫ_j is equivalent to the beta Be(1, 1) distribution, which is a conjugate prior for the Bernoulli distribution. Integrating out z_jk from model (9) gives back the original mixture expression in model (5). By representing the exponential power prior as a scale mixture of normals, the hierarchical model in (9) becomes tractable, because the full conditional distributions of all the parameters become explicit. Therefore, we can develop a fast Gibbs sampling algorithm to update all the necessary parameters σ², z_jk, ǫ_j, θ_jk, v_jk and C_j. Note that in order to run the Gibbs sampling algorithm we only have to specify the hyperparameters a, b, A_j and w. The details of the Gibbs sampling scheme will be discussed in the next section.
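The scale-mixture representation used above can be confirmed numerically: integrating the bivariate normal density N_2(0, vC) against the Ga(3/2, scale 8) mixing density reproduces the double exponential density (6). The sketch below does this for arbitrary test values of C and θ (both chosen here purely for illustration).

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import gamma, multivariate_normal

# Illustrative scale matrix and evaluation point (not from the paper)
C = np.array([[2.0, 0.5], [0.5, 1.0]])
theta = np.array([0.7, -0.3])

# Mix N2(0, v*C) over v with gamma(shape 3/2, scale 8) weights
mixture = quad(lambda v: multivariate_normal(cov=v * C).pdf(theta)
               * gamma.pdf(v, 1.5, scale=8.0), 0.0, np.inf)[0]

# Closed-form bivariate double exponential density, as in (6)
q = theta @ np.linalg.inv(C) @ theta
ep2 = np.exp(-0.5 * np.sqrt(q)) / (8.0 * np.pi * np.sqrt(np.linalg.det(C)))

print(mixture, ep2)  # the two values agree
```

The agreement of the two numbers is exactly what makes the data augmentation with v_jk in model (9) valid.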

3 Gibbs sampling scheme

To conduct posterior inference on the wavelet coefficients θ_jk, a standard Gibbs sampling procedure is adopted. In this section we provide details of how to develop a Gibbs sampler for the model in (9). Gibbs sampling is an iterative algorithm that simulates from a joint posterior distribution through


iterative simulation over the list of full conditional distributions. For more details on Gibbs sampling, see Casella and George (1992) or Robert and Casella (1999). For the model in (9), the full conditionals for the parameters σ², z_jk, ǫ_j, θ_jk, v_jk and C_j can be specified exactly. Specification of the hyperparameters a, b, A_j and w will be discussed in Section 4. Technical details of this section are deferred to the Appendix.

3.1 Updating σ²

Using a conjugate IG(a, b) prior on σ² results in a full conditional which is again inverse gamma. Therefore, update σ² as

    σ^{2(i)} ~ IG( a + n, [ 1/b + (1/2) Σ_{jk} (d_jk − θ_jk^{(i−1)})′ Σ_j^{−1} (d_jk − θ_jk^{(i−1)}) ]^{−1} ),    (10)

where n = 2^J − 2^{J_0} denotes the sample size, and i denotes the i-th simulation run.

3.2 Updating z_jk and ǫ_j

In model (9) the latent variable z_jk has a Bernoulli prior with parameter ǫ_j. Its full conditional remains Bernoulli and is updated as follows:

    z_jk^{(i)} = 0  wp.  (1 − ǫ_j^{(i−1)}) f(d_jk | 0, σ^{2(i)}) / D_jk^{(i)},
    z_jk^{(i)} = 1  wp.  ǫ_j^{(i−1)} m(d_jk | σ^{2(i)}, v_jk^{(i−1)}, C_j^{(i−1)}) / D_jk^{(i)},    (11)

where D_jk^{(i)} = (1 − ǫ_j^{(i−1)}) f(d_jk | 0, σ^{2(i)}) + ǫ_j^{(i−1)} m(d_jk | σ^{2(i)}, v_jk^{(i−1)}, C_j^{(i−1)}) and

    f(d_jk | 0, σ²) = 1/(2π|σ²Σ_j|^{1/2}) exp{ −(1/(2σ²)) d_jk′ Σ_j^{−1} d_jk },
    m(d_jk | σ², v_jk, C_j) = 1/(2π|σ²Σ_j + v_jk C_j|^{1/2}) exp{ −(1/2) d_jk′ (σ²Σ_j + v_jk C_j)^{−1} d_jk }.
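The Bernoulli update for z_jk above amounts to evaluating the two bivariate normal marginals f and m at the observed coefficient. A minimal sketch (the function and argument names are illustrative, not from the authors' code):

```python
import numpy as np
from scipy.stats import multivariate_normal

def z_posterior_prob(d, sigma2, Sigma, v, C, eps):
    """P(z_jk = 1 | d_jk, ...): posterior probability of the 'signal' component
    against the point mass at zero, from the Bernoulli full conditional."""
    f = multivariate_normal(cov=sigma2 * Sigma).pdf(d)          # d_jk given theta_jk = 0
    m = multivariate_normal(cov=sigma2 * Sigma + v * C).pdf(d)  # theta_jk integrated out
    return eps * m / ((1.0 - eps) * f + eps * m)
```

Coefficients with small modulus are classified as noise with high probability, while large coefficients are almost surely attributed to the signal component.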

The parameter ǫ_j is given a conjugate Be(1, 1) prior. This results in a full conditional distributed as beta. Therefore we update ǫ_j as

    ǫ_j^{(i)} ~ Be( 1 + Σ_k z_jk^{(i)}, 1 + Σ_k (1 − z_jk^{(i)}) ).    (12)

Note that other choices from the Be(α, β) family are possible as the prior for ǫ_j. However, we used the noninformative choice of α = 1 and β = 1 to facilitate data-driven estimation of ǫ_j.


3.3 Updating θ_jk

From the conjugate setup of model (9) and using the latent variable z_jk, it follows that the full conditional distribution of θ_jk is either a point mass at zero (z_jk = 0) or a bivariate normal distribution (z_jk = 1). Therefore we update θ_jk as follows:

    θ_jk^{(i)} ~ δ_0(θ_jk)  if z_jk^{(i)} = 0,
    θ_jk^{(i)} ~ f(θ_jk | d_jk, σ^{2(i)}, v_jk^{(i−1)}, C_j^{(i−1)})  if z_jk^{(i)} = 1,    (13)

where

    f(θ_jk | d_jk, σ², v_jk, C_j) = 1/(2π|Σ̃_jk|^{1/2}) exp{ −(1/2) (θ_jk − μ̃_jk)′ Σ̃_jk^{−1} (θ_jk − μ̃_jk) },
    μ̃_jk = Σ̃_jk Σ_j^{−1} d_jk / σ²,
    Σ̃_jk = ( Σ_j^{−1}/σ² + C_j^{−1}/v_jk )^{−1},

that is, a bivariate normal distribution with mean μ̃_jk and covariance matrix Σ̃_jk.

3.4 Updating v_jk

In model (9), for the scale mixture of normals representation, we placed a gamma prior on v_jk. The full conditional distribution of v_jk depends on the value of z_jk, and the updating scheme is:

    v_jk^{(i)} ~ Ga(3/2, 8)  if z_jk^{(i)} = 0,
    v_jk^{(i)} ~ GIG( 1/4, θ_jk^{(i)′} (C_j^{(i−1)})^{−1} θ_jk^{(i)}, 1/2 )  if z_jk^{(i)} = 1.    (14)

Here GIG(a, b, p) denotes the generalized inverse Gaussian distribution (Johnson et al., 1994, p. 284) with probability density function

    f(x | a, b, p) = (a/b)^{p/2} / (2K_p(√(ab))) · x^{p−1} e^{−(ax + b/x)/2},    x > 0; a, b > 0,

where K_p denotes the modified Bessel function of the third kind. Simulation of GIG random variates is available through a MATLAB implementation based on Dagpunar (1989).

3.5 Updating C_j

Placing a conjugate inverse Wishart prior on the covariance matrix C_j results in a full conditional distribution which remains inverse Wishart. Therefore, C_j is updated as:

    C_j^{(i)} ~ IW( A_j + Σ_k z_jk^{(i)} θ_jk^{(i)} θ_jk^{(i)′} / v_jk^{(i)},  w + Σ_k z_jk^{(i)} ).    (15)
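The authors rely on a MATLAB routine based on Dagpunar (1989) for the GIG draws. For the special case p = 1/2 needed here, a draw can also be obtained from the reciprocal of an inverse Gaussian variate, since X ~ GIG(a, b, 1/2) implies 1/X ~ GIG(b, a, −1/2), which is an inverse Gaussian law. A sketch of this stand-in (an assumption for illustration, not the authors' implementation):

```python
import numpy as np
from scipy.stats import invgauss

def rgig_half(a, b, size=1, rng=None):
    """Sample GIG(a, b, p = 1/2), density proportional to x^{-1/2} exp(-(a x + b/x)/2),
    as the reciprocal of an inverse Gaussian with mean sqrt(a/b) and shape a."""
    rng = np.random.default_rng() if rng is None else rng
    lam, mu = a, np.sqrt(a / b)
    # scipy's invgauss(mu=m, scale=s) has mean m*s and shape parameter s
    y = invgauss.rvs(mu / lam, scale=lam, size=size, random_state=rng)
    return 1.0 / y
```

A useful sanity check is the closed-form mean E[X] = sqrt(b/a) K_{3/2}(sqrt(ab)) / K_{1/2}(sqrt(ab)) = sqrt(b/a) + 1/a.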


The implementation of the described Gibbs sampler requires simulation routines for standard distributions such as the inverse gamma, Bernoulli, beta and normal, and also a specialized routine to simulate from the generalized inverse Gaussian. The procedure was implemented in MATLAB and is available from the authors. The Gibbs sampling procedure can be summarized as follows:

(i) Choose initial values for the parameters.
(ii) Repeat steps (iii)–(viii) for l = 1, ..., M.
(iii) Update σ².
(iv) Update z_jk for j = J_0, ..., log2(n) − 1, k = 0, ..., 2^j − 1.
(v) Update ǫ_j for j = J_0, ..., log2(n) − 1.
(vi) Update θ_jk for j = J_0, ..., log2(n) − 1, k = 0, ..., 2^j − 1.
(vii) Update v_jk for j = J_0, ..., log2(n) − 1, k = 0, ..., 2^j − 1.
(viii) Update C_j for j = J_0, ..., log2(n) − 1.

The denoising method based on the Gibbs sampling algorithm above will be called the Complex Gibbs Sampling Wavelet Smoother (CGSWS). In the following section we explain the specification of the hyperparameters a, b, A_j and w, and apply the CGSWS algorithm to denoise simulated test functions.
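Steps (iii)–(viii) can be sketched for a single resolution level as follows. This is a toy Python transcription under assumed data layouts; the function name, argument names, and the fixed defaults a = 2, b = 1, w = 10 are illustrative, and it is not the authors' MATLAB implementation.

```python
import numpy as np
from scipy.stats import invgamma, invgauss, invwishart, multivariate_normal

rng = np.random.default_rng(0)

def gibbs_sweep(d, state, Sigma, A, a=2.0, b=1.0, w=10):
    """One sweep of steps (iii)-(viii) for a single level.
    d: (K, 2) array of bivariate coefficients; state holds the current draws."""
    K = d.shape[0]
    th, z, v, C, eps = state["theta"], state["z"], state["v"], state["C"], state["eps"]
    Sinv = np.linalg.inv(Sigma)

    # (iii) sigma^2 | rest: inverse gamma with rate 1/b + half the sum of quadratic forms
    resid = d - th
    quad = np.einsum("ki,ij,kj->", resid, Sinv, resid)
    sigma2 = invgamma.rvs(a + K, scale=1.0 / b + 0.5 * quad, random_state=rng)

    # (iv) z_k | rest: Bernoulli, comparing the two marginal densities f and m
    f = multivariate_normal(cov=sigma2 * Sigma).pdf(d)
    for k in range(K):
        m = multivariate_normal(cov=sigma2 * Sigma + v[k] * C).pdf(d[k])
        z[k] = rng.random() < eps * m / ((1.0 - eps) * f[k] + eps * m)

    # (v) eps | z: conjugate beta update
    eps = rng.beta(1 + z.sum(), 1 + (1 - z).sum())

    # (vi) theta_k | rest: point mass at zero, or bivariate normal
    Cinv = np.linalg.inv(C)
    for k in range(K):
        if z[k]:
            St = np.linalg.inv(Sinv / sigma2 + Cinv / v[k])
            th[k] = rng.multivariate_normal(St @ Sinv @ d[k] / sigma2, St)
        else:
            th[k] = 0.0

    # (vii) v_k | rest: Ga(3/2, 8), or GIG(1/4, theta' C^{-1} theta, 1/2)
    for k in range(K):
        if z[k]:
            bq = th[k] @ Cinv @ th[k]
            lam, mu = 0.25, np.sqrt(0.25 / bq)  # GIG(1/2) via reciprocal inverse Gaussian
            v[k] = 1.0 / invgauss.rvs(mu / lam, scale=lam, random_state=rng)
        else:
            v[k] = rng.gamma(1.5, 8.0)

    # (viii) C | rest: conjugate inverse Wishart update
    S = A + sum(z[k] * np.outer(th[k], th[k]) / v[k] for k in range(K))
    C = invwishart.rvs(df=w + z.sum(), scale=S, random_state=rng)
    return {"theta": th, "z": z, "v": v, "C": C, "eps": eps, "sigma2": sigma2}
```

The full sampler simply runs this kind of sweep over all levels j = J_0, ..., log2(n) − 1 for M iterations and averages the post-burn-in draws of θ_jk.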

4 Simulations and Comparisons

In this section we discuss the performance of the proposed CGSWS estimator and compare it to established methods from the literature on complex Bayesian wavelet denoising. Within each replication of the simulations we performed 10,000 Gibbs sampling iterations, of which the first 5,000 were burn-in. We used the sample average θ̂_jk = Σ_i θ_jk^{(i)}/N as the usual estimator for the posterior mean. In our setup N = 5,000. First we discuss the selection of hyperparameters, then explain the simulation setup and results.

4.1 Selection of Hyperparameters

In any Bayesian modeling task the selection of hyperparameters is critical for good performance of the model. It is also desirable to have a default way of selecting the hyperparameters, which makes the shrinkage procedure automatic. In order to apply the CGSWS method we need to specify the hyperparameters a, b, A_j and w in the hyperprior distributions. This selection is governed by the data and the hyperprior distributions. An advantage of the fully Bayesian approach is that once the hyperpriors are set, the estimation of the parameters σ², ǫ_j, θ_jk, v_jk and C_j is automatic via the Gibbs sampling scheme. Another


advantage is that the method is relatively robust to the choice of hyperparameters, since they enter the model at a higher level of the hierarchy.

Parameters a and b. For simplicity we set a = 2 and b = 1/σ̂², where

    σ̂² = MAD(d^re_jk/0.6745)² + MAD(d^im_jk/0.6745)²,    j = log2(n) − 1.
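The robust noise estimate above can be sketched as follows, assuming the finest-level coefficients are available as a complex vector (the helper name is illustrative):

```python
import numpy as np

def sigma2_hat(d_finest):
    """Robust noise variance from the finest level of detail coefficients:
    squared MAD/0.6745 of the real part plus that of the imaginary part."""
    def mad(x):
        return np.median(np.abs(x - np.median(x)))
    return (mad(d_finest.real) / 0.6745) ** 2 + (mad(d_finest.imag) / 0.6745) ** 2
```

Because the median absolute deviation ignores the sparse large coefficients, the estimate reflects the noise level rather than the signal.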

This ensures that the mean of the inverse gamma prior on σ² is the standard robust estimator of the noise variance (Donoho and Johnstone, 1994). Here MAD stands for the median absolute deviation of the wavelet coefficients, which we calculate at the finest level of detail from both the real and imaginary parts of the wavelet coefficients (Barber and Nason, 2004).

Parameters A_j and w. Hyperparameters A_j and w play an important role in the prior on the covariance matrix C_j. Since in the Gibbs sampler updates Σ_k z_jk^{(i)}, and therefore Σ_k z_jk^{(i)} θ_jk^{(i)} θ_jk^{(i)′}/v_jk^{(i)}, can possibly be zero, a noninformative Jeffreys prior on C_j is not computationally feasible. Also note that the mean of the inverse Wishart prior is A_j/(w − p − 1), where p is the dimension of A_j, which is equal to 2 in our case. Therefore we set A_j = (w − 2 − 1)Ĉ_j, which forces the mean of the prior to be a pre-specified estimate of C_j.

In the case of the mixture bivariate double exponential prior, the covariance of the signal part is Cov(θ_jk) = ǫ_j² 12C_j, where 12C_j is the covariance of a bivariate double exponential random variable (Gomez et al., 1998). Since the model assumes independence of the signal and error parts, we have that Cov(d_jk) = ǫ_j² 12C_j + σ²Σ_j, where Cov(d_jk) is the covariance of the observations d_jk at the j-th dyadic level. We choose ǫ_j = 1/√12 as a reasonable estimate, which additionally simplifies the equation at hand. Therefore a reasonable estimator for C_j is

    Ĉ_j = Cov(d_j) − σ̂²Σ_j,    J_0 ≤ j ≤ log2(n) − 1,    (16)

where Cov(d_j) is the sample covariance estimator using the observations d_jk at the j-th dyadic level. Note that Σ_j is known, and σ̂² is the usual robust estimator of the variance of the wavelet coefficients introduced before. Also note that when Ĉ_j is not positive definite, we regularize it by adding a multiple of the identity matrix. Finally, we set w = 10. Note that w = 4 is the least informative choice in our case; however, we found that slightly higher values of w worked better in practice.
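The levelwise estimate Ĉ_j with the positive-definiteness safeguard described above can be sketched as follows (the jitter constant is an assumption; the paper only states that a multiple of the identity is added):

```python
import numpy as np

def C_hat(d_level, sigma2, Sigma_j, jitter=1e-6):
    """C_j-hat = Cov(d_j) - sigma2 * Sigma_j, shifted by a multiple of the
    identity when the difference is not positive definite."""
    C = np.cov(d_level.T) - sigma2 * Sigma_j   # d_level: (K, 2) real coefficient pairs
    lam_min = np.linalg.eigvalsh(C).min()
    if lam_min <= 0:
        C += (jitter - lam_min) * np.eye(2)
    return C
```

The regularization branch is triggered mainly at levels that contain little or no signal, where the sample covariance is close to the noise covariance σ̂²Σ_j.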


4.2 Simulation Results

In the following section we present the results of a simulation study in which we compare the denoising performance of the CGSWS method to two complex wavelet-based denoising methods introduced by Barber and Nason (2004). The first one (CMWS-Hard) is a phase-preserving estimator based on hard thresholding of a "thresholding statistic" d_jk′ Σ_j^{−1} d_jk. The second one (CEB-Posterior mean) is a bivariate posterior mean estimator based on an empirical Bayes procedure.

Four standard test functions (Blocks, Bumps, Doppler, Heavisine) were considered in the simulations (Donoho and Johnstone, 1994). The functions were rescaled so that the added noise produced a preassigned signal-to-noise ratio (SNR), as is standard practice. The test functions were simulated at n = 256, 512, and 1024 equally spaced points in the interval [0, 1]. Four commonly considered SNRs were selected: SNR = 3, 5, 7 and 10. We used the symmetric complex-valued Daubechies wavelet basis with 3 vanishing moments for all the test functions. The coarsest decomposition level was J_0 = 3, which matches J_0 = ⌊log2(log(n)) + 1⌋ suggested by Antoniadis et al. (2001). Reconstruction of the theoretical signal was measured by the average mean squared error (AMSE), calculated as

    AMSE = (1/(Mn)) Σ_{k=1}^{M} Σ_{i=1}^{n} ( f̂_k(t_i) − f(t_i) )²,

where M is the number of simulation runs and f(t_i), i = 1, ..., n, are the known values of the test functions considered. We denote by f̂_k(t_i), i = 1, ..., n, the estimator from the k-th simulation run. Note that in each of these simulation runs we perform 10,000 Gibbs iterations to get the estimators θ̂_jk. We set M = 100.

The results are summarized in Table 1. They convey that the proposed CGSWS method outperforms both competing estimators for the majority of the test scenarios, and in most other cases it is very close in performance to the superior method. The improvement is most pronounced at small sample sizes (n = 256) and for the test functions Bumps and Heavisine. This result confirms the adaptiveness of the method and the advantage of using a heavy-tailed prior as the prior distribution for the location of the wavelet coefficients. Note, however, that the computational cost of the algorithm is higher than for the competitors. The CEB-Posterior mean method can be a good compromise in terms of performance and computational efficiency.
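The AMSE criterion above is a plain average of squared errors over runs and sample points:

```python
import numpy as np

def amse(f_hat_runs, f_true):
    """AMSE = (1/(M n)) * sum_{k=1}^{M} sum_{i=1}^{n} (f_hat_k(t_i) - f(t_i))^2."""
    return np.mean((np.asarray(f_hat_runs) - np.asarray(f_true)) ** 2)
```

For example, with two runs over two points, `amse([[1.0, 1.0], [0.0, 0.0]], [0.0, 0.0])` returns 0.5.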

Table 1 AMSE of the CGSWS method compared to estimators CMWS-Hard and CEB-Posterior mean (* marks the smallest AMSE for each test scenario).

Signal     N     Method   SNR=3    SNR=5    SNR=7    SNR=10
Blocks     256   CGSWS    0.4293*  0.4533*  0.4610*  0.4499*
                 CMWS-H   0.4929   0.5476   0.5490   0.5021
                 CEB-PM   0.4343   0.4675   0.4715   0.4547
           512   CGSWS    0.2954*  0.3180*  0.3138   0.3051
                 CMWS-H   0.3481   0.3627   0.3457   0.3166
                 CEB-PM   0.3028   0.3202   0.3126*  0.2995*
           1024  CGSWS    0.1991   0.2013   0.1991   0.1924
                 CMWS-H   0.2372   0.2230   0.2098   0.1944
                 CEB-PM   0.1980*  0.1988*  0.1947*  0.1879*
Bumps      256   CGSWS    0.4631*  0.4825*  0.4946*  0.5181*
                 CMWS-H   0.5972   0.5946   0.5853   0.5809
                 CEB-PM   0.4855   0.4996   0.5120   0.5390
           512   CGSWS    0.3273*  0.3274*  0.3235*  0.3203*
                 CMWS-H   0.3983   0.3760   0.3538   0.3317
                 CEB-PM   0.3295   0.3315   0.3287   0.3228
           1024  CGSWS    0.1965   0.1970*  0.2009*  0.2090*
                 CMWS-H   0.2137   0.2151   0.2134   0.2223
                 CEB-PM   0.1919*  0.1986   0.2034   0.2122
Doppler    256   CGSWS    0.3093*  0.3119*  0.3251*  0.3619*
                 CMWS-H   0.3332   0.3351   0.3644   0.4000
                 CEB-PM   0.3137   0.3158   0.3351   0.3723
           512   CGSWS    0.1854   0.2073   0.2052   0.2095*
                 CMWS-H   0.2048   0.2217   0.2192   0.2289
                 CEB-PM   0.1845*  0.2007*  0.2035*  0.2132
           1024  CGSWS    0.1034*  0.1209*  0.1310   0.1467
                 CMWS-H   0.1160   0.1329   0.1432   0.1601
                 CEB-PM   0.1087   0.1225   0.1302*  0.1419*
Heavisine  256   CGSWS    0.1198*  0.1640*  0.1900*  0.2030*
                 CMWS-H   0.1547   0.2075   0.2144   0.2198
                 CEB-PM   0.1338   0.1838   0.2098   0.2188
           512   CGSWS    0.0799*  0.1050*  0.1258*  0.1429
                 CMWS-H   0.0959   0.1202   0.1357   0.1371*
                 CEB-PM   0.0881   0.1167   0.1340   0.1427
           1024  CGSWS    0.0487*  0.0650*  0.0747*  0.0843
                 CMWS-H   0.0557   0.0746   0.0793   0.0791*
                 CEB-PM   0.0564   0.0730   0.0794   0.0835

5 Application to Inductance Plethysmography Data

For illustration we apply the described CGSWS method to a real-life data set from anesthesiology collected by inductance plethysmography. The recordings were made by the Department of Anaesthesia at the Bristol Royal Infirmary and represent a measure of the flow of air during breathing. The data set was analyzed by several authors, for example Nason (1996) and Abramovich et al. (1998, 2002), where more information about the data can be found.

Figure 1 shows a section of the plethysmograph recording lasting approximately 80 s (n = 4096 observations), while Figure 2 shows the reconstruction of the signal with the CGSWS method. In the reconstruction process we applied N = 10,000 iterations of the Gibbs sampler, of which the first 5,000 were burn-in. The aim of smoothing was to preserve features such as peak heights while eliminating spurious high-frequency variation. The result provided by the proposed method satisfies these requirements, providing a very smooth reconstruction. Abramovich et al. (2002) report the height of the first peak while analyzing this data set. In our case the height is 0.8342, which is quite close to the result 0.8433 obtained by Abramovich et al. (2002), and compares favorably with the results obtained by the other established methods analyzed in their paper.

6 Conclusions

In this paper we proposed the Complex Gibbs Sampling Wavelet Smoother (CGSWS), a complex wavelet-based method for nonparametric regression. A fully Bayesian approach was taken, in which a hierarchical model was formulated that accounts for the uncertainty of the prior parameters by placing hyperpriors on them. A mixture prior was specified on the complex



Figure 1 A section of inductance plethysmography data with n = 4096.


Figure 2 Reconstruction of the inductance plethysmography data by CGSWS.


wavelet coefficients, with a bivariate double exponential spread distribution to account for the large wavelet coefficients. Since all the full conditional distributions were available in explicit distributional form, an efficient Gibbs sampling estimation procedure was proposed. The CGSWS method provided excellent denoising performance, which was demonstrated by simulations on well-known test functions and by comparison to a well-established wavelet denoising method that uses complex wavelets. The methodology was also illustrated on a real-life data set from inductance plethysmography, where the proposed method performed well in both smoothing the data and preserving the important features of the phenomenon.


7 Appendix

In this Appendix we provide some results used in setting up the Gibbs sampling algorithm for model (9). To derive the full conditional distribution of a parameter of interest, we start with the joint distribution of all the parameters and collect the terms which contain the desired parameter. Denote

    d = {d_jk : j = J_0, ..., log2(n) − 1, k = 0, ..., 2^j − 1},
    θ = {θ_jk : j = J_0, ..., log2(n) − 1, k = 0, ..., 2^j − 1},

where d_jk and θ_jk are bivariate components. Similarly, denote

    z = {z_jk : j = J_0, ..., log2(n) − 1, k = 0, ..., 2^j − 1},
    ǫ = {ǫ_j : j = J_0, ..., log2(n) − 1},
    v = {v_jk : j = J_0, ..., log2(n) − 1, k = 0, ..., 2^j − 1},

and C = {C_j : j = J_0, ..., log2(n) − 1} the vector containing the matrices C_j for the resolution levels j. The joint distribution of the data and parameters is

    f(d, θ, z, ǫ, v, σ², C) =
        ∏_{j,k} 1/(2π|σ²Σ_j|^{1/2}) exp{ −(1/(2σ²)) (d_jk − θ_jk)′ Σ_j^{−1} (d_jk − θ_jk) }
        · 1/(Γ(a)b^a) (σ²)^{−a−1} e^{−1/(σ²b)}
        · ∏_{j,k} [ (1 − z_jk)δ_0 + z_jk 1/(2π|v_jk C_j|^{1/2}) exp{ −(1/(2v_jk)) θ_jk′ C_j^{−1} θ_jk } ]
        · ∏_{j,k} ǫ_j^{z_jk} (1 − ǫ_j)^{(1−z_jk)} · ∏_j 1{0 ≤ ǫ_j ≤ 1}
        · ∏_{j,k} 1/(Γ(3/2)8^{3/2}) v_jk^{3/2−1} e^{−v_jk/8}
        · ∏_j |C_j|^{−(w+d+1)/2} exp{ −(1/2) tr(A_j C_j^{−1}) },

where d = 2 in the exponent of |C_j| denotes the dimension.


From the joint distribution, the full conditional distribution of σ² is

    p(σ² | θ, d) ∝ ∏_{j,k} (σ²)^{−1} exp{ −(1/(2σ²)) (d_jk − θ_jk)′ Σ_j^{−1} (d_jk − θ_jk) } · (σ²)^{−a−1} e^{−1/(σ²b)}
                 = (σ²)^{−a−n−1} exp{ −(1/σ²) [ 1/b + (1/2) Σ_{j,k} (d_jk − θ_jk)′ Σ_j^{−1} (d_jk − θ_jk) ] }
                 = IG( a + n, [ 1/b + (1/2) Σ_{j,k} (d_jk − θ_jk)′ Σ_j^{−1} (d_jk − θ_jk) ]^{−1} ).

The conditional distribution of z_jk remains Bernoulli, with posterior success probability

    P(z_jk = 1 | d_jk, σ², ǫ_j, v_jk, C_j) = ǫ_j m(d_jk | σ², v_jk, C_j) / [ (1 − ǫ_j) f(d_jk | 0, σ²) + ǫ_j m(d_jk | σ², v_jk, C_j) ],

where

    f(d_jk | 0, σ²) = 1/(2π|σ²Σ_j|^{1/2}) exp{ −(1/(2σ²)) d_jk′ Σ_j^{−1} d_jk },
    m(d_jk | σ², v_jk, C_j) = 1/(2π|σ²Σ_j + v_jk C_j|^{1/2}) exp{ −(1/2) d_jk′ (σ²Σ_j + v_jk C_j)^{−1} d_jk }.

The marginal distribution m(d_jk | σ², v_jk, C_j) is a bivariate normal distribution with zero mean and covariance matrix σ²Σ_j + v_jk C_j. This follows from the conjugate multivariate normal-normal structure (Lindley and Smith, 1972). The full conditional distribution of ǫ_j is

    p(ǫ_j | z) ∝ [ ∏_k ǫ_j^{z_jk} (1 − ǫ_j)^{(1−z_jk)} ] 1{0 ≤ ǫ_j ≤ 1}
               = ǫ_j^{Σ_k z_jk} (1 − ǫ_j)^{Σ_k (1−z_jk)}
               = Be( 1 + Σ_k z_jk, 1 + Σ_k (1 − z_jk) ).


The full conditional distribution of θ_jk is

    p(θ_jk | d_jk, z_jk, σ², v_jk, C_j) ∝ exp{ −(1/(2σ²)) (d_jk − θ_jk)′ Σ_j^{−1} (d_jk − θ_jk) }
        · [ (1 − z_jk)δ_0 + z_jk 1/(2π|v_jk C_j|^{1/2}) exp{ −(1/(2v_jk)) θ_jk′ C_j^{−1} θ_jk } ]
      = δ_0(θ_jk)  if z_jk = 0,
      = f(θ_jk | d_jk, σ², v_jk, C_j)  if z_jk = 1,

where

    f(θ_jk | d_jk, σ², v_jk, C_j) = 1/(2π|Σ̃_jk|^{1/2}) exp{ −(1/2) (θ_jk − μ̃_jk)′ Σ̃_jk^{−1} (θ_jk − μ̃_jk) },
    μ̃_jk = Σ̃_jk Σ_j^{−1} d_jk / σ²,
    Σ̃_jk = ( Σ_j^{−1}/σ² + C_j^{−1}/v_jk )^{−1}.

Derivation of f(θ_jk | d_jk, σ², v_jk, C_j) is also a standard result, contained for example in Lindley and Smith (1972), and was used in the wavelet shrinkage context by Barber and Nason (2004).

The full conditional distribution of v_jk is proportional to

    p(v_jk | θ_jk, z_jk, C_j) ∝ [ (1 − z_jk)δ_0 + z_jk 1/(2π|v_jk C_j|^{1/2}) exp{ −(1/(2v_jk)) θ_jk′ C_j^{−1} θ_jk } ] · v_jk^{3/2−1} exp{ −v_jk/8 }.

For z_jk = 0, this becomes

    p(v_jk | θ_jk, z_jk = 0, C_j) ∝ v_jk^{3/2−1} exp{ −v_jk/8 } = Ga(3/2, 8),

and when z_jk = 1, it becomes

    p(v_jk | θ_jk, z_jk = 1, C_j) ∝ v_jk^{−1} exp{ −(1/(2v_jk)) θ_jk′ C_j^{−1} θ_jk } · v_jk^{3/2−1} exp{ −v_jk/8 }
        = v_jk^{1/2−1} exp{ −(1/2) [ (1/4) v_jk + (θ_jk′ C_j^{−1} θ_jk) / v_jk ] }
        = GIG( 1/4, θ_jk′ C_j^{−1} θ_jk, 1/2 ).


Here GIG(a, b, p) denotes the generalized inverse Gaussian distribution (Johnson et al., 1994, p. 284) with probability density function

    f(x | a, b, p) = (a/b)^{p/2} / (2K_p(√(ab))) · x^{p−1} e^{−(ax + b/x)/2},    x > 0; a, b > 0,

where K_p denotes the modified Bessel function of the third kind. Finally, the full conditional distribution of C_j is given as

    p(C_j | θ_j, z_j, v_j) ∝ ∏_k [ (1 − z_jk)δ_0 + z_jk 1/(2π|v_jk C_j|^{1/2}) exp{ −(1/(2v_jk)) θ_jk′ C_j^{−1} θ_jk } ]
        · |C_j|^{−(w+d+1)/2} exp{ −(1/2) tr(A_j C_j^{−1}) }
      ∝ |C_j|^{−(Σ_k z_jk + w + d + 1)/2} exp{ −(1/2) tr[ ( A_j + Σ_k z_jk θ_jk θ_jk′ / v_jk ) C_j^{−1} ] }
      = IW( A_j + Σ_k z_jk θ_jk θ_jk′ / v_jk,  w + Σ_k z_jk ),

where IW denotes the inverse Wishart distribution.


References

Abramovich, F., Besbeas, P., and Sapatinas, T. (2002). Empirical Bayes approach to block wavelet function estimation. Computational Statistics and Data Analysis 39, 435–451.
Abramovich, F., Sapatinas, T., and Silverman, B.W. (1998). Wavelet thresholding via a Bayesian approach. Journal of the Royal Statistical Society, Ser. B 60, 725–749.
Antoniadis, A., Bigot, J., and Sapatinas, T. (2001). Wavelet estimators in nonparametric regression: a comparative simulation study. Journal of Statistical Software 6, 1–83.
Barber, S. and Nason, G.P. (2004). Real nonparametric regression using complex wavelets. Journal of the Royal Statistical Society, Ser. B 66, 927–939.
Casella, G. and George, E.I. (1992). Explaining the Gibbs sampler. The American Statistician 46, 167–174.
Dagpunar, J.S. (1989). An easily implemented generalized inverse Gaussian generator. Communications in Statistics - Simulation and Computation 18, 703–710.
Donoho, D.L. and Johnstone, I.M. (1994). Ideal spatial adaptation by wavelet shrinkage. Biometrika 81, 425–455.
Flandrin, P. (1992). Wavelet analysis and synthesis of fractional Brownian motion. IEEE Transactions on Information Theory 38, 910–917.
Gomez, E., Gomez-Villegas, M.A., and Marin, J.M. (1998). A multivariate generalization of the power exponential family of distributions. Communications in Statistics - Theory and Methods 27, 589–600.
Gomez, E., Gomez-Villegas, M.A., and Marin, J.M. (2008). Multivariate exponential power distributions as mixtures of normal distributions with Bayesian applications. Communications in Statistics - Theory and Methods 37, 972–985.
Johnson, N.L., Kotz, S., and Balakrishnan, N. (1994). Continuous Univariate Distributions, Volume 1, 2nd ed. Wiley-Interscience.
Johnstone, I.M. and Silverman, B.W. (2005). Empirical Bayes selection of wavelet thresholds. Annals of Statistics 33, 1700–1752.
Lina, J.M. (1997). Image processing with complex Daubechies wavelets. Journal of Mathematical Imaging and Vision 7, 211–233.
Lina, J.M. and Macgibbon, B. (1997). Non-linear shrinkage estimation with complex Daubechies wavelets. Proceedings of SPIE, Wavelet Applications in Signal and Image Processing V, 67–79.
Lina, J.M. and Mayrand, M. (1995). Complex Daubechies wavelets. Applied and Computational Harmonic Analysis 2, 219–229.
Lina, J.M., Turcotte, P., and Goulard, B. (1999). Complex dyadic multiresolution analyses. Advances in Imaging and Electron Physics 109, 163–197.
Lindley, D.V. and Smith, A.F.M. (1972). Bayes estimates for the linear model. Journal of the Royal Statistical Society, Ser. B 34, 1–41.
Mallat, S. (1989). A theory for multiresolution signal decomposition: the wavelet representation. IEEE Transactions on Pattern Analysis and Machine Intelligence 11, 674–693.
Nason, G.P. (1996). Wavelet shrinkage using cross-validation. Journal of the Royal Statistical Society, Ser. B 58, 463–479.
Reményi, N. and Vidakovic, B. (2012). Bayesian wavelet shrinkage strategies: new developments. In Multiscale Signal Analysis and Modeling, Lecture Notes in Electrical Engineering, Editors X. Shen and A. Zayed. Springer-Verlag, New York, 317–346.
Robert, C.P. and Casella, G. (1999). Monte Carlo Statistical Methods. Springer-Verlag, New York.
Vidakovic, B. (1998). Nonlinear wavelet shrinkage with Bayes rules and Bayes factors. Journal of the American Statistical Association 93, 173–179.
Vidakovic, B. and Ruggeri, F. (2001). BAMS method: theory and simulations. Sankhyā, Ser. B 63, 234–249.