Introduction to distributed speech enhancement algorithms for ad hoc microphone arrays and wireless acoustic sensor networks Part II: DANSE-based distributed speech enhancement in WASNs

Sharon Gannot¹ and Alexander Bertrand²
¹ Faculty of Engineering, Bar-Ilan University, Israel
² KU Leuven, E.E. Department ESAT-STADIUS, Belgium

EUSIPCO 2013, Marrakech, Morocco

Outline
1. Introduction and motivation
2. The DANSE algorithm in fully-connected WASNs
3. DANSE in WASNs with a tree topology (T-DANSE)
4. LCMV-based DANSE (LC-DANSE)
5. Bibliography


Introduction and motivation

Ad hoc microphone arrays
- No tedious calibration
- Improved spatial resolution and sound field sampling
- High probability of finding microphones close to a relevant sound source
- Possibility to place (arrays of) microphones at strategic locations


Wireless acoustic sensor networks (WASNs)
Wired ad hoc arrays:
- Tedious deployment
- Unaesthetic
- Not flexible (e.g., adding/removing/repositioning microphones)
- Not suitable for wearable or mobile applications (e.g., hearing aids)

⇒ Aim for wireless ad hoc microphone arrays, a.k.a. wireless acoustic sensor networks (WASNs), due to their similarities with wireless sensor networks.


Wireless acoustic sensor networks (WASNs)
Possible applications:
- Cooperative hearing devices (e.g., binaural hearing aids)
- Hearing devices supported by external microphones or other audio devices
- Domotics, smart homes, and ambient intelligence
- Surveillance
- ...


Wireless acoustic sensor networks (WASNs)
Challenges:
- Wireless link delay (e.g., under real-time constraints)
- Different sampling clocks (see also Part III)
- The 'data deluge' (see next slide)


WASNs and the data deluge
The 'data deluge' [Baraniuk, 2011]
WASNs generate a massive amount of data:
- Requires a large communication bandwidth
- Sensor nodes consume a large amount of transmission energy
- Requires high computing power at the receiver end (fusion center)
⇒ A big problem, in particular when battery-powered (even in small-scale WASNs such as binaural hearing aids)


Distributed signal processing in WASNs
Tackle the data deluge by physically shifting the signal processing to the microphone nodes themselves.
Goals:
- Minimize data exchange
- Distribute the computational burden over all nodes
- Let nodes cooperate in signal processing task(s)
Algorithm design is challenging (e.g., no access to the full correlation matrix).


Distributed signal processing
The field of distributed signal processing:
- Mainly driven by the concept of wireless sensor networks
- Theory and methods often build upon results from other fields, e.g.:
  - Parallel and distributed computing for multi-core processors
  - Modelling and control of multi-agent systems
  - Game theory
  - Graph theory

Two fundamentally different approaches:
1. Distributed parameter estimation (DPE) techniques (e.g., diffusion [Sayed et al., 2013], consensus [Olfati-Saber et al., 2007], gossip [Shah, 2009], ...)
2. Distributed signal estimation (DSE) techniques (e.g., the DANSE family, distributed/cooperative beamforming, distributed/remote source coding, ...)


Distributed parameter estimation (DPE)
General script:
1. Extract an initial parameter vector estimate from the sensor observations
2. Repeat until convergence (or another stop criterion):
   - Share the intermediate estimate with the neighbors
   - Refine the intermediate estimate using the estimates received from the neighbors
Note: the target parameter vector is fixed over iterations, or varies only slowly.
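As a toy instance of this script, consider consensus averaging: each node starts from a local measurement and repeatedly replaces its estimate by a weighted average of its own and its neighbours' estimates, and all estimates converge to the network-wide mean. A minimal sketch (the ring network and Metropolis weights are illustrative assumptions, not from the slides):

```python
import numpy as np

N = 5
rng = np.random.default_rng(0)
x = rng.standard_normal(N)          # local measurements = initial estimates
target = x.mean()                   # what consensus should converge to

# Metropolis mixing matrix for a ring: each node averages with its 2 neighbours
A = np.zeros((N, N))
for k in range(N):
    for q in ((k - 1) % N, (k + 1) % N):
        A[k, q] = 1.0 / 3.0         # 1 / (1 + max degree of the two nodes)
    A[k, k] = 1.0 - A[k].sum()      # remaining weight on the node itself

for _ in range(200):                # iterate: share with neighbours, then refine
    x = A @ x                       # each node mixes its neighbours' estimates

# all nodes now agree (approximately) on the network-wide average
```

Note that the target (the average) is fixed over iterations, exactly as the script above requires.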

[Diagram: node k receives estimates w_l^i and w_m^i from its neighbors, refines its own estimate w_k^i to w_k^{i+1} using its sensor signal, and transmits w_k^{i+1} to the other nodes.]

DPE for speech enhancement in WASNs
Collect L microphone signal samples at each node and iterate on the L-dimensional vector until the estimate converges; then collect L new samples, etc.
+ DPE techniques usually have no network topology constraints
– Large communication cost: the same L samples are re-estimated and re-transmitted many times (the time index is frozen until convergence)
– The communication cost depends on the convergence speed (and hence also on the network size)
– Not time-recursive: full reset between blocks

See, e.g., [Zeng and Hendriks, 2012, Heusdens et al., 2012]

Distributed signal estimation (DSE)
- Avoid iterations over the signal sample estimates themselves
  ⇒ the in-network data flow and the iterative process are uncoupled
- Instead: time-recursive iterative refinement of the in-network fusion rules
- Assumption: the spatial coherence of the sensor signals is fixed over iterations (or varies slowly)
[Diagram: node k fuses its sensor signal y_k and the received signals z_l, z_m through fusion rule F^i into z_k, which is transmitted to the other nodes.]


DSE for speech enhancement in WASNs?
No iterative refinement of sample estimates:
+ Each block of samples is transmitted only once
+ Fixed per-node communication cost, independent of convergence speed and network size
– Price to pay: the specific order in the data flow generally requires topology constraints (star, tree, fully-connected, ...)

See, e.g., [Doclo et al., 2009, Bertrand and Moonen, 2009, Markovich-Golan et al., 2010, Markovich-Golan et al., 2013, Lawin-Ore and Doclo, 2011, Himawan et al., 2011, Hioka and Kleijn, 2011, Szurley et al., 2013]

The DANSE algorithm in fully-connected WASNs


Multi-channel Wiener filtering
Preliminary case study: binaural hearing aids [Doclo and Moonen, 2002]
- Goal: estimate the speech component at a reference microphone
- Optimal filter-and-sum operation based on the input statistics


Preliminary case study: binaural hearing aids
- Two hearing aids (HAs) with a wireless link (= a 2-node WASN)
- Goal: compute the MWF including the extra signal(s) from the other HA
- Each HA uses a local microphone as reference to preserve the binaural cues of the target speaker


Preliminary case study: binaural hearing aids
Problem statement [Doclo et al., 2009, Srinivasan and Den Brinker, 2009]
- The wireless link only allows the exchange of 1 signal (in duplex)
- Which signal should be transmitted?


Preliminary case study: binaural hearing aids
Result from [Doclo et al., 2009]:
- Copy part of the local MWF coefficients and use it as fusion rule to generate the transmit signal (= optimal for a single target speaker)
- Iterative computation (details omitted, see later)
- This result is extended to more general WASN scenarios in this tutorial
PS: a similar result exists for the binaural MVDR beamformer [Markovich-Golan et al., 2010]

DANSE1

DANSE in fully-connected WASNs
Assumptions:
- Multiple mics per node (array or hierarchical architecture)
- The network is fully connected (= the easiest case; extended to multi-hop topologies later)
- Each node is a data sink, and requires a node-specific estimate of the target source(s) to preserve spatial cues
⇒ Distributed adaptive node-specific signal estimation (DANSE)


Notation
- WASN with N nodes: J = {1, ..., N}
- Node k ∈ J collects an M_k-channel microphone signal y_k(ω, t), represented in the short-time Fourier transform (STFT) domain
- (ω, t) is often omitted in the sequel for conciseness; keep in mind that all operations are performed in the STFT domain
- Additive noise: y_k = d_k + n_k, where n_k is the noise and d_k the desired speech signal
- The stacked vector y = [y_1^T ... y_N^T]^T defines an M-channel signal with M = Σ_{k∈J} M_k; similarly for d and n, i.e., y = d + n
- y_km denotes the m-th microphone of node k, and e_km = [0 ... 0 1 0 ... 0]^T is a selection vector such that y_km = e_km^T y


Centralized per-node MWFs
At each node: choose the 1st mic as reference microphone (w.l.o.g.)
Assume all nodes have access to all signals: node k ∈ J computes
  d̂_k1 = ŵ_k^H y
where ŵ_k is node k's MWF (^H denotes the conjugate transpose) and
  ŵ_k = arg min_{w_k} E{|d_k1 − w_k^H y|²} = R_yy^{−1} R_dd e_k1
where R_yy = E{y y^H} and R_dd = E{d d^H} = R_yy − R_nn (estimated using a VAD).
PS: we only focus on the MWF, but this can easily be extended to the SDW-MWF.
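These formulas can be sanity-checked numerically in a small batch sketch (the synthetic single-source scenario and all variable names are illustrative assumptions): estimate R_yy from speech-plus-noise frames, R_nn from the noise-only frames (the role of the VAD), and apply ŵ_k = R_yy^{−1} R_dd e_k1:

```python
import numpy as np

rng = np.random.default_rng(0)
M, T = 6, 50000                     # microphones, STFT frames (one frequency bin)

a = rng.standard_normal(M) + 1j * rng.standard_normal(M)   # steering vector
s = rng.standard_normal(T) + 1j * rng.standard_normal(T)   # desired source
d = np.outer(a, s)                                         # desired component d
n = 0.8 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))
y = d + n                                                  # microphone signals

Ryy = y @ y.conj().T / T            # speech-plus-noise statistics
Rnn = n @ n.conj().T / T            # noise statistics (noise-only frames / VAD)
Rdd = Ryy - Rnn                     # speech statistics, as on the slide

e1 = np.zeros(M); e1[0] = 1.0       # reference microphone = mic 1
w_hat = np.linalg.solve(Ryy, Rdd @ e1)   # MWF:  Ryy^{-1} Rdd e_k1

d_hat = w_hat.conj() @ y            # filtered estimate of d_k1
mse_mwf = np.mean(np.abs(d_hat - d[0]) ** 2)
mse_raw = np.mean(np.abs(y[0] - d[0]) ** 2)   # unfiltered reference mic
```

With 6 microphones the filtered estimate has a much lower MSE than the raw reference microphone, which is the whole point of the filter-and-sum operation.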


DANSE signal exchange
Node k broadcasts the fused signal z_k to the other nodes:
  z_k^i = f_k^{iH} y_k
where f_k^i is an M_k-dimensional fusion vector and i is an iteration index.
Data compression: the M_k-channel signal y_k → the single-channel signal z_k^i.
Between iterations i and i+1, node k collects samples of
  ỹ_k^i = [y_k^T  z_{−k}^{iT}]^T = d̃_k^i + ñ_k^i
with z_{−k}^i = [z_1^i ... z_{k−1}^i  z_{k+1}^i ... z_N^i]^T.


DANSE per-node MWFs
Node k computes the local MWF ṽ_k^i that minimizes
  min_{ṽ_k} E{|d_k1 − ṽ_k^H ỹ_k^i|²}.
This yields
  ṽ_k^i = (R^i_{ỹ_k ỹ_k})^{−1} R^i_{d̃_k d̃_k} e_1
where e_1 = [1 0 ... 0]^T, R^i_{ỹ_k ỹ_k} = E{ỹ_k^i ỹ_k^{iH}}, and R^i_{d̃_k d̃_k} = E{d̃_k^i d̃_k^{iH}}.
With a VAD: R^i_{d̃_k d̃_k} = R^i_{ỹ_k ỹ_k} − R^i_{ñ_k ñ_k} (PS: nodes can share VAD information).
Between iterations i and i+1, the estimated speech signal at node k is
  d̄_k1^i = ṽ_k^{iH} ỹ_k^i


Equivalent network-wide filter?
How does the equivalent network-wide filter w_k^i look?
  d̄_k1^i = ṽ_k^{iH} ỹ_k^i = w_k^{iH} y  ⇒  w_k^i = ?


Equivalent network-wide filter?
Local MWF ↔ network-wide filter (3-node example, stacked per node):
  w_1^i = [w_11^i; g_12^i f_2^i; g_13^i f_3^i],  w_2^i = [g_21^i f_1^i; w_22^i; g_23^i f_3^i],  w_3^i = [g_31^i f_1^i; g_32^i f_2^i; w_33^i]
where g_kq^i is the coefficient that node k applies to the z_q^i signal received from node q.
In general:
  w_k^i = [g_k1^i f_1^i; ...; w_kk^i; ...; g_kN^i f_N^i]


DANSE parametrization
Choice of the f_k^i's: DANSE sets f_k^i = w_kk^i, i.e., w_kk^i serves both as compressor and estimator:
  w_k^i = [g_k1^i w_11^i; ...; w_kk^i; ...; g_kN^i w_NN^i]   (g_kk^i = 1, by definition)
PS: chicken-and-egg problem: we need samples of the z_k signals to compute the local MWFs, but we need the MWFs to compute the samples of the z_k's.


DANSE parametrization
Example of the DANSE parametrization (3-node case):
  w_1^i = [w_11^i; g_12^i w_22^i; g_13^i w_33^i],  w_2^i = [g_21^i w_11^i; w_22^i; g_23^i w_33^i],  w_3^i = [g_31^i w_11^i; g_32^i w_22^i; w_33^i]
PS: similarly to z_{−k}^i, introduce the notation
  g_{k,−k}^i = [g_k1^i ... g_{k,k−1}^i  g_{k,k+1}^i ... g_kN^i]^T.


Algorithm description (for a fixed frequency index ω)
DANSE₁ algorithm [Bertrand and Moonen, 2010a]
1. Initialize: i ← 0, u ← 1; initialize w_kk^0 and g_{k,−k}^0 with random vectors, ∀ k ∈ J.
2. Each node k ∈ J performs the following operation cycle:
   - Collect B new sensor observations y_k(ω, iB + n), n = 0 ... B−1.
   - Compress these M_k-dimensional observations to
       z_k^i(ω, iB + n) = w_kk^{iH} y_k(ω, iB + n), n = 0 ... B−1.
   - Broadcast the B samples of z_k^i to the other nodes.
   - Collect B samples of z_{−k}^i from the other nodes.
   - Compute the new estimator parameters w_kk^{i+1} and g_{k,−k}^{i+1} (see next slide).
   - Compute B samples of the speech estimate (for n = 0 ... B−1):
       d̄_k1^i(ω, iB + n) = w_kk^{i+1 H} y_k(ω, iB + n) + g_{k,−k}^{i+1 H} z_{−k}^i(ω, iB + n).
3. Set i ← i + 1, u ← (u mod N) + 1, and return to step 2.


Algorithm description (continued)
DANSE₁ algorithm: computation of w_kk^{i+1} and g_{k,−k}^{i+1}
Node u re-estimates R^i_{ỹ_u ỹ_u} and R^i_{d̃_u d̃_u}, based on the collected samples z_{−u}^i(ω, iB + n) and y_u(ω, iB + n), n = 0 ... B−1.
∀ k ∈ J, update:
  [w_kk^{i+1}; g_{k,−k}^{i+1}] = (R^i_{ỹ_k ỹ_k})^{−1} R^i_{d̃_k d̃_k} e_1   if k = u
  [w_kk^{i+1}; g_{k,−k}^{i+1}] = [w_kk^i; g_{k,−k}^i]                     if k ≠ u
Note:
- Sequential round-robin updating
- Several DANSE algorithms run in parallel (one for each frequency bin ω)
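The whole scheme can be checked end-to-end in a small batch simulation (a sketch under illustrative assumptions: one frequency bin, a synthetic single-source scenario, and exact long-term statistics instead of per-block estimates). Each round-robin update solves the local MWF on ỹ_k = [y_k; z_{−k}]; after convergence, the network-wide filter of a node matches its centralized MWF performance:

```python
import numpy as np

rng = np.random.default_rng(1)
N, Mk, T = 3, 2, 60000              # nodes, mics per node, frames
M = N * Mk

a = rng.standard_normal(M) + 1j * rng.standard_normal(M)   # single-source steering
s = rng.standard_normal(T) + 1j * rng.standard_normal(T)
d = np.outer(a, s)
n = 0.8 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))
y = d + n

def mwf(Ry, Rd, ref):
    e = np.zeros(Ry.shape[0]); e[ref] = 1.0
    return np.linalg.solve(Ry, Rd @ e)

Ryy = y @ y.conj().T / T
Rnn = n @ n.conj().T / T
w_hat0 = mwf(Ryy, Ryy - Rnn, 0)     # centralized MWF of node 0 (reference mic 1)

# DANSE1: per-node fusion vectors w_kk and coefficients g_kq (g_kk = 1)
W = [rng.standard_normal(Mk) + 1j * rng.standard_normal(Mk) for _ in range(N)]
G = np.ones((N, N), dtype=complex)

for i in range(30):                 # sequential round-robin updates
    u = i % N
    z = np.stack([W[q].conj() @ y[q*Mk:(q+1)*Mk] for q in range(N)])   # fused signals
    zn = np.stack([W[q].conj() @ n[q*Mk:(q+1)*Mk] for q in range(N)])  # their noise part
    others = [q for q in range(N) if q != u]
    yt = np.vstack([y[u*Mk:(u+1)*Mk], z[others]])   # local inputs  ỹ_u
    nt = np.vstack([n[u*Mk:(u+1)*Mk], zn[others]])  # noise part    ñ_u
    Ry = yt @ yt.conj().T / T
    Rn = nt @ nt.conj().T / T
    v = mwf(Ry, Ry - Rn, 0)         # [w_uu; g_{u,-u}] = Rỹỹ^{-1} Rd̃d̃ e1
    W[u], G[u, others] = v[:Mk], v[Mk:]

# network-wide filter of node 0:  [w_00; g_01 w_11; g_02 w_22]
w0 = np.concatenate([G[0, q] * W[q] for q in range(N)])

mse = lambda w: np.mean(np.abs(w.conj() @ y - d[0]) ** 2)
```

After a handful of round-robin cycles, mse(w0) is essentially equal to mse(w_hat0), illustrating the optimality result stated later in the tutorial.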


Convergence and optimality of DANSE

Convergence and optimality of DANSE?
Convergence: does DANSE converge to an equilibrium? ⇒ Does lim_{i→∞} w_k^i exist, ∀ k ∈ J?
Optimality: if DANSE converges to an equilibrium, does it attain the same estimation performance as the centralized MWF? ⇒ Is lim_{i→∞} w_k^i = ŵ_k, ∀ k ∈ J?


1st result
First question: are the ŵ_k, ∀ k ∈ J, in the solution space of DANSE?

Theorem. In the case of a single desired speech source, and if all nodes in J can 'hear' this source, then the solution space defined by the parametrization of DANSE contains the optimal (centralized) MWFs ŵ_k, ∀ k ∈ J.

Proof outline:
Single desired speech source: ∀ k ∈ J: d_k(ω, t) = a_k(ω) s(ω, t), where s(ω, t) is the desired speech source signal and the steering vector a_k(ω) contains the M_k transfer functions from the source to the M_k microphones.
Let a = [a_1^T ... a_N^T]^T; then d(ω, t) = a(ω) s(ω, t).


Proof (continued)
Centralized MWF at node k:
  ŵ_k = R_yy^{−1} R_dd e_k1
      = R_yy^{−1} a E{|s|²} a^H e_k1
      = R_yy^{−1} a · a_k1* E{|s|²}
It follows that ∀ k, q ∈ J:
  ŵ_k = α_kq ŵ_q,  with α_kq = a_k1* / a_q1*.
In DANSE: set g_kq^i = α_kq and w_kk^i = ŵ_kk, ∀ k, q ∈ J:
  ∀ k ∈ J: w_k^i = [g_k1^i w_11^i; ...; g_kN^i w_NN^i] = [α_k1 ŵ_11; ...; α_kN ŵ_NN] = ŵ_k.


2nd result
Theorem (Convergence and optimality of DANSE₁ [Bertrand and Moonen, 2010a]). In the case of a single desired speech source, and if a_k1 ≠ 0, ∀ k ∈ J, then lim_{i→∞} w_k^i = ŵ_k, ∀ k ∈ J.
In other words: each node obtains the speech estimate of its corresponding centralized MWF, as if it had access to all the microphone signals. (proof omitted)


DANSE vs. centralized MWF
Advantages of DANSE:
- Reduced communication bandwidth and reduced transmission energy
- All nodes contribute/cooperate in the processing ⇒ small per-node processing power
- Inherent dimensionality reduction ⇒ many small problems instead of a single large problem ⇒ often a smaller overall processing power (due to the O(M²) or O(M³) complexity)

Disadvantages of DANSE:
- Reduced tracking performance due to the iterative nature (per-node tracking can be improved [Szurley et al., 2013])
- Errors ripple through to the other nodes (addressed later)


DANSEQ

Multiple target speakers
What if the desired signal d_k1 is a mixture of Q desired speech sources?
⇒ ŵ_k = α_kq ŵ_q no longer holds (see next slide)
⇒ ŵ_k is not in the solution space of DANSE


Multiple target speakers
Centralized MWF at node k (for Q = 2):
  ŵ_k = R_yy^{−1} R_dd e_k1
      = R_yy^{−1} [a_1 a_2] diag(E{|s_1|²}, E{|s_2|²}) [a_1 a_2]^H e_k1
      = R_yy^{−1} [a_1 a_2] · b_k
It follows that ∀ k ∈ J:
  ŵ_k = W · b_k
with W = R_yy^{−1} [a_1 ... a_Q] an unknown M × Q matrix.

Conclusion: all MWFs ŵ_k, ∀ k ∈ J, span a Q-dimensional subspace!
⇒ Need to capture this subspace with DANSE


Generalization: DANSE_Q
- Choose Q − 1 auxiliary reference microphones at each node
- Q-channel desired signal, e.g., d_k,ref = [d_k1 ... d_kQ]^T (w.l.o.g.)
- Compute Q different MWFs (an M × Q matrix):
    Ŵ_k = R_yy^{−1} R_dd [e_k1 ... e_kQ]
- From the previous slide: ∀ k, q ∈ J, ∃ A_kq ∈ C^{Q×Q}: Ŵ_k = Ŵ_q A_kq.
- If d_k,ref = A_k,ref · s, with A_k,ref ∈ C^{Q×Q} containing the acoustic transfer functions from the Q speakers to the reference mics, then A_kq = A_q,ref^{−H} · A_k,ref^H


Generalization: DANSE_Q
Q-channel signal broadcasts: replace the single-channel z_k^i = w_kk^{iH} y_k with a Q-channel signal z_k^i = W_kk^{iH} y_k.
⇒ The communication cost increases linearly with the number of target speakers


DANSE_Q parametrization
Example of the DANSE_Q parametrization (3-node case):
  W_1^i = [W_11^i; W_22^i G_12^i; W_33^i G_13^i],  W_2^i = [W_11^i G_21^i; W_22^i; W_33^i G_23^i],  W_3^i = [W_11^i G_31^i; W_22^i G_32^i; W_33^i]


DANSE_Q parametrization
  W_k^i = [W_11^i G_k1^i; ...; W_NN^i G_kN^i]
where G_kq^i ∈ C^{Q×Q} and G_kk^i = I_Q.
Since ∀ k, q ∈ J, ∃ A_kq ∈ C^{Q×Q}: Ŵ_k = Ŵ_q A_kq, the optimal MWFs are in the DANSE solution space (set W_kk^i = Ŵ_kk and G_kq^i = A_kq).


Algorithm description
DANSE_Q algorithm: computation of W_kk^{i+1} and G_{k,−k}^{i+1}
Let G_{k,−k}^i = [G_k1^{iT} ... G_{k,k−1}^{iT}  G_{k,k+1}^{iT} ... G_kN^{iT}]^T. Update at node k:
  [W_kk^{i+1}; G_{k,−k}^{i+1}] = (R^i_{ỹ_k ỹ_k})^{−1} R^i_{d̃_k d̃_k} [e_1 ... e_Q]   if k = u
  [W_kk^{i+1}; G_{k,−k}^{i+1}] = [W_kk^i; G_{k,−k}^i]                               if k ≠ u
where ỹ_k^i and d̃_k^i are defined as before (but with Q-channel z_k^i signals).
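The node-u update is a direct matrix generalization of the DANSE₁ update, with [e_1 ... e_Q] selecting the Q reference channels of ỹ_k. A minimal sketch (the function name, shapes, and the random stand-ins for the statistics are illustrative assumptions):

```python
import numpy as np

def danseq_update(Ry, Rd, Mk, Q):
    """One DANSE_Q update at the updating node.

    Ry, Rd: local correlation matrices of ỹ_k (speech+noise and speech-only),
    of size Mk + Q*(N-1); the Q reference channels are assumed to come first.
    Returns (W_kk, G_k_minus_k), each with Q columns.
    """
    E = np.eye(Ry.shape[0])[:, :Q]          # [e_1 ... e_Q]
    V = np.linalg.solve(Ry, Rd @ E)         # (Rỹỹ)^{-1} Rd̃d̃ [e_1 ... e_Q]
    return V[:Mk], V[Mk:]

# shape check on random Hermitian positive-definite stand-ins for the statistics
rng = np.random.default_rng(0)
dim, Mk, Q = 7, 3, 2                        # Mk + Q*(N-1) with N = 3 nodes
X = rng.standard_normal((dim, 2 * dim)) + 1j * rng.standard_normal((dim, 2 * dim))
Ry = X @ X.conj().T / (2 * dim)             # Hermitian, positive definite
Rd = 0.5 * Ry                               # any Hermitian 'speech' part will do here
Wkk, Gk = danseq_update(Ry, Rd, Mk, Q)      # Mk x Q compressor, Q(N-1) x Q coefficients
```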


Convergence and optimality of DANSE_Q
Theorem (Convergence and optimality of DANSE_Q). In the case of Q desired speech sources, and if A_k,ref is full rank, ∀ k ∈ J, then lim_{i→∞} W_k^i = Ŵ_k, ∀ k ∈ J.


Other scenarios
What if the centralized solution is not in the DANSE_Q solution space, e.g.:
- DANSE_Q with Q < the number of desired speakers?
- DANSE_Q where nodes have 'different interests'?

Theorem (Existence of an equilibrium [Bertrand and Moonen, 2012b]). Under some technical conditions (details omitted), the DANSE_Q algorithm always has an equilibrium point, i.e., a choice of the local parameters W_kk^i and G_kq^i, ∀ k, q ∈ J, such that none of the nodes wants to change them.
- Convergence to the equilibrium is not proven, but is generally observed in simulations.
- The equilibrium is suboptimal due to the selfish updates.
- Game-theoretic framework (selfish nodes) → Nash equilibria


DANSE with simultaneous node-updating

Simultaneous node-updating
In DANSE, the nodes update in a sequential round-robin fashion ⇒ slow overall convergence and slow per-node adaptation.
Can we also let all nodes update simultaneously?
- Sometimes convergence...
- ... but often no convergence (limit-cycle behavior)
- Reason: the 'optimal' local update immediately becomes suboptimal due to the simultaneous changes in the filters at the other nodes
Solution: relaxation (details omitted, see [Bertrand and Moonen, 2010b]):
  W_kk^{i+1} = (1 − α) W_kk^i + α W_kk^{unrelaxed}
with 0 < α ≤ 1.


Relaxed simultaneous DANSE (rS-DANSE)
rS-DANSE_Q algorithm: computation of W_kk^{i+1} and G_{k,−k}^{i+1}
Update at all nodes k ∈ J simultaneously:
  [W_kk^{new}; G_{k,−k}^{i+1}] = (R^i_{ỹ_k ỹ_k})^{−1} R^i_{d̃_k d̃_k} [e_1 ... e_Q]
  W_kk^{i+1} = (1 − α) W_kk^i + α W_kk^{new}
[Plot: LS cost (dB) versus iteration for S-DANSE₁ and rS-DANSE₁ with α^i = 0.3, α^i = 0.7, and α^i = 1/i, compared against the optimal cost.]
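The limit-cycle issue and its relaxation fix can be mimicked with a toy linear best-response system (a sketch, not DANSE itself; the matrix B and the constants are illustrative assumptions). Plain simultaneous iteration x ← Bx + c diverges because B has an eigenvalue of magnitude greater than 1 on the negative axis, while the relaxed map (1−α)x + α(Bx + c) is stable for moderate α:

```python
import numpy as np

# each of 3 'nodes' best-responds linearly to the others: x <- B x + c.
# Eigenvalues of B are {-1.2, 0.6, 0.6}, so the unrelaxed simultaneous map
# diverges; the relaxed map has eigenvalues {1 - 2.2a, 1 - 0.4a} and
# converges for moderate a (e.g. a = 0.5 gives {-0.1, 0.8}).
B = -0.6 * (np.ones((3, 3)) - np.eye(3))
c = np.ones(3)
x_star = np.linalg.solve(np.eye(3) - B, c)      # the fixed point

def iterate(alpha, iters=200):
    x = np.zeros(3)
    for _ in range(iters):
        x = (1 - alpha) * x + alpha * (B @ x + c)   # relaxed simultaneous update
    return x

x_plain = iterate(alpha=1.0)     # unrelaxed: diverges
x_relax = iterate(alpha=0.5)     # relaxed: converges to the fixed point
```

The analogy to rS-DANSE: relaxation damps the overshoot caused by all nodes changing their filters at once, at the price of a somewhat slower (but now reliable) convergence.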


Robustified DANSE

Robustified DANSE (R-DANSE)
- Some nodes may be ill-conditioned: a_k1 ≈ 0, or A_k,ref approximately rank-deficient. E.g., a low-SNR node k can be useful as a noise reference, even though a_k1 ≈ 0.
- DANSE suffers from an error ripple: an erroneous update at one node affects the performance at all other nodes.
Solution: at an ill-conditioned node k, choose z_q^i as the reference signal, where node q is a high-SNR node.
Note: the 'desired' signal at node k then changes with the iteration index i!


Convergence and optimality of R-DANSE
Dependency graph:
- Each column w_kk^i(m) of W_kk^i, ∀ k ∈ J, ∀ m ∈ {1, ..., Q}, is a vertex. Note: each w_kk^i(m) corresponds to a particular reference mic.
- Draw an edge w_kk^i(m) → w_qq^i(n) if the update of w_kk^i(m) is based on the reference signal z_q^i(n) instead of a local microphone.
[Diagram: 4 nodes with vertices w_kk(1), w_kk(2) and the dependency edges between them.]
If the dependency graph contains no loops: convergence and optimality of R-DANSE [Bertrand and Moonen, 2009].
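This loop condition is a plain directed-cycle check on the dependency graph, which is easy to verify mechanically. A sketch with an assumed edge list (the vertex labels and edges are hypothetical, chosen to mirror the figure):

```python
def has_cycle(vertices, edges):
    """Detect a directed cycle via iterative white/grey/black DFS."""
    out = {v: [] for v in vertices}
    for a, b in edges:
        out[a].append(b)
    WHITE, GREY, BLACK = 0, 1, 2
    color = {v: WHITE for v in vertices}

    def visit(v):
        stack = [(v, iter(out[v]))]
        color[v] = GREY
        while stack:
            node, it = stack[-1]
            for nxt in it:
                if color[nxt] == GREY:
                    return True                 # back edge -> directed cycle
                if color[nxt] == WHITE:
                    color[nxt] = GREY
                    stack.append((nxt, iter(out[nxt])))
                    break
            else:
                color[node] = BLACK             # all successors done
                stack.pop()
        return False

    return any(color[v] == WHITE and visit(v) for v in vertices)

V = ["w11(1)", "w11(2)", "w22(1)", "w33(1)"]
acyclic = [("w22(1)", "w11(1)"), ("w33(1)", "w11(2)")]   # low-SNR nodes -> high-SNR refs
cyclic = acyclic + [("w11(1)", "w22(1)")]                # mutual references: a loop
```

In the acyclic case the R-DANSE convergence condition is met; the cyclic case (two columns referencing each other) violates it.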


DANSE in WASNs with a tree topology (T-DANSE)


Multi-hop WASNs
- Fully-connected WASNs may require significant transmit power
- Low-power nodes may not be able to reach all other nodes


Passing on information
The relay case: make the network virtually fully connected
- Complex routing problem
- Per-node communication cost grows with the network size
Filter-and-sum combination of the inputs (e.g., forward λ₁A + λ₂B instead of relaying A and B separately):
- No routing problems
- Per-node communication cost independent of the network size


First attempt (figure sequence)
- Start from fully-connected DANSE.
- Disconnect the red and green node, and add new neighbors instead.
- Problem: the blue node's data is blocked and does not travel beyond the red node.
- Change the definition of the transmitted signal z_k^i (a 'wild guess'): data from the blue node now travels beyond the single-hop region.
- Apply the same idea in all nodes.

First attempt
Will this 'wild guess' work? Let N_k denote the neighbours of node k (k excluded).
Implicit definition of z_k^i:
  z_k^i = W_kk^{iH} y_k + Σ_{q∈N_k} G_kq^{iH} z_q^i
[Diagram: 3 connected nodes, each applying W_kk to its own microphone signals and G_kq to the received z-signals.]
Problem 1: acausality in the data flow ⇒ deadlock: nodes wait for each other's z-signals.

First attempt
Problem 2: feedback
- The feedback path considerably changes the algorithm dynamics
- The centralized MWFs are not in the solution space (provable)
How can we get rid of this feedback and causality problem?


2 types of feedback
- Direct feedback: a neighbor feeds node k's own contribution straight back to node k.
- Indirect feedback: node k's contribution returns to it through a cycle in the network.
[Diagram: two 6-node example networks (nodes 1-6) illustrating both cases.]


Eliminating direct feedback
Transmitter feedback cancellation (TFC): send a different signal to each neighbour q:
  z_kq^i = W_kk^{iH} y_k + Σ_{l∈N_k\{q}} G_kl^{iH} z_lk^i
Better alternative: receiver feedback cancellation (RFC), i.e., a single broadcast signal to all neighbors (details omitted [Bertrand and Moonen, 2011]).
RFC vs. TFC: no influence on the algorithm! (TFC is assumed in the sequel, w.l.o.g.)


Eliminating indirect feedback
Indirect feedback: prune the network to a tree topology.
- In combination with TFC: all feedback is eliminated.
- The definition of the z_kq^i's can be resolved:
  - Start at the leaf nodes (|N_k| = 1)
  - Leaf node k: z_kq^i = W_kk^{iH} y_k, i.e., no dependency on other z-signals
  - The rest follows in the natural order dictated by the tree
- Similarly, the causality problem in the data flow (deadlock) is resolved:
  1. Fusion flow from the leaf nodes to the root...
  2. ... followed by a diffusion flow from the root to the leaves
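The absence of deadlock can be illustrated with scalar stand-ins for the z-signals: with TFC on a tree, the message from k to q combines node k's local data with everything received from its other neighbours, messages can always fire starting from the leaves, and every node ends up seeing the data of the whole network exactly once. A sketch (the example tree and the 'sum' fusion rule are illustrative assumptions):

```python
from collections import deque

def tfc_messages(adj, local):
    """z[(k, q)]: what k sends to q = local data + inputs from the other neighbours."""
    z = {}
    pending = deque((k, q) for k in adj for q in adj[k])
    while pending:
        k, q = pending.popleft()
        needed = [l for l in adj[k] if l != q]      # TFC: exclude q's own branch
        if all((l, k) in z for l in needed):        # 'fire' once all inputs are there
            z[(k, q)] = local[k] + sum(z[(l, k)] for l in needed)
        else:
            pending.append((k, q))                  # wait; on a tree this resolves
    return z

# example tree:  1 - 3 - 4 - 8,  with node 2 also attached to node 3
adj = {1: [3], 2: [3], 3: [1, 2, 4], 4: [3, 8], 8: [4]}
local = {1: 10, 2: 20, 3: 30, 4: 40, 8: 50}
z = tfc_messages(adj, local)

# each node's fused view = own data + all incoming messages = the whole network
views = {k: local[k] + sum(z[(l, k)] for l in adj[k]) for k in adj}
```

Leaf messages fire immediately (the `needed` list is empty), the fusion flow then propagates toward the interior, and the diffusion flow back out; on a graph with a cycle the same loop would deadlock, which is exactly why the pruning to a tree is needed.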


Data-driven signal exchange
Data-driven paradigm: each block 'fires' as soon as all of its inputs are available
⇒ no global coordination is needed to organize the data flow; the fusion and diffusion flows emerge automatically.
[Diagram: node 3 in a 9-node tree, with the numbered firing order of its fusion blocks (W_33, G_31, G_32, G_34) and the z-signals exchanged with nodes 1, 2, and 4.]


Parametrization: example
Network-wide filters of nodes 1 and 4 in the tree example (only the entries along the path 1-3-4-8 shown, '*' for the rest):
  W_1^i = [W_11^i; *; W_33^i G_13^i; W_44^i G_34^i G_13^i; *; *; *; W_88^i G_48^i G_34^i G_13^i]
  W_4^i = [W_11^i G_31^i G_43^i; *; W_33^i G_43^i; W_44^i; *; *; *; W_88^i G_48^i]
[Diagram: signal flow between nodes 1, 3, 4, and 8 with the fusion rules W_kk and the G-coefficients applied along the tree.]

General parametrization of Tree-DANSE (T-DANSE)
  W_k^i = [W_11^i G_{k←1}^i; ...; W_NN^i G_{k←N}^i]
with
  G_{p1←pt}^i = G_{p_{t−1} p_t}^i G_{p_{t−2} p_{t−1}}^i ··· G_{p2 p3}^i G_{p1 p2}^i
where the order is defined by the unique path P_{pt→p1} = (p_t, p_{t−1}, ..., p_2, p_1) from p_t to p_1. By definition: G_{k←k}^i = I_Q.
Compare with fully-connected DANSE:
  W_k^i = [W_11^i G_k1^i; ...; W_NN^i G_kN^i]



Parametrization: example

[Figure: the same example tree network with nodes 1–9.]

Complete parametrization of the network-wide filter $W_4^i$:
$$
W_4^i = \begin{bmatrix}
W_{11}^i G_{4\leftarrow 1}^i \\ W_{22}^i G_{4\leftarrow 2}^i \\ W_{33}^i G_{4\leftarrow 3}^i \\ W_{44}^i \\ W_{55}^i G_{4\leftarrow 5}^i \\ W_{66}^i G_{4\leftarrow 6}^i \\ W_{77}^i G_{4\leftarrow 7}^i \\ W_{88}^i G_{4\leftarrow 8}^i
\end{bmatrix}
= \begin{bmatrix}
W_{11}^i G_{31}^i G_{43}^i \\ W_{22}^i G_{32}^i G_{43}^i \\ W_{33}^i G_{43}^i \\ W_{44}^i \\ W_{55}^i G_{65}^i G_{46}^i \\ W_{66}^i G_{46}^i \\ W_{77}^i G_{67}^i G_{46}^i \\ W_{88}^i G_{48}^i
\end{bmatrix}
$$


Centralized MWF in the T-DANSE solution space?

Theorem: In the case of $Q$ desired speech sources, and if $A_{k,\mathrm{ref}}$ is full rank $\forall k \in \mathcal{J}$, the solution space defined by the parametrization of T-DANSE contains the optimal MWFs $\hat{W}_k$, $\forall k \in \mathcal{J}$.

Proof:
Reminder: $\forall k, q \in \mathcal{J}$: $\hat{W}_k = \hat{W}_q A_{kq}$, where $A_{kq} = A_{q,\mathrm{ref}}^{-H} \, A_{k,\mathrm{ref}}^{H}$.
Therefore, $\forall k, q, n \in \mathcal{J}$: $A_{nq} A_{kn} = A_{kq}$.
Set $G_{mn}^i = A_{mn}$; then
$$
G_{k\leftarrow q}^i = A_{p_{t-1} q} \, A_{p_{t-2} p_{t-1}} \cdots A_{p_2 p_3} \, A_{k p_2} = A_{kq},
$$
where $P_{k\leftarrow q} = (q, p_{t-1}, p_{t-2}, \ldots, p_3, p_2, k)$.
Hence, setting $W_{kk}^i = \hat{W}_{kk}$ and $G_{mn}^i = A_{mn}$ gives $W_k^i = \hat{W}_k$. Q.E.D.
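The chaining identity at the heart of this proof is easy to sanity-check numerically (a small sketch with random full-rank $Q \times Q$ reference matrices; `Aref` and `Akq` are our own illustrative names):

```python
import numpy as np

# With A_kq = A_{q,ref}^{-H} A_{k,ref}^{H}, products along any path telescope:
# A_nq @ A_kn == A_kq, which is why setting G_mn = A_mn recovers A_kq.
rng = np.random.default_rng(4)
Q = 3
Aref = {k: rng.standard_normal((Q, Q)) for k in (1, 2, 3)}  # full rank (a.s.)

def Akq(k, q):
    # A_kq = A_{q,ref}^{-H} A_{k,ref}^{H}, computed via a linear solve
    return np.linalg.solve(Aref[q].conj().T, Aref[k].conj().T)

# Telescoping through the intermediate node n = 2: A_21 @ A_32 == A_31
assert np.allclose(Akq(3, 1), Akq(2, 1) @ Akq(3, 2))
```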



T-DANSE updating procedure

Let $z_{\rightarrow k}^i = [z_{n_1}^{iT} \ldots z_{n_{N_k}}^{iT}]^T$.

Node $k$ sets the internal fusion rules
$$
W_{kk}^i \quad \text{and} \quad G_{k,-k}^i = \left[ G_{n_1}^{iT} \ldots G_{n_{N_k}}^{iT} \right]^T
$$
with $n_j \in \mathcal{N}_k$ and $N_k = |\mathcal{N}_k|$.

[Block diagram: nodes 1, 3, 4 and 8 of the example tree, showing each node's $W_{kk}$ compression, the $G$-weighted fusion of the incoming $z$-signals, and the $z$-signals exchanged on each link.]


T-DANSE updating procedure

T-DANSE$_Q$ algorithm: computation of $W_{kk}^{i+1}$ and $G_{k,-k}^{i+1}$:

If $k \neq u$: $W_{kk}^{i+1} = W_{kk}^i$ and $G_{k,-k}^{i+1} = G_{k,-k}^i$.

If $k = u$:
$$
\begin{bmatrix} W_{kk}^{i+1} \\ G_{k,-k}^{i+1} \end{bmatrix}
= \arg\min_{W_{kk},\, G_{k,-k}} E\left\{ \left\| d_k - \begin{bmatrix} W_{kk} \\ G_{k,-k} \end{bmatrix}^H \begin{bmatrix} y_k \\ z_{\rightarrow k}^i \end{bmatrix} \right\|^2 \right\}
= \left( R_{\tilde{y}_k \tilde{y}_k}^i \right)^{-1} R_{\tilde{d}_k \tilde{d}_k}^i \, [e_1 \ldots e_Q]
$$
where $\tilde{y}_k^i = [y_k^T \; z_{\rightarrow k}^{iT}]^T$, and similarly for $\tilde{d}_k^i$.

Identical to the fully-connected DANSE updates (but with fewer input signals per node).
Note: sequential updates (only one node updates in each iteration).
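The closed-form local update can be sketched as follows (synthetic data and variable names of our choosing; $R_{\tilde{d}_k \tilde{d}_k}$ is estimated, as is common in practice, by subtracting a noise-only correlation matrix from the speech-plus-noise one):

```python
import numpy as np

def tdanse_local_update(Y_tilde, N_tilde, Q):
    """Sketch of one T-DANSE_Q update at the updating node:
        [W_kk; G_k,-k] = (R_y~y~)^{-1} R_d~d~ [e_1 ... e_Q].
    Columns of Y_tilde / N_tilde are snapshots of the stacked local signal
    [y_k; z_->k] during speech-plus-noise and noise-only periods."""
    Ryy = Y_tilde @ Y_tilde.conj().T / Y_tilde.shape[1]
    Rnn = N_tilde @ N_tilde.conj().T / N_tilde.shape[1]
    Rdd = Ryy - Rnn                          # (ideally) rank-Q speech correlation
    return np.linalg.solve(Ryy, Rdd[:, :Q])  # picks the columns e_1 ... e_Q

# Toy example: one desired source (Q = 1) in a 6-channel stacked signal
rng = np.random.default_rng(1)
M, Q, T = 6, 1, 5000
a = rng.standard_normal((M, Q))              # hypothetical steering vector
Y = a @ rng.standard_normal((Q, T)) + 0.5 * rng.standard_normal((M, T))
N = 0.5 * rng.standard_normal((M, T))        # noise-only snapshots
W = tdanse_local_update(Y, N, Q)             # first rows: W_kk, rest: G_k,-k
```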



Convergence and optimality of T-DANSE

Theorem (Convergence and optimality of T-DANSE) [Bertrand and Moonen, 2011]:
In the case of $Q$ desired speech sources, if $A_{k,\mathrm{ref}}$ is full rank $\forall k \in \mathcal{J}$, and if the node-per-node updating order of T-DANSE is defined by a path through the network that visits all nodes, then $\lim_{i\to\infty} W_k^i = \hat{W}_k$, $\forall k \in \mathcal{J}$.

Note: the updating order must follow a path through the network.
Random-order updating also works in general, but there is no proof.
However, path-based updating converges faster (experimental observation).


LCMV-based DANSE (LC-DANSE)



LCMV beamforming revisited

Centralized node-specific LCMV beamformer at node $k$:
$$
\hat{w}_k = \arg\min_{w_k} \; w_k^H R_{yy} w_k \quad \text{s.t.} \quad A^H w_k = f_k
$$
$$
\hat{w}_k = R_{yy}^{-1} A \left( A^H R_{yy}^{-1} A \right)^{-1} f_k
$$

$A$: $M \times Q$ steering matrix from the $Q$ 'relevant' sources to the $M$ microphones.
$f_k$: node-specific response for each of the $Q$ sources.
The relevant sources may also contain interferers!

PS: In the sequel, $A$ is assumed to be known. For unknown $A$, refer to [Markovich et al., 2009] or [Bertrand and Moonen, 2012a].
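The closed-form solution above is a one-liner in practice. A small numerical sketch (toy dimensions and random data of our choosing) also verifies that the constraints $A^H w = f_k$ hold exactly:

```python
import numpy as np

def lcmv(Ryy, A, f):
    """Closed-form LCMV beamformer: w = Ryy^{-1} A (A^H Ryy^{-1} A)^{-1} f."""
    RinvA = np.linalg.solve(Ryy, A)                  # Ryy^{-1} A
    return RinvA @ np.linalg.solve(A.conj().T @ RinvA, f)

# Toy check with hypothetical numbers: pass source 1, null source 2
rng = np.random.default_rng(2)
M, Q = 5, 2
A = rng.standard_normal((M, Q))                      # steering matrix
X = rng.standard_normal((M, 100))
Ryy = X @ X.T / 100 + np.eye(M)                      # well-conditioned PSD
f = np.array([1.0, 0.0])                             # node-specific response
w = lcmv(Ryy, A, f)
assert np.allclose(A.conj().T @ w, f)                # constraints hold exactly
```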



Linearly-constrained DANSE (LC-DANSE)

DANSE ↔ (SDW-)MWF
LC-DANSE ↔ LCMV
Similar idea, similar block scheme.

Note: $Q$ is now the number of constraints.



Linearly-constrained DANSE (LC-DANSE)

$$
\hat{w}_k = R_{yy}^{-1} A \left( A^H R_{yy}^{-1} A \right)^{-1} f_k
$$
⇒ joint $Q$-dimensional subspace: $\hat{w}_k = W \cdot f_k$, $\forall k \in \mathcal{J}$.

Add $Q-1$ auxiliary LCMV problems:
$$
\hat{W}_k = \arg\min_{W_k} \; \mathrm{Tr}\left( W_k^H R_{yy} W_k \right) \quad \text{s.t.} \quad A^H W_k = F_k
$$
$$
\hat{W}_k = R_{yy}^{-1} A \left( A^H R_{yy}^{-1} A \right)^{-1} F_k
$$
with $F_k$ a $Q \times Q$ matrix of full rank, with $f_k$ in its first column. Then
$$
\forall k, q \in \mathcal{J}: \quad \hat{W}_k = \hat{W}_q A_{kq} \quad \text{with} \quad A_{kq} = F_q^{-1} F_k
$$
Conclusion: the centralized LCMV solutions are in the (LC-)DANSE solution space! (set $W_{kk}^i = \hat{W}_{kk}$ and $G_{kq}^i = A_{kq}$)
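The shared $Q$-dimensional subspace and the relation $\hat{W}_k = \hat{W}_q A_{kq}$ with $A_{kq} = F_q^{-1} F_k$ can be verified numerically (a sketch with random full-rank constraint matrices; the `lcmv_multi` helper name is ours):

```python
import numpy as np

def lcmv_multi(Ryy, A, F):
    """W = Ryy^{-1} A (A^H Ryy^{-1} A)^{-1} F (Q constraints, Q outputs)."""
    RinvA = np.linalg.solve(Ryy, A)
    return RinvA @ np.linalg.solve(A.conj().T @ RinvA, F)

rng = np.random.default_rng(3)
M, Q = 6, 2
A = rng.standard_normal((M, Q))        # steering matrix
X = rng.standard_normal((M, 200))
Ryy = X @ X.T / 200 + np.eye(M)        # well-conditioned PSD matrix
Fk = rng.standard_normal((Q, Q))       # full rank (a.s.), f_k in first column
Fq = rng.standard_normal((Q, Q))
Wk = lcmv_multi(Ryy, A, Fk)
Wq = lcmv_multi(Ryy, A, Fq)
Akq = np.linalg.solve(Fq, Fk)          # A_kq = F_q^{-1} F_k
assert np.allclose(Wk, Wq @ Akq)       # node-specific solutions share a subspace
```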



Linearly-constrained DANSE (LC-DANSE)

Match the constraints with the compressed signals:
$$
y = \begin{bmatrix} y_1 \\ \vdots \\ y_N \end{bmatrix} \leftrightarrow A = \begin{bmatrix} A_1 \\ \vdots \\ A_N \end{bmatrix}
$$
$$
\tilde{y}_k^i = \begin{bmatrix} y_k \\ z_{-k}^i \end{bmatrix} \leftrightarrow \tilde{A}_k^i = \begin{bmatrix} A_k \\ C_{-k}^i \end{bmatrix}
$$
$$
z_{-k}^i = \begin{bmatrix} z_1^i \\ \vdots \\ z_{k-1}^i \\ z_{k+1}^i \\ \vdots \\ z_N^i \end{bmatrix} \leftrightarrow C_{-k}^i = \begin{bmatrix} C_1^i \\ \vdots \\ C_{k-1}^i \\ C_{k+1}^i \\ \vdots \\ C_N^i \end{bmatrix}
$$
$$
z_k^i = W_{kk}^{iH} y_k \leftrightarrow C_k^i = W_{kk}^{iH} A_k
$$



LC-DANSE: algorithm description

LC-DANSE$_Q$ algorithm: computation of $W_{kk}^{i+1}$ and $G_{k,-k}^{i+1}$. Update at node $k$:
$$
\begin{bmatrix} W_{kk}^{i+1} \\ G_{k,-k}^{i+1} \end{bmatrix} =
\begin{cases}
\left( R_{\tilde{y}_k \tilde{y}_k}^i \right)^{-1} \tilde{A}_k^i \left( \tilde{A}_k^{iH} \left( R_{\tilde{y}_k \tilde{y}_k}^i \right)^{-1} \tilde{A}_k^i \right)^{-1} F_k & \text{if } k = u \\[6pt]
\begin{bmatrix} W_{kk}^i \\ G_{k,-k}^i \end{bmatrix} & \text{if } k \neq u
\end{cases}
$$
Note: the computation of $\tilde{A}_k^i$ requires the exchange of the $W_{kk}^i$'s (negligible compared to the data rate of the $z_k^i$'s).
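The updating node's closed-form solution is again an LCMV problem, now on the stacked local signal and constraint matrix. A sketch (synthetic $\tilde{y}_k$ and $\tilde{A}_k$ of our choosing; the `lc_danse_update` name is ours):

```python
import numpy as np

def lc_danse_update(Y_tilde, A_tilde, Fk):
    """Sketch of the LC-DANSE update at the updating node:
        [W_kk; G_k,-k] = R^{-1} A~ (A~^H R^{-1} A~)^{-1} F_k,
    with R estimated from snapshots of the stacked local signal y~_k."""
    R = Y_tilde @ Y_tilde.conj().T / Y_tilde.shape[1]
    RinvA = np.linalg.solve(R, A_tilde)
    return RinvA @ np.linalg.solve(A_tilde.conj().T @ RinvA, Fk)

# Toy dimensions: 7 stacked channels, Q = 2 constraints
rng = np.random.default_rng(5)
Mt, Q, T = 7, 2, 300
Y_tilde = rng.standard_normal((Mt, T))         # snapshots of [y_k; z_-k]
A_tilde = rng.standard_normal((Mt, Q))         # stacked constraint matrix
Fk = np.eye(Q)                                 # full rank, f_k in first column
W = lc_danse_update(Y_tilde, A_tilde, Fk)
assert np.allclose(A_tilde.conj().T @ W, Fk)   # local constraints satisfied
```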


78 / 83


LC-DANSE: final remarks

Provable convergence and optimality. Further reading: [Bertrand and Moonen, 2012a].
$Q$ constraints ⇒ $Q$-channel broadcast signals.
If the node-specific aspect is removed (same $f_k$ in all nodes), single-channel $z_k^i$'s are sufficient! [Bertrand and Moonen, 2013]
Related GSC implementation: [Markovich-Golan et al., 2013] (covered in part III)

Bibliography

References and Further Reading I

Baraniuk, R. (2011). More is less: Signal processing and the data deluge. Science, 331:717–719.

Bertrand, A. and Moonen, M. (2009). Robust distributed noise reduction in hearing aids with external acoustic sensor nodes. EURASIP Journal on Advances in Signal Processing, 2009.

Bertrand, A. and Moonen, M. (2010a). Distributed adaptive node-specific signal estimation in fully connected sensor networks – part I: sequential node updating. IEEE Transactions on Signal Processing, 58:5277–5291.

Bertrand, A. and Moonen, M. (2010b). Distributed adaptive node-specific signal estimation in fully connected sensor networks – part II: simultaneous & asynchronous node updating. IEEE Transactions on Signal Processing, 58:5292–5306.

Bertrand, A. and Moonen, M. (2011). Distributed adaptive estimation of node-specific signals in wireless sensor networks with a tree topology. IEEE Transactions on Signal Processing, 59(5):2196–2210.

Bertrand, A. and Moonen, M. (2012a). Distributed node-specific LCMV beamforming in wireless sensor networks. IEEE Transactions on Signal Processing, 60:233–246.

Bertrand, A. and Moonen, M. (2012b). Distributed signal estimation in sensor networks where nodes have different interests. Signal Processing, 92(7):1679–1690.



References and Further Reading II

Bertrand, A. and Moonen, M. (2013). Distributed LCMV beamforming in a wireless sensor network with single-channel per-node signal transmission. IEEE Transactions on Signal Processing, 61:3447–3459.

Doclo, S. and Moonen, M. (2002). GSVD-based optimal filtering for single and multimicrophone speech enhancement. IEEE Transactions on Signal Processing, 50(9):2230–2244.

Doclo, S., van den Bogaert, T., Moonen, M., and Wouters, J. (2009). Reduced-bandwidth and distributed MWF-based noise reduction algorithms for binaural hearing aids. IEEE Transactions on Audio, Speech, and Language Processing, 17:38–51.

Heusdens, R., Zhang, G., Hendriks, R. C., Zeng, Y., and Kleijn, W. B. (2012). Distributed MVDR beamforming for (wireless) microphone networks using message passing. In Proc. International Workshop on Acoustic Signal Enhancement (IWAENC).

Himawan, I., McCowan, I., and Sridharan, S. (2011). Clustered blind beamforming from ad-hoc microphone arrays. IEEE Transactions on Audio, Speech, and Language Processing, 19(4):661–676.

Hioka, Y. and Kleijn, W. B. (2011). Distributed blind source separation with an application to audio signals. In Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 233–236.

Lawin-Ore, T. C. and Doclo, S. (2011). Analysis of rate constraints for MWF-based noise reduction in acoustic sensor networks. In Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 269–272, Prague, Czech Republic.



References and Further Reading III

Markovich, S., Gannot, S., and Cohen, I. (2009). Multichannel eigenspace beamforming in a reverberant noisy environment with multiple interfering speech signals. IEEE Transactions on Audio, Speech, and Language Processing, 17(6):1071–1086.

Markovich-Golan, S., Gannot, S., and Cohen, I. (2010). A reduced bandwidth binaural MVDR beamformer. In Proc. International Workshop on Acoustic Echo and Noise Control (IWAENC), Tel-Aviv, Israel.

Markovich-Golan, S., Gannot, S., and Cohen, I. (2013). Distributed multiple constraints generalized sidelobe canceler for fully connected wireless acoustic sensor networks. IEEE Transactions on Audio, Speech, and Language Processing, 21(2):343–356.

Olfati-Saber, R., Fax, J., and Murray, R. (2007). Consensus and cooperation in networked multi-agent systems. Proceedings of the IEEE, 95(1):215–233.

Sayed, A., Tu, S.-Y., Chen, J., Zhao, X., and Towfic, Z. (2013). Diffusion strategies for adaptation and learning over networks: an examination of distributed strategies and network behavior. IEEE Signal Processing Magazine, 30(3):155–171.

Shah, D. (2009). Gossip algorithms. Foundations and Trends in Networking, 3(1):1–125.

Srinivasan, S. and Den Brinker, A. C. (2009). Rate-constrained beamforming in binaural hearing aids. EURASIP Journal on Advances in Signal Processing, 2009, Article ID 257197, 14 pages.



References and Further Reading IV

Szurley, J., Bertrand, A., and Moonen, M. (2013). Improved tracking performance for distributed node-specific signal enhancement in wireless acoustic sensor networks. In Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 336–340, Vancouver, Canada.

Zeng, Y. and Hendriks, R. (2012). Distributed delay and sum beamformer for speech enhancement in wireless sensor networks via randomized gossip. In Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4037–4040.

