Zhilin's Science Journey | http://blog.sciencenet.cn/u/oliviazhang | Sparse Signal Recovery, Bayesian Learning, Biomedical Signal Processing, Smart-Watch, Smart-Home, Health Monitoring


Yes, Let's Move From Sparsity to Structured Sparsity

4357 reads | 2012-10-2 19:31 | Personal category: Experience Exchange | System category: Research Notes | Tags: compressed sensing, sparse Bayesian learning, sparse signal recovery, non-sparse signal recovery

Note: this post is cross-posted from my English blog: http://marchonscience.blogspot.com/

Igor wrote a cool post on Nuit Blanche yesterday, Pushing the Boundaries in Compressive Sensing, which poses the following questions:
  • Given that compressive sensing is based on an argument of sparsity 
  • Given that sparsity is all around us because most data seem to be sparse or compressible in a wavelet basis
  • Given that wavelet decomposition shows not just simple compressibility but structured compressibility of most datasets
  • Given that we now have some results showing better compressive sensing with a structured sparsity argument
Isn't it time we stopped talking about RIP? the Donoho-Tanner phase transition? L_1 recovery?
My answer is "YES"! There are many practical scenarios where the signals (or the regression coefficients) have rich structure. Merely exploiting sparsity while ignoring this structure is far from enough! Even in cases where the structure information is not obvious and traditional L1 recovery algorithms are generally used, we can still find ways to use structured-sparsity-based algorithms.

In the following I'll give an example on the recovery of compressed audio signals, which is a typical application. I have seen a number of works that used traditional L1 algorithms to do this job. Let's see how an algorithm that exploits block structure can improve the result.

The test signal was an audio signal of length 81920 samples.
In the compression stage, the signal was evenly divided into T segments x_i (i = 1, 2, ..., T). We considered five cases: T = 160, 80, 40, 20, and 10, corresponding to segment lengths N = 512, 1024, 2048, 4096, and 8192. Each segment x_i was compressed into y_i. The sensing matrix A was a random Gaussian matrix of size N/2 by N.
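
To make the compression stage concrete, here is a minimal Matlab sketch (the variable names are mine, and I assume the standard measurement model y_i = A*x_i):

% Compression stage: split the signal into T segments and measure each one.
% 'audio' is the length-81920 test signal; T = 160 gives N = 512.
T = 160;
N = length(audio) / T;          % segment length
A = randn(N/2, N);              % random Gaussian sensing matrix (N/2 x N)
X = reshape(audio, N, T);       % column i is the segment x_i
Y = A * X;                      % column i is the compressed segment y_i = A*x_i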

In the recovery stage, we first recovered the DCT coefficients of each segment and then reconstructed the original segment from them. This is the traditional method used with most L1 algorithms for this task, since audio signals are believed to be sparser in the DCT domain than in the time domain.
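
Concretely, writing a segment as x_i = W'*z_i, where W is the orthonormal DCT matrix and z_i the vector of DCT coefficients, the solver sees the effective matrix A*W'. A minimal Matlab sketch (some_l1_solver is a hypothetical stand-in for any of the algorithms below; Matlab's dct operates on the columns of a matrix):

W = dct(eye(N));                % orthonormal DCT matrix, so dct(x) = W*x
Phi = A * W';                   % effective sensing matrix: y_i = Phi*z_i
z_hat = some_l1_solver(Phi, y_i);   % recover the DCT coefficients of segment i
x_hat = W' * z_hat;             % reconstruct the segment, i.e., idct(z_hat)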

We used six algorithms that do not exploit any structure information: Smoothed L0 (SL0), EM-BG-AMP, SPGL1, BCS, OMP, and SP. Their recovery performance was measured by the total runtime and the normalized MSE (in dB). The results are given in the following table:
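
For reference, I assume the usual definition of the normalized MSE in dB, with x the original signal and x_hat its reconstruction:

NMSE_dB = 10 * log10( norm(x_hat - x)^2 / norm(x)^2 );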

Table: Performance of all algorithms, measured by normalized MSE in dB (and runtime in seconds).

From the table, we can see that to obtain good quality we need to increase the segment length. However, the cost is that the recovery time increases significantly. For example, to achieve a quality of MSE = -21 dB, SL0 had to recover segments of length 8192, and the total time was 2725 seconds!

Now let's run the BSBL-BO algorithm, a sparse Bayesian learning algorithm that exploits block structure and intra-block correlation, on the same task. As I have said in many places, BSBL-BO requires the user to define a block partition, but this partition need not be consistent with the true block structure of the signal. So we defined the block partition as (in Matlab notation) [1:16:N], i.e., block start indices 1, 17, 33, and so on. The complete command for recovering the i-th segment was:
% The fourth argument (0) selects the noiseless model; 'prune_gamma' set to -1
% disables block pruning, and 'max_iters' caps the number of iterations at 15.
Result = BSBL_BO(A, y_i, [1:16:N], 0, 'prune_gamma', -1, 'max_iters', 15);
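
Putting the pieces together, recovering all T segments might look as follows (a sketch reusing my variable names from the compression sketch above; in the released BSBL_BO code the recovered vector is returned in the field Result.x, and if recovery is done in the DCT domain as described earlier, one would pass the effective matrix A*W' and apply the inverse DCT afterwards):

X_hat = zeros(N, T);
for i = 1:T
    Result = BSBL_BO(A, Y(:,i), [1:16:N], 0, 'prune_gamma', -1, 'max_iters', 15);
    X_hat(:,i) = Result.x;      % recovered segment (or its DCT coefficients)
end
audio_hat = X_hat(:);           % concatenate the segments back together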

The recovery result for the case N = 512 is given in the table above. Clearly, by exploiting the structure information, BSBL-BO achieved -23.3 dB at a cost of only 82 seconds! The quality was 2 dB better than that of SL0, while the runtime was only about 3% of SL0's.

What does this result tell us?

First, exploiting both structure and sparsity is very important; merely exploiting sparsity is outdated in many practical applications.

Second, claims about which algorithms are fast or slow carry little weight unless the practical task being solved is specified; the answer can vary from case to case. For example, in the above experiment SL0 was much faster than BSBL-BO for the same segment length N. However, when the goal is to achieve a given quality, SL0 took much longer than BSBL-BO (and could not even match BSBL-BO's quality, no matter how large N was).

Note that I was not trying to compare BSBL-BO with the other algorithms; such a comparison would not be meaningful, since BSBL-BO exploits structure while the other algorithms do not. The point is that even in traditional applications where L1 algorithms are often used, exploiting structure information can give you a better result!


PS: Several months ago we published a review paper on SBL algorithms exploiting structure, titled "From Sparsity to Structured Sparsity: Bayesian Perspective", in the Chinese journal Signal Processing. One can download the paper from here.


