# Matrix Norms vs. Vector Norms

4545 reads | 2013-1-13 16:12 | Personal category: Machine Intelligence | System category: Research Notes | Keywords: Matrix, Vector, norm

The four vector norms that play significant roles in the compressed sensing framework are the $\ell_0$, $\ell_1$, $\ell_2$, and $\ell_\infty$ norms, denoted by $\|x\|_0$, $\|x\|_1$, $\|x\|_2$, and $\|x\|_\infty$ respectively.

Given a vector $x\in R^m$.

$\|x\|_0$ is the number of non-zero elements in the vector $x$ (strictly speaking, it is not a true norm, since it is not homogeneous).

$\|x\|_1=\sum_{i=1}^{m}|x_i|$.
$\|x\|_2=\sqrt{x_1^2+x_2^2+...+x_m^2}$.
$\|x\|_\infty=\max_i |x_i|$.
The vector norm $\|x\|_p$ for real $p \geq 1$ is defined as
$\|x\|_p=\left(\sum_{i=1}^m |x_i|^p \right)^{\frac{1}{p}}.$
In Mathematica, the $p$-norm of a vector $x$ is implemented as Norm[$x$, $p$], and Norm[$x$] returns the 2-norm [1].
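These definitions are easy to compute directly; the following is a minimal pure-Python sketch (function names are illustrative, not from any library):

```python
def norm0(x):
    # l0 "norm": number of nonzero entries
    return sum(1 for xi in x if xi != 0)

def norm_p(x, p):
    # l_p norm for p >= 1: (sum_i |x_i|^p)^(1/p)
    return sum(abs(xi) ** p for xi in x) ** (1.0 / p)

def norm_inf(x):
    # l_infinity norm: largest absolute entry
    return max(abs(xi) for xi in x)

x = [3.0, 0.0, -4.0]
print(norm0(x))      # 2
print(norm_p(x, 1))  # 7.0
print(norm_p(x, 2))  # 5.0
print(norm_inf(x))   # 4.0
```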

These norms have natural generalizations to matrices, inheriting many appealing properties from
the vector case. In particular, there is a parallel duality structure.

Consider two rectangular matrices $X\in R ^{m \times n}$ and $Y\in R ^{m \times n}$.

Define the inner product as $\langle X,Y\rangle := \sum_{i=1}^m \sum_{j=1}^n X_{ij}Y_{ij}= Tr(X^T Y).$

The norm associated with this inner product is called the Frobenius (or Hilbert-Schmidt) norm
$\|X\|_F$. The Frobenius norm is also equal to the Euclidean, or $\ell_2$, norm of the vector of singular
values, i.e.,

$\|X\|_F:= \sqrt{\langle X,X\rangle}=\sqrt{Tr(X^T X)}=\left({\sum_{i=1}^r \sigma_i^2}\right)^{\frac{1}{2}}.$
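The identity $\langle X,X\rangle = Tr(X^T X) = \sum_{ij} X_{ij}^2$ can be checked numerically; a short pure-Python sketch with a made-up example matrix:

```python
def frobenius(X):
    # Frobenius norm: sqrt of the sum of squared entries, i.e. sqrt(<X, X>)
    return sum(v * v for row in X for v in row) ** 0.5

def trace_XtX(X):
    # Tr(X^T X) = sum_j (X^T X)_{jj} = sum_j sum_i X_{ij}^2
    m, n = len(X), len(X[0])
    return sum(sum(X[i][j] ** 2 for i in range(m)) for j in range(n))

X = [[1.0, 2.0],
     [3.0, 4.0]]
print(frobenius(X))         # sqrt(30) ~ 5.477
print(trace_XtX(X) ** 0.5)  # same value
```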

The operator norm (or induced 2-norm) of a matrix is equal to its largest singular value (i.e., the $\ell_\infty$ norm of the vector of singular values):

$\|X\|:= \sigma_1(X).$

The nuclear norm of a matrix is equal to the sum of its singular values, i.e.,

$\|X\|_*:= \sum_{i=1}^r {\sigma_i(X)}$,
and is alternatively known by several other names, including the Schatten 1-norm, the Ky Fan $r$-norm, and the trace class norm. Since the singular values are all non-negative, the nuclear norm is equal to the $\ell_1$ norm of the vector of singular values. These three norms are related by the following inequalities, which hold for any matrix $X$ of rank at most $r$:

$\|X\| \leq \|X\|_F \leq \|X\|_{*} \leq \sqrt{r}\|X\|_F \leq r\|X\|.$
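This chain of inequalities can be verified numerically. For a diagonal matrix the singular values are simply the absolute diagonal entries, so no SVD routine is needed; the matrix diag(3, 4) below is an arbitrary illustrative choice:

```python
# Singular values of X = diag(3, 4) are {4, 3}; rank r = 2.
sigmas = [4.0, 3.0]
r = len(sigmas)

op_norm = max(sigmas)                        # ||X||   = sigma_1 = 4
fro_norm = sum(s * s for s in sigmas) ** 0.5  # ||X||_F = sqrt(16 + 9) = 5
nuc_norm = sum(sigmas)                        # ||X||_* = 4 + 3 = 7

# ||X|| <= ||X||_F <= ||X||_* <= sqrt(r) ||X||_F <= r ||X||
assert op_norm <= fro_norm <= nuc_norm <= r ** 0.5 * fro_norm <= r * op_norm
print(op_norm, fro_norm, nuc_norm)  # 4.0 5.0 7.0
```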

Table 1: A dictionary relating the concepts of cardinality and rank minimization[2].

| parsimony concept | cardinality | rank |
|---|---|---|
| Hilbert space norm | Euclidean | Frobenius |
| sparsity-inducing norm | $\ell_1$ | nuclear |
| dual norm | $\ell_\infty$ | operator |
| additivity | disjoint support | orthogonal row and column spaces |
| convex optimization | linear programming | semidefinite programming |

The rank of a matrix $X$ equals the number of its non-zero singular values, just as the $\ell_0$ norm of a vector is the number of its non-zero elements.

In compressed sensing (in sparse representation), the original objective function is:

$\min \|x\|_0$  s.t. $Ax=b$.
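For tiny problems this $\ell_0$ objective can be attacked by brute force over candidate supports. The hypothetical sketch below handles only the 1-sparse case (trying each column of $A$ in turn); $A$ and $b$ are made-up illustrative data:

```python
def sparsest_single_support(A, b):
    # Brute-force search for a 1-sparse x with A x = b:
    # for each column j, try to solve c * A[:, j] = b for a scalar c.
    m, n = len(A), len(A[0])
    for j in range(n):
        col = [A[i][j] for i in range(m)]
        pivots = [i for i in range(m) if col[i] != 0]
        if not pivots:
            continue
        c = b[pivots[0]] / col[pivots[0]]  # scalar determined by one row
        if all(abs(c * col[i] - b[i]) < 1e-12 for i in range(m)):
            x = [0.0] * n
            x[j] = c
            return x
    return None  # no 1-sparse solution exists

A = [[1.0, 0.0, 1.0],
     [0.0, 1.0, 1.0]]
b = [1.0, 1.0]
print(sparsest_single_support(A, b))  # [0.0, 0.0, 1.0]
```

The exhaustive search is exponential in general, which is exactly why the convex $\ell_1$ relaxation is used in practice.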

In low rank representation, the original objective function is:

$\min \operatorname{rank}(Z)$  s.t. $X=XZ$.

I am curious about the relationship between sparse representation and low-rank representation.

References:

[1]   Weisstein, Eric W. "Vector Norm." From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/VectorNorm.html.

[2] Recht, Benjamin, Maryam Fazel, and Pablo A. Parrilo. "Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization." SIAM review 52, no. 3 (2010): 471-501.

Appendix:

Vector and matrix norms

Syntax

n = norm(X,2)
n = norm(X)
n = norm(X,1)
n = norm(X,Inf)
n = norm(X,'fro')
n = norm(v,p)
n = norm(v,Inf)
n = norm(v,-Inf)

Description

The norm function calculates several different types of matrix and vector norms. If the input is a vector or a matrix:

n = norm(X,2) returns the 2-norm of X.

n = norm(X) is the same as n = norm(X,2).

n = norm(X,1) returns the 1-norm of X.

n = norm(X,Inf) returns the infinity norm of X.

n = norm(X,'fro') returns the Frobenius norm of X.

In addition, when the input is a vector v:

n = norm(v,p) returns the p-norm of v. The p-norm is sum(abs(v).^p)^(1/p).

n = norm(v,Inf) returns the largest element of abs(v).

n = norm(v,-Inf) returns the smallest element of abs(v).

http://blog.sciencenet.cn/blog-621576-652662.html
