Data Processing Exercise
Topics in Large Dimensional Data Processing Exercise
Wei Dai November 16, 2023
EEE Department, Imperial College London
Coursework 3
8.3. Blind Deconvolution: Convex Relaxation
In this section, we observe the output of the cyclic convolution
\[
\boldsymbol{y} = \boldsymbol{x} \circledast_n \boldsymbol{h}
\]
and try to find the signal 𝒙 and the channel 𝒉 using convex optimization.
The assumption is that 𝒙 has small total variation
\[
\mathrm{TV}(\boldsymbol{x}) = \sum_m |x_{m+1} - x_m| ,
\]
and 𝒉 is sparse and hence has small ℓ1-norm.
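In code, this total variation is simply the ℓ1-norm of the (non-cyclic) first-difference vector; a minimal sketch:

```python
import numpy as np

def total_variation(x):
    """TV(x) = sum_m |x_{m+1} - x_m|: the l1-norm of the first differences."""
    return np.sum(np.abs(np.diff(x)))
```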
We formulate the following convex optimization problem:
\[
\min_{\boldsymbol{X}_{1:4}} \ \frac{1}{2}\left\|\boldsymbol{y} - \mathcal{A}(\boldsymbol{X}_1)\right\|_2^2 + \lambda_1 \left\|\boldsymbol{X}_2\right\|_* + \lambda_2 \,\mathrm{TV}_{\mathrm{col}}(\boldsymbol{X}_1;\boldsymbol{X}_3) + \lambda_3 \left\|\boldsymbol{X}_4\right\|_{2,1}
\]
\[
\text{s.t.}\quad \boldsymbol{X}_2 = \boldsymbol{X}_1, \quad \boldsymbol{X}_3 = (\boldsymbol{X}_1)_{2:m,:} - (\boldsymbol{X}_1)_{1:m-1,:}, \quad \boldsymbol{X}_4 = \boldsymbol{X}_1,
\]
where ∥·∥∗ denotes the nuclear norm, TV_col represents the row-wise total variation
\[
\mathrm{TV}_{\mathrm{col}}(\boldsymbol{X}_1;\boldsymbol{X}_3) := \sum_m \left\|(\boldsymbol{X}_1)_{m+1,:} - (\boldsymbol{X}_1)_{m,:}\right\|_2 = \left\|\boldsymbol{X}_3^{\mathsf T}\right\|_{2,1},
\]
and ∥·∥_{2,1} stands for the ℓ_{2,1}-norm
\[
\left\|\boldsymbol{X}\right\|_{2,1} = \sum_n \left\|\boldsymbol{X}_{:,n}\right\|_2 .
\]
We shall solve the above problem using ADMM.
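For intuition, if the standard lifting 𝑿1 = 𝒙𝒉ᵀ is used (an assumption here; the precise definition of A is the one given in the lecture notes), the linear operator A sums the lifted matrix along circulant diagonals so that A(𝒙𝒉ᵀ) reproduces the cyclic convolution of 𝒙 and 𝒉. A minimal NumPy sketch of this assumed operator:

```python
import numpy as np

def lifted_conv_operator(X):
    """Assumed lifted measurement operator: y_k = sum_m X[m, (k - m) mod n],
    so that A(x h^T) reproduces the cyclic convolution of x and h."""
    n = X.shape[0]
    k = np.arange(n)
    y = np.zeros(n)
    for m in range(n):
        y += X[m, (k - m) % n]
    return y

# Sanity check against FFT-based cyclic convolution.
rng = np.random.default_rng(0)
n = 8
x, h = rng.standard_normal(n), rng.standard_normal(n)
y_lifted = lifted_conv_operator(np.outer(x, h))
y_circ = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))
assert np.allclose(y_lifted, y_circ)
```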
1. Write the explicit form of the augmented Lagrangian (similar to (8.1)). [1]
2. Write a function to update 𝑿1 when all other variables are fixed. Save the value of 𝑿1 into the data file. [4]
3. Write a function to update 𝑿2:4 when all other variables are fixed. Save the values of 𝑿2:4 into the data file. [9]
4. Write a function to update 𝑼2:4 when all other variables are fixed. Save the values of 𝑼2:4 into the data file. [3]
5. Now write a function to implement the ADMM algorithm, including the stopping criteria mentioned in the lecture notes; a generic skeleton of the loop is sketched after this list. Here we use the error tolerance constants 𝜖abs = 𝜖rel = 10⁻⁴. Save your final result 𝑿ˆ into the data file. [4]
Note that the hyperparameters affect the critical point and convergence rate. In this particular part, you are allowed to change the hyperparameters in order to get good numerical results.
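For part 5, the overall loop can follow the generic scaled-form ADMM pattern below. This is only a sketch under assumptions: x1_update, z_update, B, and its adjoint Bt are hypothetical placeholders for the steps derived in parts 1–4, Z and U stack the variables 𝑿2:4 and the scaled duals 𝑼2:4, and the residual-based stopping rule shown is the standard one (the criterion in the lecture notes should take precedence).

```python
import numpy as np

# Generic scaled-form ADMM skeleton for min f(X1) + g(Z) s.t. B(X1) = Z,
# where Z stacks X2, X3, X4 and B stacks the three constraint maps.
def admm(x1_update, z_update, B, Bt, X1, Z, rho=1.0,
         eps_abs=1e-4, eps_rel=1e-4, max_iter=1000):
    U = np.zeros_like(Z)                        # scaled dual variable
    for _ in range(max_iter):
        X1 = x1_update(Z, U, rho)               # minimise augmented Lagrangian in X1
        BX1 = B(X1)
        Z_new = z_update(BX1, U, rho)           # proximal step on g
        r = BX1 - Z_new                         # primal residual
        s = rho * Bt(Z_new - Z)                 # dual residual
        Z = Z_new
        U = U + r                               # scaled dual update
        # standard residual-based stopping rule with eps_abs = eps_rel = 1e-4
        eps_pri = np.sqrt(r.size) * eps_abs + eps_rel * max(
            np.linalg.norm(BX1), np.linalg.norm(Z))
        eps_dual = np.sqrt(U.size) * eps_abs + eps_rel * np.linalg.norm(rho * Bt(U))
        if np.linalg.norm(r) <= eps_pri and np.linalg.norm(s) <= eps_dual:
            break
    return X1, Z, U
```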
8.4. Blind Deconvolution
Similar to the last section, we observe the output of the cyclic convolution
\[
\boldsymbol{y} = \boldsymbol{x} \circledast_n \boldsymbol{h}
\]
and try to find the signal 𝒙 and the channel 𝒉. The assumption is that 𝒙 has small total variation
\[
\mathrm{TV}(\boldsymbol{x}) = \sum_m |x_{m+1} - x_m| ,
\]
and 𝒉 is sparse and hence has small ℓ1-norm. We also assume that ∥𝒙∥2 = 1.
We formulate the following optimization problem:
\[
\min_{\boldsymbol{x},\,\boldsymbol{h}} \ \frac{1}{2}\left\|\boldsymbol{y} - \boldsymbol{x} \circledast_n \boldsymbol{h}\right\|_2^2 + \delta\!\left(\|\boldsymbol{x}\|_2 = 1\right) + \lambda_1 \left\|\mathrm{TV}(\boldsymbol{x})\right\|_0 + \lambda_2 \left\|\boldsymbol{h}\right\|_0 .
\]
To proceed, we relax it into
\[
\begin{aligned}
\min_{\boldsymbol{x}_{1:3},\,\boldsymbol{h}_{1:2}} \ & \frac{1}{2}\left\|\boldsymbol{y} - \boldsymbol{x}_1 \circledast_n \boldsymbol{h}_1\right\|_2^2 + \delta\!\left(\|\boldsymbol{x}_2\|_2 = 1\right) + \lambda_1 \left\|\boldsymbol{x}_3\right\|_0 + \lambda_2 \left\|\boldsymbol{h}_2\right\|_0 \\
& + \frac{\alpha}{2}\left\|\boldsymbol{x}_2 - \boldsymbol{x}_1\right\|_2^2 + \frac{\alpha}{2}\left\|\boldsymbol{x}_3 - \big((\boldsymbol{x}_1)_{2:m} - (\boldsymbol{x}_1)_{1:m-1}\big)\right\|_2^2 + \frac{\alpha}{2}\left\|\boldsymbol{h}_2 - \boldsymbol{h}_1\right\|_2^2 ,
\end{aligned}
\]
where 𝛼 > 0.
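Both nonsmooth terms in this relaxation have simple proximal maps, which the one-step proximal gradient updates below rely on: the ℓ0 penalties lead to componentwise hard thresholding, and the indicator 𝛿(∥𝒙2∥2 = 1) to a projection onto the unit sphere. A minimal sketch using the standard formulas (the function names are illustrative, not part of the coursework):

```python
import numpy as np

def prox_l0(v, t):
    """Proximal map of t*||.||_0 at v: hard thresholding.
    Entries with v_i**2 <= 2*t are set to zero (take t = tau*lambda)."""
    out = v.copy()
    out[v**2 <= 2.0 * t] = 0.0
    return out

def project_unit_sphere(v):
    """Proximal map of the indicator delta(||v||_2 = 1): normalise v
    (any unit vector is a valid projection when v = 0)."""
    nrm = np.linalg.norm(v)
    return v / nrm if nrm > 0 else np.eye(v.size)[0]
```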
We solve the above optimization problem using alternating minimization:
1. With fixed 𝒉1:2, we update 𝒙1:3 using the proximal gradient method (with only one iteration).
a) Find the Hessian matrix of
\[
\frac{\alpha}{2}\left\|\boldsymbol{x}_2 - \boldsymbol{x}_1\right\|_2^2 + \frac{\alpha}{2}\left\|\boldsymbol{x}_3 - \big((\boldsymbol{x}_1)_{2:m} - (\boldsymbol{x}_1)_{1:m-1}\big)\right\|_2^2 .
\]
b) Find the upper bound (𝜏ub) of the step size in the proximal gradient step to update 𝒙1:3. [1]
c) Choose the step size as 0.8𝜏ub and update 𝒙1:3 using one iteration of the proximal gradient method. Save your result into the data file. [2+2+2=6]
d) Evaluate the subgradient with respect to 𝒙1:3. [3]
2. With fixed 𝒙1:3, we update 𝒉1:2 using the proximal gradient method (with only one iteration).
a) Find the Hessian matrix of
\[
\frac{\alpha}{2}\left\|\boldsymbol{h}_2 - \boldsymbol{h}_1\right\|_2^2 .
\]
b) Find the upper bound (𝜏ub) of the step size in the proximal gradient step to update 𝒉1:2. [1]
c) Choose the step size as 0.8𝜏ub and update 𝒉1:2 using one iteration of the proximal gradient method. Save your result into the data file. [2+2=4]
d) Evaluate the subgradient with respect to 𝒉1:2. [2]
3. Evaluate the subgradient with respect to 𝒙1:3 using the updated 𝒙1:3 and 𝒉1:2. [2]
4. Complete the alternating minimization method where each iteration involves one proximal gradient step to update 𝒙1:3 and one to update 𝒉1:2 (a generic skeleton is sketched after this list).
Include in your stopping criteria that the number of alternating minimization iterations is at most 500.
Save your results into the data file.
Note that the hyperparameters affect the critical point and convergence rate. In this particular part, you are allowed to change the hyperparameters in order to get good numerical results. [4]
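A possible overall structure for part 4 is sketched below, assuming stacked iterates for 𝒙1:3 and 𝒉1:2, gradients grad_x and grad_h of the smooth part, block proximal maps prox_x and prox_h (built from the operators sketched earlier), and step sizes tau_x and tau_h chosen as 0.8𝜏ub from parts 1b) and 2b); all names are illustrative placeholders.

```python
import numpy as np

def alternating_min(x, h, grad_x, grad_h, prox_x, prox_h, tau_x, tau_h,
                    max_iter=500, tol=1e-6):
    """One proximal gradient step on x_{1:3} (h fixed), then one on h_{1:2}
    (x fixed), repeated for at most 500 alternations."""
    for _ in range(max_iter):
        x_old, h_old = x, h
        x = prox_x(x - tau_x * grad_x(x, h), tau_x)   # proximal gradient step on x_{1:3}
        h = prox_h(h - tau_h * grad_h(x, h), tau_h)   # proximal gradient step on h_{1:2}
        # one possible additional stopping test on the change of the iterates
        if np.linalg.norm(x - x_old) + np.linalg.norm(h - h_old) < tol:
            break
    return x, h
```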