ELEC70037 Coursework

Topics in Large Dimensional Data Processing Exercise
Wei Dai October 30, 2023
EEE Department, Imperial College London
Contents

I. Teaching Organisation 1
1. Teaching Organisation 2
   1.1. Overview 2
   1.2. Coursework 2
        1.2.1. Schedule 2
        1.2.2. Marking (for Coursework 1-3) 3
   1.3. Software for Coursework 1-3 4
        1.3.1. Julia for Programming 4
2. Feedback 5
   2.1. You Said We Did 5
   2.2. Your Feedback 5
   2.3. Our Feedback to You 5
3. Important Notes 6
   3.1. Plagiarism 6
   3.2. Notes for Coursework Submission 6
        3.2.1. Handling Solutions 6
II. Coursework 1 8
4. Linear Algebra: Basics 9
   4.1. Trace and Inner Product of Matrices 9
   4.2. The Matrix of the Linear Map 9
   4.3. Adjoint Operator 10
5. Least Squares 14
   5.1. The Netflix Problem 14
   5.2. Estimation Accuracy 14
   5.3. LS Recovery from Convolution 15
   5.4. LS Recovery for MRI 16
III. Coursework 2 17
6. Proximal Operator 18
7. Sparse Inverse Problems 20
   7.1. Greedy Algorithms 20
   7.2. Proximal Gradient Method 20
   7.3. MRI CS Recovery: Wavelet 20
   7.4. MRI CS Recovery: DCT 21
   7.5. MRI CS Recovery: Wavelet and DCT 21
IV. Coursework 3 23
V. Coursework 4 25
8. Learn and Tell Coursework Guideline 26
   8.1. Coursework Format 26
        8.1.1. Presentation Sessions 26
   8.2. Marking Scheme 27
        8.2.1. Marking Criteria for Presentations 27
   8.3. Topic Selection 27
        8.3.1. Possible Topics 27

Teaching Organisation

1. Teaching Organisation

1.1. Overview

Lectures
• Every Monday between 9/10 and 27/11, 14:00-15:50, Room 305.

Office hours
• Every Monday between 9/10 and 27/11, 12:00-13:00, Room 503.

Communications: Should you have any questions or inquiries:
• Attend the designated office hours.
• Post them on the Q&A channel.
• For private matters, please contact us at weidai.gta.course@gmail.com.
• Refrain from using my college email address, as it might result in your email getting overlooked.
Assessment
Four pieces of coursework, each counting 25%.
• Coursework 1-3 are composed of technical questions.
• Coursework 4 is a “Learn and tell” presentation.
1.2. Coursework

1.2.1. Schedule

Table 1.1.: Coursework Schedule

Coursework     Assignment       Due Date/Time           Feedback
Coursework 1   (Week 3) 16/10   (Week 5) 30/10 09:59    (Week 6) 6/11
Coursework 2   (Week 5) 30/10   (Week 7) 13/11 09:59    (Week 8) 20/11
Coursework 3   (Week 7) 13/11   (Week 11) 11/12 09:59   Marks in Jan. 2024

Coursework 4 has its own milestones: Assignment (Week 5) 30/10; Group/Topic decisions (Week 7) 13/11; Slides Due (Week 10) 4/12 08:59; Presentations (Week 10) 4-6/12; Peer marking (Week 10) 7/12 23:59; Feedback (Week 11) 11/12.

Note that coursework 4 will finish before coursework 3 ends. The procedure for coursework 1-3 is given below.
▶ Each individual student will receive their own data file from the GTAs.
▶ Students can form groups to finish the coursework.
▶ Before the due time, students need to submit their individual coursework submissions, and specify their group and their contributions to the group. Please note that the relevant files will not be allowed to change after the deadline.
▶ GTAs will mark individual coursework submissions. Let 𝑀𝑖,0 denote the raw mark received by student 𝑖. Their adjusted mark will be calculated based on equation (1.1). The adjusted mark will be returned to the student.
▶ GTAs will choose some coursework questions to discuss. The students who are chosen to lead specific discussions will be given extra marks in order to encourage participation.
The process of coursework 4 is detailed in Chapter 8.

1.2.2. Marking (for Coursework 1-3)
The marking formula for the first three pieces of coursework is given by

𝑀𝑖 = min( (Σ𝑘∈G𝑙 𝑀𝑘,0) · 𝐶𝑖 · 𝑊𝑙 , 𝑀𝑖,0 ),   (1.1)

where
▶ 𝑀𝑖 is the adjusted mark received by student 𝑖,
▶ 𝑀𝑘,0 is the raw mark obtained by student 𝑘,
▶ Σ𝑘∈G𝑙 𝑀𝑘,0 is the total raw mark obtained by the group G𝑙,
▶ 𝐶𝑖 is the contribution of student 𝑖 in the group,¹
▶ 𝑊𝑙 is the weighting coefficient for the group G𝑙, depending on the size of the group,
▶ and we apply the min function to ensure the adjusted mark does not exceed the raw mark.
The weighting coefficients for groups are detailed in Table 1.2. There are several considerations behind the design.
▶ We encourage both individual work and team work.
▶ The default size of the group is 3 or 4.
▶ Students from the same group may share the same code.
▶ Cheating offences and plagiarism are taken very seriously and are dealt with according to the College’s Academic Misconduct Policy and Procedure.

1: Note that Σ𝑘∈G𝑙 𝐶𝑘 = 1.00 = 100%.
Table 1.2.: The weighting coefficient 𝑊 for the group is decided by the size of the group.

Size of the group           1      2      3      4      5
Weighting coefficient 𝑊     1.00   0.96   0.92   0.90   0.70
Remark 1.2.1 (Extra marks) If
1. a student indicates that they are comfortable to share their coursework solutions in the feedback/discussion sessions,
2. their coursework solutions are indeed chosen by GTAs and the lecturer for feedback/discussion, and
3. the student participates in explaining the key ideas behind their solutions,
then extra marks will be given to the student.
The amount of extra marks given to the student will be decided by the GTAs and the lecturer, based on the quality of the solutions and the explanation of key ideas. The amount of extra marks will be announced to the student.
1.3. Software for Coursework 1-3
The coursework requires Julia programming. The coursework submission files (except coursework 4) include a Jupyter notebook file and a data file in JLD2 format. We recommend VSCode as the default editor.
The software that you need to install/have:
▶ Jupyter Notebook
  • Download and install the package management software Anaconda. By default, it will install Python and Jupyter Notebook on your system.
▶ Julia programming language
  • Download and install the current stable version of Julia.
  • Check whether Julia has been installed properly: run a Julia interactive session (a.k.a. the REPL) by following the instructions.
▶ VSCode
  • Download and install VSCode. We recommend VSCode as the default editor.
  • We need to install some extensions for VSCode. See here for how to install VSCode extensions.
    * Anaconda should have automatically installed the VSCode extensions Python and Pylance for you. If not, install them.
    * Install the extension Julia for Julia programming in VSCode. See here and here for more details.
1.3.1. Julia for Programming

▶ A collection of tutorials.
▶ Introduction to Scientific Programming and Machine Learning with Julia.
▶ A tutorial in pdf format.

2. Feedback

2.1. You Said We Did
Based on students’ feedback in the past several years, we have made the following changes.
▶ Lecture notes
  • We have changed the lecture notes from slides to a book format.
    * Contents are more self-contained.
    * We use the kaobook LaTeX template so that we have enough space to make notes in the lectures.
  • Less theory and more examples.
▶ Assessment
  • The format has been changed from a final exam to four pieces of coursework.
    * Emphasise a lot more the applications of theory.
    * Help students hone analysis and programming skills.
  • Significantly reduce the weighting of peer marking.
  • Introduce and refine the coursework contents and the marking scheme to encourage both individual efforts and teamworking.
    * Each individual student gets their own data.
    * Each individual student can specify their own contributions to the group.
    * Each individual student’s mark depends on the size of the group.
2.2. Your Feedback
Towards the end of the term, please do the MEQ (previously known as SOLE) questionnaires from the college.

Towards the end of the term, GTAs may contact you for our own questionnaires for this module.
2.3. Our Feedback to You
▶ We aim to finish the coursework marking within 10 working days. We will return the relevant information to you.
▶ In the lectures, we will discuss some interesting/challenging coursework questions together.

3. Important Notes
3.1. Plagiarism
PLAGIARISM/COLLUSION DECLARATION
Coursework submitted for assessment must be the original work of you and your group. Assignments are subject to regular checks for plagiarism and/or collusion. Plagiarism is the presentation of another person’s thoughts or words (those outside your group) as if they were your own. Collusion involves obtaining help from someone outside your group to complete your work. In preparing your coursework, you should not seek help, or copy from any other person or source, including the Internet, without proper and explicit acknowledgement.
There is a procedure in place for you to declare individual contributions within your group for coursework. You must declare the contributions fairly and accurately.
You must not disclose your solutions or insights related to coursework with anyone else, including future students or the Internet.
By acknowledging the statements above, you are declaring that both this and all subsequent pieces of coursework are, and will remain, the original work of you and your group.
▶ Submissions will not be accepted without the aforementioned declaration.
▶ Members of a group are deemed to have collective responsibility for the integrity of the work submitted and are liable for any penalty imposed, proportionate to their contributions.
3.2. Notes for Coursework Submission
▶ Each registered student will get a data file. The data in the data file can be unique.
▶ Each registered student needs to submit the solutions related to their own data, no matter whether they are in groups or not.
3.2.1. Handling Solutions
In our coursework, we use the following convention.
▶ If the solution to a question is a unique integer, you need to assign an integer value to your solution variable.
▶ If the solution to a question does not exist, you need to create a 1-D array of length zero.
▶ If the solution to a question is not unique, you need to create a 1-D array, of which the length is the number of distinct solutions, and then specify the values in ascending order.

x = 42;                         # unique integer solution (illustrative value)
y = Array{Int64,1}(undef,0);    # no solution: a 1-D array of length zero
z = Array{Int64,1}(undef,3);    # three distinct solutions, in ascending order
z[1] = 3; z[2] = 4; z[3] = 5;

Coursework 1

4. Linear Algebra: Basics

4.1. Trace and Inner Product of Matrices

Given a square matrix 𝑨 ∈ R𝑛×𝑛, its trace is defined as the sum of the diagonal elements:

trace(𝑨) := Σ𝑖=1..𝑛 𝐴𝑖,𝑖.
Use the parameters in your data file to answer the following questions.
1. Let 𝑨, 𝑩 ∈ R𝑚×𝑛.
   a) Identify the dimension of 𝑨𝑩T. [2]
   b) Identify the dimension of 𝑨T𝑩. [2]
   c) Is the following statement true or false? trace(𝑨𝑩T) = trace(𝑨T𝑩).
2. For any two matrices 𝑨, 𝑩 ∈ R𝑚×𝑛, define the inner product between them: ⟨𝑨, 𝑩⟩ := trace(𝑨T𝑩). Is the following statement true or false? ⟨𝑨, 𝑩⟩ = ⟨vect(𝑨), vect(𝑩)⟩.
3. Let 𝑨 ∈ R𝑘×𝑙 and 𝑩 ∈ R𝑚×𝑛.
   a) Specify the dimensions of 𝑼 and 𝑽 such that ⟨𝑼𝑨𝑽T, 𝑩⟩ is well-defined. [4]
   b) Is the following statement true or false? ⟨𝑼𝑨𝑽T, 𝑩⟩ = ⟨𝑨, 𝑼T𝑩𝑽⟩.
4.2. The Matrix of the Linear Map

Remark 4.2.1 A linear map (or linear transformation) between two finite-dimensional vector spaces can always be represented by a matrix, called the matrix of the linear map.

Many of the coursework questions use the above fact.

In the following problem, the linear map is from a vector space of matrices of dimension 𝑚 × 𝑛 to a vector space of matrices of dimension 𝑠 × 𝑡. This linear map can be represented by a matrix of proper dimension.
C: R𝑚×𝑛 → R𝑠×𝑡,
𝑿 ↦ 𝒀 = 𝑨𝑿𝑩.
Find the matrix of the linear map, denoted by 𝑪, using Kronecker product such that
vect(𝒀) = 𝑪vect(𝑿).
1: 𝑪 = 𝑩T ⊗ 𝑨
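The footnote’s identity vect(𝑨𝑿𝑩) = (𝑩T ⊗ 𝑨) vect(𝑿) can be checked numerically. A minimal Julia sketch; the dimensions and random matrices below are illustrative, not from the data file:

```julia
using LinearAlgebra

m, n, s, t = 3, 4, 2, 5
A = randn(s, m); B = randn(n, t); X = randn(m, n)

Y = A * X * B                  # evaluate the linear map directly
C = kron(transpose(B), A)      # matrix of the map: C = Bᵀ ⊗ A

# vec(Y) and C * vec(X) agree up to floating-point error
@assert norm(vec(Y) - C * vec(X)) < 1e-10
```

Note that C has size (s·t) × (m·n), matching the vectorised codomain and domain.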
4.3. Adjoint Operator
Let A be a linear operator mapping from R𝑛 to R𝑚. It can always be represented by a matrix 𝑨 ∈ R𝑚×𝑛, that is,

𝒙 ↦ 𝒚 = 𝑨𝒙.
Definition 4.2.1 Let 𝑨 ∈ R𝑚×𝑛 and 𝑩 ∈ R𝑝×𝑞. The Kronecker product 𝑨 ⊗ 𝑩 is the 𝑝𝑚 × 𝑞𝑛 (block) matrix given by

𝑨 ⊗ 𝑩 := [ 𝐴1,1𝑩  ⋯  𝐴1,𝑛𝑩
             ⋮    ⋱    ⋮
           𝐴𝑚,1𝑩  ⋯  𝐴𝑚,𝑛𝑩 ].
Remark 4.2.2 It is important to note that although the above linear map can be represented by the matrix 𝑪, the evaluation of the linear map in practice is via 𝒀 = 𝑨𝑿𝑩 rather than reshape(C * X[:], s, t).
The approach to evaluate the linear map without using its matrix representation is sometimes called the matrix-free approach.
In the following coursework questions, sometimes you are asked to find the matrix of the linear map, sometimes you are expected to evaluate the linear map without using the matrix representation.
Definition 4.3.1 The adjoint of the linear operator A, denoted by A∗, is defined via
⟨A(𝒙), 𝒚⟩ = ⟨𝒙, A∗(𝒚)⟩ for all 𝒙 ∈ R𝑛 and 𝒚 ∈ R𝑚.
It can be verified that
A∗(𝒚) = 𝑨T𝒚.

If A: C𝑛 → C𝑚, then A is generally represented by a matrix 𝑨 ∈ C𝑚×𝑛. In this case,
A∗(𝒚) = 𝑨H𝒚,
where the superscript H denotes ‘conjugate transpose’.
The following exercise questions involve identification of linear operators and their adjoint operators. One way to check the correctness of your adjoint operator is to see whether the equality

⟨A(𝒙), A(𝒚)⟩ = ⟨𝒙, A∗(A(𝒚))⟩

holds numerically for randomly generated 𝒙 and 𝒚.
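This consistency check can be scripted in a few lines. A sketch with a random dense operator; the dimensions are illustrative, and your own operators would replace Aop and Aadj:

```julia
using LinearAlgebra

n, m = 5, 3
A = randn(m, n)
Aop(x)  = A * x                 # the forward operator
Aadj(y) = transpose(A) * y      # candidate adjoint: A*(y) = Aᵀy

x, y = randn(n), randn(n)       # both in the domain, as in the check above
# ⟨A(x), A(y)⟩ must equal ⟨x, A*(A(y))⟩ if the adjoint is correct
@assert abs(dot(Aop(x), Aop(y)) - dot(x, Aadj(Aop(y)))) < 1e-10
```

If your adjoint has a bug (a missing transpose, a wrong normalisation), the two inner products will typically disagree by far more than rounding error.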
1. Consider the subsampling operator PΩ, where Ω = {𝑖1, 𝑖2, ⋯, 𝑖𝑚} is a subset of [1 : 𝑛] := {1, 2, ⋯, 𝑛} (𝑚 ≤ 𝑛), defined as

   PΩ: C𝑛 → C𝑚, 𝒙 ↦ 𝒚 with 𝑦𝑙 = 𝑥𝑖𝑙.

   a) Write a function to find the corresponding matrix representations of PΩ and P∗Ω. Use the data in the data file for a test. [4]
   b) Write a function to evaluate 𝒚 = PΩ(𝒙) without using the matrix representation of the linear map. Use the data in the data file for a test. [2]
   c) Write a function to evaluate 𝒛 = P∗Ω(𝒚) without using the matrix representation of the linear map. Use the data in the data file for a test. [2]
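Matrix-free evaluation of PΩ boils down to indexing, and its adjoint to scattering. A sketch with a made-up index set Ω (your data file defines the real one); this is an illustration of the definition, not a complete solution:

```julia
n = 6
Ω = [2, 4, 5]                   # illustrative index set, so m = 3

subsample(x, Ω) = x[Ω]          # y_l = x_{i_l}: keep only the indexed entries

function subsample_adj(y, Ω, n)
    z = zeros(eltype(y), n)     # the adjoint scatters y back, zeros elsewhere
    z[Ω] .= y
    return z
end

x = collect(1.0:6.0)
y = subsample(x, Ω)             # [2.0, 4.0, 5.0]
z = subsample_adj(y, Ω, 6)      # [0.0, 2.0, 0.0, 4.0, 5.0, 0.0]
```

One can verify ⟨PΩ(𝒙), 𝒚⟩ = ⟨𝒙, P∗Ω(𝒚)⟩ directly: both sides equal Σ𝑙 𝑥𝑖𝑙 𝑦𝑙.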
2. Consider the Hankel operator given by

   H𝑝,𝑞: C𝑝+𝑞−1 → C𝑝×𝑞,

   𝒙 = (𝑥1, 𝑥2, …, 𝑥𝑝+𝑞−1)T ↦ 𝑯 = [ 𝑥1   𝑥2   𝑥3   ⋯  𝑥𝑞
                                     𝑥2   𝑥3   𝑥4   ⋯  𝑥𝑞+1
                                     ⋮    ⋮    ⋮    ⋱  ⋮
                                     𝑥𝑝   𝑥𝑝+1 𝑥𝑝+2 ⋯  𝑥𝑝+𝑞−1 ].

   Or equivalently, 𝐻𝑚,𝑛 = 𝑥𝑚+𝑛−1.

   a) Write a function to find the corresponding matrix representations of H𝑝,𝑞 and H∗𝑝,𝑞. Use the data in the data file for a test. [4]
   b) Write a function to evaluate 𝑯 = H𝑝,𝑞(𝒙) without using the matrix representation of the linear map. Use the data in the data file for a test. [2]
   c) Write a function to evaluate 𝒛 = H∗𝑝,𝑞(𝑯) without using the matrix representation of the linear map. Use the data in the data file for a test. [2]
   d) Evaluate 𝒛 ./ 𝒙, where the symbol ./ denotes element-wise division.
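The matrix-free Hankel map follows the rule 𝐻[𝑖,𝑗] = 𝑥[𝑖+𝑗−1], and its adjoint sums each anti-diagonal. A hedged sketch with illustrative sizes, not a complete solution:

```julia
# Matrix-free Hankel map H_{p,q} and its adjoint.
hankel_map(x, p, q) = [x[i + j - 1] for i in 1:p, j in 1:q]   # H[i,j] = x[i+j-1]

function hankel_adj(H)
    p, q = size(H)
    z = zeros(eltype(H), p + q - 1)
    for i in 1:p, j in 1:q
        z[i + j - 1] += H[i, j]   # adjoint sums along anti-diagonals
    end
    return z
end

x = collect(1.0:5.0)              # p = 3, q = 3, so length p+q-1 = 5
H = hankel_map(x, 3, 3)
z = hankel_adj(H)
# z ./ x counts how often each entry of x appears in H: here [1, 2, 3, 2, 1]
```

This is why question d) is interesting: the element-wise ratio 𝒛 ./ 𝒙 reveals the multiplicity of each entry in the Hankel structure.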
3. Let 𝒉 ∈ R𝑚 be given. For any 𝒙 ∈ R𝑛, the corresponding convolution map is given by

   C𝒉: R𝑛 → R𝑛+𝑚−1, 𝒙 ↦ 𝒚 = 𝒙 ⊛ 𝒉,

   𝑦[𝑡] := Σ1≤𝜏≤𝑡 𝑥[𝑡+1−𝜏] · ℎ[𝜏],

   where entries with out-of-range indices are treated as zero.

   a) Write a function to find the corresponding matrix representations of C𝒉 and C∗𝒉. Use the data in the data file for a test. [4]
   b) Write a function to evaluate 𝒚 = 𝒙 ⊛ 𝒉 = C𝒉(𝒙) without using the matrix representation of the linear map. Use the data in the data file for a test. [2]
   c) Write a function to evaluate 𝒛 = C∗𝒉(𝒚) without using the matrix representation of the linear map. Use the data in the data file for a test. [2]
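A direct, matrix-free implementation of this convolution and its adjoint can be sketched as below; the loop bounds clip 𝜏 so that both indices stay in range. Illustrative only, not a complete solution:

```julia
using LinearAlgebra

# y[t] = Σ_τ x[t+1-τ] h[τ], evaluated without forming the convolution matrix.
function conv_map(x, h)
    n, m = length(x), length(h)
    y = zeros(promote_type(eltype(x), eltype(h)), n + m - 1)
    for t in 1:(n + m - 1), τ in max(1, t + 1 - n):min(t, m)
        y[t] += x[t + 1 - τ] * h[τ]
    end
    return y
end

# The adjoint (correlation): z[s] = Σ_τ y[s+τ-1] h[τ].
function conv_adj(y, h, n)
    m = length(h)
    z = zeros(promote_type(eltype(y), eltype(h)), n)
    for t in 1:(n + m - 1), τ in max(1, t + 1 - n):min(t, m)
        z[t + 1 - τ] += y[t] * h[τ]
    end
    return z
end

conv_map([1.0, 2.0, 3.0], [1.0, 1.0])   # [1.0, 3.0, 5.0, 3.0]
```

The adjoint simply swaps the roles of input and output in the same double loop, which makes the inner-product check ⟨C𝒉(𝒙), 𝒚⟩ = ⟨𝒙, C∗𝒉(𝒚)⟩ hold by construction.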
4. Let 𝒉 and 𝒙 be sequences of infinite length that are periodic with period 𝑛, i.e.,

   ℎ[𝑘] = ℎ[𝑙] ∀𝑘, 𝑙 ∈ Z with 𝑘 − 𝑙 ≡ 0 (mod 𝑛),
   𝑥[𝑘] = 𝑥[𝑙] ∀𝑘, 𝑙 ∈ Z with 𝑘 − 𝑙 ≡ 0 (mod 𝑛).

   With slight abuse of notation, use 𝒉 ∈ R𝑛 and 𝒙 ∈ R𝑛 to represent these two infinite sequences. Define the cyclic convolution of them as

   C𝒉,𝑛: R𝑛 → R𝑛, 𝒙 ↦ 𝒚 = 𝒙 ⊛𝑛 𝒉,

   𝑦[𝑡] := Σ1≤𝜏≤𝑛 𝑥[𝑡+1−𝜏] · ℎ[𝜏].

   a) Write a function to find the corresponding matrix representations of C𝒉,𝑛 and C∗𝒉,𝑛. Use the data given in the data file as a test. [4]
   b) Write a function to evaluate 𝒚 = 𝒙 ⊛𝑛 𝒉 = C𝒉,𝑛(𝒙) without using the matrix representation of the linear map. Use the Julia function DSP.conv. Use the data given in the data file as a test. [2]
   c) Write a function to evaluate 𝒛 = C∗𝒉,𝑛(𝒚) without using the matrix representation of the linear map. Use the Julia function DSP.conv. Use the data given in the data file as a test. [2]
It is important to note that DSP.conv is implemented using the convolution theorem. The seminal convolution theorem states that the Fourier transform of a convolution of two signals is the pointwise product of their Fourier transforms. That is,

𝒚 = 𝒙 ⊛𝑛 𝒉 ⇔ F(𝒚) = F(𝒙) ⊙ F(𝒉).

For large dimensional problems, you can observe the speed difference between your own convolution and the Julia DSP.conv implementation.
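The convolution theorem makes cyclic convolution a one-liner with FFTW; a sketch with made-up inputs (DSP.conv itself returns a *linear* convolution, which then needs to be wrapped modulo 𝑛):

```julia
using FFTW

# Cyclic convolution via the convolution theorem: F(y) = F(x) .* F(h).
cyclic_conv(x, h) = real.(ifft(fft(x) .* fft(h)))

x = [1.0, 2.0, 3.0, 4.0]
h = [1.0, 0.0, 0.0, 1.0]
cyclic_conv(x, h)            # [3.0, 5.0, 7.0, 5.0]
```

Here each output entry is 𝑥[𝑡] + 𝑥[(𝑡−3) mod 4], matching the definition above; real.() strips the negligible imaginary parts left by floating-point FFTs of real inputs.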
5. (A linear map related to Magnetic Resonance Imaging (MRI) subsampling)
This question requires background in
▶ 2D Discrete Fourier Transform. See the Wikipedia page https://en.wikipedia.org/wiki/Discrete_Fourier_transform, Sections ‘Definition’, ‘Inverse Transform’, and ‘Multidimensional DFT’. Also see https://www.matecdev.com/posts/julia-fft.html#fftshift-and-fftfreq.
▶ fftshift: See https://www.matecdev.com/posts/julia-fft.html#fftshift-and-fftfreq.
Let 𝑿 ∈ R𝑚×𝑛 be an MRI image. The MRI data compressed sensing can be mathematically expressed as

𝒚 = PΩ ◦ S ◦ F2D(𝑿),

where
▶ F2D denotes the 2D Discrete Fourier Transform,
▶ S denotes the 2D fftshift operator (which is the straightforward extension of the 1D fftshift), and
▶ PΩ is the subsampling operator.

In the following, you are allowed to use FFTW.fft, FFTW.ifft, and FFTW.fftshift. Make sure that your function works for both odd and even 𝑚 and 𝑛.
a) Write a function to evaluate

   𝒚 = PΩ ◦ S ◦ F2D(𝑿)

   without using the matrix representation of the linear map. The outputs of your function must include 𝑶1 = F2D(𝑿), 𝑶2 = S ◦ F2D(𝑿), and 𝑶3 = PΩ ◦ S ◦ F2D(𝑿). Use the data in the data file for a test. [6]

b) Write a function to evaluate

   𝒁 = (PΩ ◦ S ◦ F2D)∗(𝒚)

   without using the matrix representation of the linear map. The outputs of your function must include 𝑶1 = P∗Ω(𝒚), 𝑶2 = S∗ ◦ P∗Ω(𝒚), and 𝑶3 = F∗2D ◦ S∗ ◦ P∗Ω(𝒚). Use the data in the data file for a test.
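A possible shape for these two functions, assuming Ω holds linear indices into the shifted spectrum (the coursework data may use a different convention). Note that for the unnormalised DFT, the adjoint is 𝑚𝑛 times the inverse transform, and the adjoint of fftshift is ifftshift, which also handles odd 𝑚 and 𝑛 correctly:

```julia
using FFTW, LinearAlgebra

function mri_forward(X, Ω)
    O1 = fft(X)          # F2D
    O2 = fftshift(O1)    # S
    O3 = O2[Ω]           # PΩ on the (vectorised) shifted spectrum
    return O1, O2, O3
end

function mri_adjoint(y, Ω, m, n)
    O1 = zeros(ComplexF64, m, n); O1[Ω] .= y    # PΩ*: scatter back
    O2 = ifftshift(O1)                          # S* = S⁻¹
    O3 = m * n * ifft(O2)                       # F2D* = mn · F2D⁻¹
    return O1, O2, O3
end

# adjoint consistency: ⟨A(X), y⟩ == ⟨X, A*(y)⟩
m, n, Ω = 4, 5, [1, 7, 9]
X = randn(m, n); y = randn(ComplexF64, length(Ω))
AX = mri_forward(X, Ω)[3]
Z  = mri_adjoint(y, Ω, m, n)[3]
@assert abs(dot(AX, y) - dot(X, Z)) < 1e-8
```

The factor 𝑚𝑛 comes from F⁻¹ = (1/𝑚𝑛)Fᴴ for the unnormalised DFT matrix F; dropping it is a common source of failed adjoint checks.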

5. Least Squares

5.1. The Netflix Problem
1. The task is to recover an unknown matrix 𝑿 from its partial observations given by
𝒚 = PΩ ( 𝑿 ) + 𝒘 ,
where 𝒘 denotes noise.
The alternating minimization method to solve the Netflix problem is stated below. Suppose we have the prior information that the rank of 𝑿 is at most 𝑟. Then one would find matrices 𝑼 ∈ R𝑚×𝑟 and 𝑽 ∈ R𝑛×𝑟 such that 𝑿 = 𝑼𝑽T. In the 𝑘-th iteration of the alternating minimization, one solves the following two least squares problems sequentially:

𝑽𝑘+1 = arg min𝑽 ∥𝒚 − PΩ(𝑼𝑘𝑽T)∥₂²,
𝑼𝑘+1 = arg min𝑼 ∥𝒚 − PΩ(𝑼(𝑽𝑘+1)T)∥₂².

The partial observations 𝒚 and the index set of the observed entries Ω are given in the data file. Suppose that rank(𝑿) = 10.

a) Write a function to update 𝑽 for a given 𝑼. Use the 𝑼0 in the data file for a test. [2]
b) Write a function to update 𝑼 for a given 𝑽. Use the 𝑽 from the previous sub-question for a test. [2]
c) Write a function for the alternating minimization process. Use the 𝑼0 in the data file as the initial point. Run a test. [2]
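For the 𝑽-update, each row of 𝑽 solves a small least squares problem over the entries observed in the corresponding column, since (𝑼𝑽T)𝑖,𝑗 = 𝑼[𝑖,:] · 𝑽[𝑗,:]. A hedged sketch, assuming Ω is a vector of CartesianIndex{2} aligned with 𝒚 (the data file’s layout may differ):

```julia
using LinearAlgebra

# One V-update of alternating minimisation; illustrative, not a full solution.
function update_V(U, y, Ω, n, r)
    V = zeros(n, r)
    for j in 1:n
        rows = [k for k in eachindex(Ω) if Ω[k][2] == j]   # observations in column j
        isempty(rows) && continue
        Uj = U[[Ω[k][1] for k in rows], :]                 # stacked rows of U
        V[j, :] = Uj \ y[rows]                             # small least squares
    end
    return V
end

# sanity check with a fully observed rank-2 matrix
U = [1.0 0.0; 0.0 1.0; 1.0 1.0]
Vtrue = [2.0 1.0; 0.0 3.0]
X = U * transpose(Vtrue)
Ω = vec(collect(CartesianIndices((3, 2))))
y = [X[k] for k in Ω]
V = update_V(U, y, Ω, 2, 2)
```

The 𝑼-update is symmetric: swap the roles of rows and columns, or apply update_V to the transposed problem.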
5.2. Estimation Accuracy
1. Alice, Bob, and Charlie’s jobs are to estimate an unknown 𝒙 from their respective measurement vector 𝒚 given by
𝒚 = 𝑨𝒙0 + 𝒘, by solving the minimization problem
𝒙̂ = arg min𝒛 ½∥𝒚 − 𝑨𝒛∥₂².
Daniel knows the ground truth 𝒙0 and has the power to choose the noise vector 𝒘 with ∥𝒘∥2 = 1 in generating 𝒚.
a) Daniel is a friend of Alice. He would like to choose a noise vector 𝒘 so that the estimation error
∥𝒙̂ − 𝒙0∥2

is minimised.
Find the best 𝒘 and compute the corresponding estimation error for Alice. [2]
b) Daniel is a foe of Bob. He would like to choose a noise vector 𝒘 so that the estimation error
∥𝒙̂ − 𝒙0∥2
is maximised.
Find the best 𝒘 and compute the corresponding estimation error for Bob. [2]
c) For Charlie, Daniel randomly generates a noise vector 𝒘 with i.i.d. Gaussian entries and then normalises it so that ∥𝒘∥2 = 1 (i.e., 𝒘 has unit 𝑙2 norm and statistically invariant under rotations). Find the expectation of the estimation error, i.e.,
E[∥𝒙̂ − 𝒙0∥2]
that Charlie may expect. [2]
5.3. LS Recovery from Convolution
𝒚 = PΩ(𝒉 ⊛𝑛 𝒙) + 𝒘,

where 𝒚, 𝒉, and Ω are given. The task is to recover 𝒙 from the noisy and partial observation vector 𝒚. Towards this goal, we solve the following least squares problem:

𝒙̂ = arg min𝒙 ½∥𝒚 − PΩ(𝒉 ⊛𝑛 𝒙)∥₂² + (𝛼/2)∥𝒙∥₂²,

where 𝛼 > 0 is a small positive number to ensure the matrix involved in the least squares problem is not rank deficient.
The standard form of a least squares problem is given by

𝒙̂ = arg min𝒙 ½𝒙T𝑨𝒙 + 𝒃T𝒙 + 𝑐.

1. (Matrix based approach)
   a) Write a Julia function to compute the corresponding 𝑨 and 𝒃 from the data given in the data file. [4]
   b) Use Julia’s operator \ to get 𝒙̂. [1]
2. (Matrix free approach)
a) Write a function that uses DSP.conv to compute 𝒖T𝑨𝒗 and 𝒃T𝒖 for arbitrary 𝒖 and 𝒗 of appropriate dimension. Note that 𝑨 and 𝒃 are defined as in the previous question. Your implementation should not use the explicit forms of 𝑨 and 𝒃. Use the data in the data file for a test. [4]
b) Use the Julia library LinearOperators.jl so as to use matrix-free functions to support matrix operations.

Use IterativeSolvers.cg and your matrix free linear operator to solve the aforementioned least squares problem. Note the difference in runtime between the matrix approach and the matrix free approach. [4]
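To see what cg does with a matrix-free operator, here is a tiny hand-rolled conjugate gradient that takes only a function v ↦ 𝑨v. It is an illustrative stand-in for sanity checks on small problems; the coursework itself asks for LinearOperators.jl combined with IterativeSolvers.cg:

```julia
using LinearAlgebra

# Solve A x = b for symmetric positive definite A, given only v -> A*v.
function cg_matfree(Amul, b; tol = 1e-10, maxit = 1000)
    x = zero(b)
    r = b - Amul(x)
    p = copy(r)
    rs = dot(r, r)
    for _ in 1:maxit
        Ap = Amul(p)
        α = rs / dot(p, Ap)
        x .+= α .* p
        r .-= α .* Ap
        rs_new = dot(r, r)
        sqrt(rs_new) < tol && break
        p .= r .+ (rs_new / rs) .* p
        rs = rs_new
    end
    return x
end

# sanity check on a small SPD system
M = [4.0 1.0; 1.0 3.0]
b = [1.0, 2.0]
x = cg_matfree(v -> M * v, b)
```

For the least squares problem in standard form, the minimiser satisfies 𝑨𝒙 = −𝒃, so the right-hand side passed to the solver is −𝒃.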
5.4. LS Recovery for MRI

Considering the dimension of the MRI image data, we use the matrix free approach for least squares recovery of undersampled MRI data. The least squares problem under consideration is given by

𝑿̂ = arg min𝑿 ½∥𝒚 − PΩ ◦ S ◦ F2D(𝑿)∥₂² + (𝛼/2)∥𝑿∥𝐹².

With vectorization of 𝑿, the above formulation can be written in the standard form of least squares:

𝒙̂ = arg min𝒙 ½𝒙T𝑨𝒙 + 𝒃T𝒙 + 𝑐.
1. Use the data given in the data file to find the observation vector 𝒚. [2]
2. Write a Julia function that uses FFTW.fft and FFTW.fftshift to compute 𝒖T𝑨𝒗 and 𝒃T𝒛 for arbitrary 𝒖, 𝒗, and 𝒛 of appropriate dimensions. Note the size of 𝑿. Your implementation should use the matrix free approach. [6]
3. Use the Julia library LinearOperators.jl so as to use matrix-free functions to support matrix operations. Use IterativeSolvers.cg and your matrix free linear operator to solve the aforementioned least squares problem. [4]

Note the difference between the groundtruth 𝑿 and your least squares estimate 𝑿̂ due to the subsampling and the regularization term (𝛼/2)∥𝑿∥𝐹². In the next coursework, sparse recovery techniques will be applied to improve the recovery quality.

Coursework 2

6. Proximal Operator
1. Use the data in the data file to compute the solutions of the following optimization problems.

   a) min𝒙 ½𝒙T𝑨𝒙 + 𝒃T𝒙 + (1/(2𝛾))∥𝒙 − 𝒛∥₂².
   b) min𝒙 𝛿(𝒂 ≤ 𝒙 ≤ 𝒃) + (1/(2𝛾))∥𝒙 − 𝒛∥₂², where 𝒂 ≤ 𝒙 ≤ 𝒃 denotes element-wise inequalities.
   c) min𝒙 𝛿(∥𝒙∥0 ≤ 𝑘) + (1/(2𝛾))∥𝒙 − 𝒛∥₂².
   d) min𝒙 ∥𝒙∥0 + (1/(2𝛾))∥𝒙 − 𝒛∥₂².
   e) min𝑿 𝛿(rank(𝑿) ≤ 𝑟) + (1/(2𝛾))∥𝑿 − 𝒁∥𝐹².
   f) min𝑿 rank(𝑿) + (1/(2𝛾))∥𝑿 − 𝒁∥𝐹².
   g) min𝑿 𝛿(𝑿 ⪰ 0) + (1/(2𝛾))∥𝑿 − 𝒁∥𝐹², where 𝑿 ⪰ 0 denotes that 𝑿 is a positive semi-definite matrix, i.e., all the eigenvalues of the symmetric matrix 𝑿 are non-negative.
   h) min𝒙 𝛿(∥𝒙∥2 = 1) + (1/(2𝛾))∥𝒙 − 𝒛∥₂².

2. (Proximal operators related to the ℓ1-norm) Use the data in the data file to compute the solutions of the following optimization problems.

   a) min𝒙 𝜆∥𝒙∥1 + (1/(2𝛾))∥𝒙 − 𝒛∥₂².
   b) min𝒙 𝜆∥𝒙∥1 + (1/(2𝛾))∥diag(𝒂)𝒙 − 𝒛∥₂², where diag(𝒂) turns the vector 𝒂 into a diagonal matrix with the diagonal vector 𝒂.
   c) min𝒙 𝜆∥𝒙∥1 + (1/(2𝛾))∥𝑼𝒙 − 𝒛∥₂², where the matrix 𝑼 is a unitary matrix, i.e., 𝑼𝑼T = 𝑼T𝑼 = 𝑰.
   d) min𝒙 𝜆∥𝑼𝒙∥1 + (1/(2𝛾))∥𝒙 − 𝒛∥₂², where the matrix 𝑼 is a unitary matrix, i.e., 𝑼𝑼T = 𝑼T𝑼 = 𝑰.

7. Sparse Inverse Problems

7.1. Greedy Algorithms

1. Implement the OMP algorithm. Test its performance using the data given in the data file. [4]
2. Implement the SP algorithm. Test its performance using the data given in the data file. [4]
3. Implement the IHT algorithm. Test its performance using the data given in the data file. [4]
7.2. Proximal Gradient Method

Alice and Bob are trying to solve the following optimization problem:

min𝒙 𝜆∥𝒙∥1 + (1/(2𝛾))∥𝑨𝒙 − 𝒛∥₂²,

where the matrix 𝑨 is invertible and its inverse is denoted by 𝑨−1.
1. Bob claims that the optimal solution admits the following closed form:

   𝑥̂𝑖 = sign((𝑨−1𝒛)𝑖) · max(|(𝑨−1𝒛)𝑖| − 𝜆𝛾, 0).

   a) Write a function to implement Bob’s closed form solution 𝒙̂ using the data given in the data file. [2]
   b) Compute the subgradient at 𝒙̂ and check whether 0 ∈ 𝜕𝑓(𝒙̂). [2]
2. Alice decides to solve this problem differently. She applies the proximal gradient method and uses Bob’s solution as the initial point of her algorithm.
   a) Based on the data given in the data file, find the upper bound of the step size in the proximal gradient method. [2]
   b) Set the step size of Alice’s proximal gradient method to half of its upper bound. Implement it. Run it for 5 iterations. Record the value of the objective function at the end of each iteration. [3]
7.3. MRI CS Recovery: Wavelet

Let 𝑿 be an unknown MRI image. We consider the following MRI compressed sensing recovery formulation:

𝑿̂ = arg min𝑿 𝑓(𝑿)
   := arg min𝑿 𝜆∥W(𝑿)∥1 + ½∥𝒚 − PΩ ◦ S ◦ F2D(𝑿)∥₂²
   = arg min𝑿 𝜆∥W(𝑿)∥1 + ½∥𝒚 − A(𝑿)∥₂²,

where ∥·∥1 is the sum of the absolute values of the entries, W denotes the 2D discrete Daubechies wavelet transform (which can be represented by an orthonormal matrix; see Wavelets.jl for the Julia functions dwt and idwt), the operators PΩ, S, and F2D have been defined in coursework 1, and for simplicity we have used the notation

A = PΩ ◦ S ◦ F2D.

We use the proximal gradient method to solve the above optimization problem.

1. Use the 𝑿0 given in the data file to compute the gradient ∇𝑿 ½∥𝒚 − A(𝑿0)∥₂².
2. Let 𝑿 ∈ R𝑛×𝑛. Set the step size 𝜏 = 1/(2𝑛). Find the 𝑿1 after the first iteration of the proximal gradient method. [2]
3. Implement the proximal gradient method. Run it until it converges. Draw your recovery. [3]
7.4. MRI CS Recovery: DCT

Let 𝑿 be an unknown MRI image. We consider the following MRI compressed sensing recovery formulation:

𝑿̂ = arg min𝑿 𝜆∥D(𝑿)∥1 + ½∥𝒚 − A(𝑿)∥₂²,

where D denotes the 2D discrete cosine transform (which can be represented by an orthonormal matrix; see FFTW.jl for the Julia functions dct and idct), and A is defined as in the previous question.

We use the proximal gradient method to solve the above optimization problem.

1. Use the 𝑿0 given in the data file to compute the gradient ∇𝑿 ½∥𝒚 − A(𝑿0)∥₂². [3]
2. Let 𝑿 ∈ R𝑛×𝑛. Set the step size 𝜏 = 1/(2𝑛). Find the 𝑿1 after the first iteration of the proximal gradient method. [3]
3. Implement the proximal gradient method. Run it until it converges. Draw your recovery. [4]

7.5. MRI CS Recovery: Wavelet and DCT

Let 𝑿 be an unknown MRI image. We consider the following MRI compressed sensing recovery formulation:

𝑿̂ = arg min𝑿,𝒁1,𝒁2 𝜆∥𝒁1∥1 + 𝜆∥𝒁2∥1 + (𝛼/2)∥𝒁1 − W(𝑿)∥₂² + (𝛼/2)∥𝒁2 − D(𝑿)∥₂² + ½∥𝒚 − A(𝑿)∥₂².

We use the alternating minimization method to solve the above optimization problem.

1. Use the data in the data file, including 𝒁1⁰ and 𝒁2⁰, to find 𝑿1. [3]
2. Use the obtained 𝑿1 to find 𝒁1¹ and 𝒁2¹. [3]
3. Implement the alternating minimization method. Run it until it converges. Draw your recovery. [4]

Coursework 3

To be announced

Coursework 4

8. Learn and Tell Coursework Guideline

The purpose of this coursework is to encourage students to
▶ Explore the literature, and find interesting or useful material to study and present;
▶ Learn from each other and broaden horizons;
▶ Practice skills of project management, decision-making, team playing, leadership, and presentation.
8.1. Coursework Format
Students will work in groups and present a technical topic that they have learned. The topic must be relevant to the module but should go beyond the taught materials. It can be on applications and/or theory. Coursework will be marked by GTAs, the instructor, and peer students.
The milestones for this coursework are as follows.
1. We will provide detailed information on coursework 4 presentations according to Table 1.1.
2. Students need to form their groups, decide on a topic, and choose a time slot for their presentation. The relevant information should be entered in the relevant Excel file by the due date shown in Table 1.1.
3. Groups submit their slides to the Teams folder by the due time (Table 1.1). Make sure that the key references are listed in your slides. After the due time, the Teams folder will be read-only.
4. Presentation:
▶ Each group will have 15 minutes for presentation and 5 minutes for questions.
▶ The group is required to attend all the presentations in the session where they present.
▶ All the groups in a session are required to arrive 10 minutes before the session starts for the setup.
8.1.1. Presentation Sessions
The presentation sessions will be finalised later. The following list is tentative.
▶ 4 December 2023, Room 611
  • 9:30-11:10
  • 11:30-13:10
▶ 5 December 2023, Room 909B
  • 14:00-15:40
  • 16:00-17:40
▶ 6 December 2023, Room 611
  • 9:30-11:10
  • 11:30-13:10
  • 14:00-15:40
8.2. Marking Scheme

𝑀𝑖 = (0.4𝑀𝑔,𝑝 + 0.6𝑀𝑔,𝑎) · 0.92 + 𝑀𝑖,𝑝 · 0.08,

where
▶ 𝑀𝑔,𝑝 is the mark for the group 𝑔 from peer marking. Each student is supposed to mark a certain number of presentations given by other groups.