CS 124 Homework 6: Spring 2023
Your name: Collaborators:
No. of late days used on previous psets:
No. of late days used after including this pset:
Homework is due Wednesday April 12 at 11:59pm ET. You are allowed up to twelve (college)/forty (extension school) late days in total, but the number of late days you take on each assignment must be a nonnegative integer at most two (college)/four (extension school).
Try to make your answers as clear and concise as possible; style may count in your grades. Assignments must be submitted in pdf format on Gradescope. If you do assignments by hand, you will need to scan your papers to turn them in.
You may not write anything that you will submit within one hour of collaborating with other students, language models, the internet, etc., or using notes from such sources. That is, whatever you submit must first have been in your head alone, or notes produced by you alone, for an hour. Subject to that, you can collaborate with other students, e.g. in brainstorming and thinking through approaches to problem-solving.
For all homework problems where you are asked to give an algorithm, you must prove the correctness of your algorithm and establish the best upper bound that you can give for the running time. Generally better running times will get better credit; generally exponential-time algorithms (unless specifically asked for) will receive no or little credit. You should always write a clear informal description of your algorithm in English. You may also write pseudocode if you feel your informal explanation requires more precision and detail, but keep in mind pseudocode does NOT substitute for an explanation. Answers that consist solely of pseudocode will receive little or no credit. Again, try to make your answers clear and concise.
Primality testing, cryptography; 2SAT, 3SAT algorithms
(a) (10 points) Prove that 12403180369 is composite by finding a witness in the form of a positive integer a < 12403180369 such that a^(12403180369−1) ≢ 1 (mod 12403180369). Give any information necessary to show that your witness in fact witnesses. (Note: do not use a factor as a witness! Sure, 12403180369 is small enough that you can exhaustively find a factor, but this question is about your knowledge of the primality-testing algorithm.)
(b) (10 points) The number 63973 is a Carmichael number. Prove that it is composite by finding a witness in the form of a nontrivial square root of 1.
(You can find the answers by whatever method you like, but we recommend writing code. You don’t need to submit your code.)
(a) One witness is a = 2. Let n = 12403180369; we can calculate that a^(n−1) ≡ 1464795051 (mod n) using the repeated-squaring approach. We iteratively calculate 2^(2^0), 2^(2^1), 2^(2^2), ..., 2^(2^33) (mod n) by repeated squaring. Since n − 1 in binary is 1011100011010010011000001101010000, we can multiply together the squared values 2^(2^k) (mod n) for exactly those bit positions k where n − 1 has a 1 (k = 4, 6, 8, 9, ..., 33) to get our final answer. Since a^(n−1) ≢ 1 (mod n), a = 2 is a Fermat witness and n is composite.
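As a sanity check, the computation above can be reproduced in a few lines of Python; the loop below is the repeated-squaring approach, checked against the built-in three-argument pow:

```python
n = 12403180369
a = 2

# Repeated squaring: accumulate a^(2^k) mod n for each set bit k of n - 1.
result, square, e = 1, a, n - 1
while e:
    if e & 1:
        result = (result * square) % n
    square = (square * square) % n
    e >>= 1

assert result == pow(a, n - 1, n)  # agrees with Python's modular exponentiation
assert result != 1                 # so a = 2 witnesses that n is composite
print(result)
```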
(b) Some possible numbers not in {1, 63972} whose square is 1 mod 63973 are: 6252, 7734, 10620, 19683. We can loop through numbers starting from 2 and check each one.
The Miller-Rabin algorithm would suggest picking a random number a in {2, . . . , 63971} and then using the fact that 63972 = 4 · 15993 to compute a^15993 mod 63973, a^31986 mod 63973 and a^63972 mod 63973. If the final number is not 1, we can output that as a witness that 63973 is not prime (but it will be 1 if 63973 is a Carmichael number). Otherwise we look for the last number in the sequence that is not 1 and output it if it is also not equal to 63972. If it is 63972, then we start again with a fresh random a.
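For (b), a brute-force search in Python (looping up from 2, as described) recovers these witnesses quickly; as a bonus, a nontrivial square root of 1 also yields a nontrivial factor via a gcd:

```python
from math import gcd

n = 63973

# Find nontrivial square roots of 1 mod n: x with x^2 ≡ 1 (mod n), x ∉ {1, n-1}.
roots = [x for x in range(2, n - 1) if (x * x) % n == 1]
print(roots)  # includes 6252, 7734, 10620, 19683

# Any such x satisfies (x-1)(x+1) ≡ 0 (mod n), so gcd(x-1, n) is a proper factor.
print(gcd(6252 - 1, n))
```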
2. (a) (15 points) Adam has generated an RSA private key (p, q) and published n = pq. Adam wanted to be sure that his primes were both big enough, so he decided on the largest positive integer X his system could handle, picked some small positive integer δ, and generated both primes in the interval [X − δ, X]. Give an O(δ log^124 n) algorithm to find p and q given n. (Your algorithm isn't given δ or X: it should work for all n, but the runtime may depend on what δ and X were.)
Algorithm:
Let p = ⌊√n⌋. While n mod p ≠ 0, decrement p by 1. Once the while loop terminates, return p and q = n/p.
Correctness: WLOG choose p ≤ q. It is clear that p ≤ ⌊√n⌋, because if p > √n and q > √n, then n = pq > n, which is not possible. Then for any value of p that we test, if p is the true value we seek, then p is a factor of n, so n mod p = 0. Then since p is a factor of n, and n = pq, it follows that q = n/p is the other factor of n. Since we are guaranteed that p ≤ ⌊√n⌋, and we continually decrement the value of p, we know that we will find the true value of p eventually and the algorithm will terminate.
Runtime: First, we can find ⌊√n⌋ in O(log^3 n) time by binary searching for the correct value: we need to make log n comparisons for binary search, and each comparison of an integer a with √n can be done in O(log^2 n) time by using grade-school multiplication to compare a^2 with n. We can also compute n mod p using grade-school division in time O(log^2 n), so the only remaining task is to bound the number of iterations. For this part we claim that the algorithm terminates within δ iterations, because p and q are within δ of each other (both lie in [X − δ, X]). If we had to decrement p more than δ times, then p < ⌊√n⌋ − δ, but q ≥ ⌊√n⌋, so q − p > δ, which would be a contradiction. Thus we get a total run time of O(δ log^2 n + log^3 n) = O(δ log^3 n) = O(δ log^124 n).
(b) (5 points) The public n that Adam generated as above was 15375998174720047661999. Factor it. (You can find the answer by whatever method you like. This is certainly solvable without code, but you may want to write some code for practice. You don’t need to submit code.)
p = 123999990089, q = 123999995191
You can do this by coding the algorithm above, or by plugging it into some online resource like Wolfram Alpha.
Here’s some Python code that would work:

from math import isqrt

n = 15375998174720047661999
# math.isqrt gives the exact integer square root (float sqrt can lose
# precision on integers this large).
for i in range(isqrt(n), n):
    if n % i == 0:
        print(i)       # the larger factor q
        print(n // i)  # the smaller factor p
        break
3. (15 points) We have considered, in the context of a randomized algorithm for 2SAT, a random walk with a reflecting boundary at 0—that is, at every position i > 0 the walk moves, at the next turn, to position i + 1 with probability 1/2 and position i − 1 with probability 1/2, and when i = 0 it moves to position 1 with probability 1. Consider now a lazy random walk with an additional parameter p ∈ [0, 1] which at every position i > 0 moves, at the next turn, to position i + 1 with probability p/2 and position i − 1 with probability p/2 and stays at position i with probability 1−p. When i = 0 the walk moves to position 1 with probability p/2 and stays at 0 with probability 1 − p/2.
Find the expected number of moves to reach n starting from position i on a lazy random walk with parameter p. (You may assume there is a unique solution to this problem, as a function of p, n and i; you should prove that your solution satisfies the appropriate recurrence.)
Let E_i be the expected number of steps to reach n starting from i. We can write the recursive relationship as

E_i = (p/2) E_{i−1} + (p/2) E_{i+1} + (1 − p) E_i + 1,
since p/2 of the time we go down, p/2 of the time we go up, and the remaining 1 − p of the time we stay still. Regardless of what happens, we have taken one step of time, so we add 1.
We can solve for E_{i+1} to get

E_{i+1} = 2 E_i − E_{i−1} − 2/p.
For the base case i = 0, we only have two options: go up with probability p/2 or stay with probability 1 − p/2, so we have

E_0 = (p/2) E_1 + (1 − p/2) E_0 + 1  ⟹  E_1 = E_0 − 2/p.

Finally, we also have E_n = 0, since if i = n the walk needs 0 steps to get to n.
We solve for the E_i by writing E_i in terms of E_0 for every i, noticing that the pattern looks like E_i = E_0 − (1/p)(i^2 + i), and then proving this by strong induction as done below.
Base case: The case i = 0 is trivial and we get E_0 = E_0. For i = 1, we have E_1 = E_0 − 2/p, which is correct by the base-case recursion above.
Inductive hypothesis: Suppose E_i = E_0 − (1/p)(i^2 + i) holds for all i ≤ k, for some 1 ≤ k < n.
Inductive step: Using the recursive formula that we derived above as well as our inductive hypothesis, we get

E_{k+1} = 2 E_k − E_{k−1} − 2/p
        = 2 (E_0 − (1/p)(k^2 + k)) − (E_0 − (1/p)((k−1)^2 + (k−1))) − 2/p
        = E_0 − (1/p)(2k^2 + 2k − (k−1)^2 − (k−1) + 2)
        = E_0 − (1/p)(k^2 + 3k + 2)
        = E_0 − (1/p)((k+1)^2 + (k+1)).

This proves that our solution is of the form E_i = E_0 − (1/p)(i^2 + i).
Applying this expression at i = n, where E_n = 0, we get 0 = E_n = E_0 − (1/p)(n^2 + n), and so E_0 = (1/p)(n^2 + n). We conclude that E_i = E_0 − (1/p)(i^2 + i) = (1/p)((n^2 + n) − (i^2 + i)).
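As a quick numerical check (the parameter values n = 10, p = 0.6 are arbitrary choices of ours), the closed form E_i = (1/p)((n^2 + n) − (i^2 + i)) satisfies both the recurrence and the boundary conditions:

```python
def expected_moves(i, n, p):
    # Closed form derived above: E_i = ((n^2 + n) - (i^2 + i)) / p
    return ((n * n + n) - (i * i + i)) / p

n, p = 10, 0.6
E = [expected_moves(i, n, p) for i in range(n + 1)]

assert E[n] == 0                                            # boundary E_n = 0
assert abs(E[0] - ((p/2)*E[1] + (1 - p/2)*E[0] + 1)) < 1e-8 # boundary at 0
for i in range(1, n):                                       # interior recurrence
    assert abs(E[i] - ((p/2)*E[i-1] + (p/2)*E[i+1] + (1-p)*E[i] + 1)) < 1e-8
print("closed form satisfies the recurrence")
```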
4. Suppose we have a random walk with boundaries 0 and n, starting at position i. As mentioned in class, this can model a gambling game, where we start with i dollars and quit when we lose it all or reach n dollars. Let W_t be our winnings after t games, where W_t is defined only until we hit a boundary (at which point we stop). Note that W_t is negative if we are at a position j with j < i; that is, we have lost money. If the probability of winning and the probability of losing 1 dollar each game are 1/2, then with probability 1/2, W_{t+1} = W_t + 1 and with probability 1/2, W_{t+1} = W_t − 1. Hence
E[W_{t+1}] = (1/2) E[W_t + 1] + (1/2) E[W_t − 1] = E[W_t],
where we have used the linearity of expectation at the last step. Therefore when the walk reaches a boundary, the expected winnings are E[W_0] = 0. We can use this to calculate the probability that we finish with 0 dollars. Let this probability be p_0. Then with probability p_0 we lose i dollars, and with probability 1 − p_0 we gain n − i dollars. Hence

p_0 (−i) + (1 − p_0)(n − i) = 0, from which we find p_0 = (n − i)/n.
Gossip Girl encounters this game in one of her classes at Constance Billard High School. Unfortunately, because it is high school, the game is not fair; instead, the probability of losing a dollar each game is 2/3, and the probability of winning a dollar each game is 1/3. Each student starts with i dollars and gets to leave high school if they reach n dollars before going bankrupt. We wish to extend the above argument to this case.
(a) (5 points) Show that E[2^{W_{t+1}}] = E[2^{W_t}].

By Adam's Law (the law of total expectation),

E[2^{W_{t+1}}] = E[ E[2^{W_{t+1}} | W_t] ].

Since the probability of losing is 2/3 and the probability of winning is 1/3, simplifying gives

E[2^{W_{t+1}}] = E[ (1/3) · 2^{W_t + 1} + (2/3) · 2^{W_t − 1} ]
             = E[ (2/3) · 2^{W_t} + (1/3) · 2^{W_t} ]
             = E[2^{W_t}].
(b) (5 points) Use this to determine the probability of finishing with 0 dollars and the probability of finishing with n dollars when starting at position i.
Let p_0 be the probability of finishing with 0 dollars (i.e., the probability of going bankrupt). By part (a), we have E[2^{W_{t+1}}] = E[2^{W_t}]. Therefore, when the walk reaches a boundary (say at time t*), we have E[2^{W_{t*}}] = E[2^{W_0}] = 2^0 = 1. As above, with probability p_0 we lose i dollars, and with probability 1 − p_0 we gain n − i dollars. This implies:

E[2^{W_{t*}}] = p_0 · 2^{−i} + (1 − p_0) · 2^{n−i} = 1  ⟹  p_0 = (2^{n−i} − 1) / (2^{n−i} − 2^{−i}) = (2^n − 2^i) / (2^n − 1),

implying that the probability of finishing with n dollars is 1 − p_0 = (2^i − 1) / (2^n − 1).
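The formula p_0 = (2^n − 2^i)/(2^n − 1) can be double-checked by solving the biased ruin recurrence q_i = (2/3) q_{i−1} + (1/3) q_{i+1} exactly (q_i is the probability of going bankrupt from position i; n = 8 below is an arbitrary choice of ours):

```python
from fractions import Fraction

def p0_exact(i, n):
    # q_0 = 1, q_n = 0, and rearranging the recurrence: q_{j+1} = 3 q_j - 2 q_{j-1}.
    # Write q_j = a_j + b_j * q_1 and use q_n = 0 to pin down the unknown q_1.
    a, b = [Fraction(1), Fraction(0)], [Fraction(0), Fraction(1)]
    for _ in range(2, n + 1):
        a.append(3 * a[-1] - 2 * a[-2])
        b.append(3 * b[-1] - 2 * b[-2])
    q1 = -a[n] / b[n]
    return a[i] + b[i] * q1

n = 8
for i in range(n + 1):
    assert p0_exact(i, n) == Fraction(2**n - 2**i, 2**n - 1)
print("exact solve matches p0 = (2^n - 2^i) / (2^n - 1)")
```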
(c) (10 points) Now suppose the initial number of dollars that each student has is a ran- dom number generated as follows: Every student is given n envelopes and each (inde- pendently) has 1 dollar with probability 1/2 and 0 dollars with probability 1/2. At the beginning of the game the student opens all the envelopes to get their initial i dollars. Show that the probability of finishing with n dollars in this game is Ω(cn) for some constant c > 1/2. (So while graduation is still exponentially unlikely, it is better than the probability of graduating on day 0.)
Say X is the random variable for the initial number of dollars. Notice that for any i ∈ {0, 1, ..., n}, we have Pr[X = i] = C(n, i) (1/2)^i (1/2)^{n−i} = C(n, i)/2^n, where C(n, i) is the binomial coefficient: exactly i envelopes must have 1 dollar in them, and there are C(n, i) ways of choosing those. Say p_i is the probability of finishing with n dollars given that the student starts with i dollars. By part (b), we have p_i = (2^i − 1)/(2^n − 1) ≥ 2^{i−1−n} for every i ≥ 1 (and p_0 = 0). Lastly, say that p_grad is the probability of finishing with n dollars when the initial number of dollars is determined as above. In that case:

p_grad = Σ_{i=0}^n p_i Pr[X = i]
       ≥ Σ_{i=1}^n 2^{i−1−n} · C(n, i)/2^n
       = 2^{−(2n+1)} Σ_{i=1}^n 2^i C(n, i)
       = 2^{−(2n+1)} (3^n − 1)
       ≥ (1/4)(3/4)^n,

where we used the binomial theorem (Σ_{i=0}^n C(n, i) x^i y^{n−i} = (x + y)^n with x = 2, y = 1) to evaluate the sum, and 3^n − 1 ≥ 3^n/2 for n ≥ 1 in the last step. Thus we get p_grad = Ω(c^n) for c = 3/4 > 1/2.
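A small check (the range n = 1..19 is an arbitrary choice) that the exact graduation probability is Ω((3/4)^n); the assertion uses the bound (1/4)(3/4)^n, which holds for all n ≥ 1:

```python
from fractions import Fraction
from math import comb

def p_grad(n):
    # Pr[X = i] = C(n, i) / 2^n, and p_i = (2^i - 1) / (2^n - 1) from part (b).
    return sum(Fraction(comb(n, i), 2**n) * Fraction(2**i - 1, 2**n - 1)
               for i in range(n + 1))

for n in range(1, 20):
    assert p_grad(n) >= Fraction(3**n, 4 * 4**n)  # p_grad >= (1/4)(3/4)^n
print("p_grad >= (1/4)(3/4)^n for n = 1..19")
```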
(d) (10 points) Show that the probability the process from the previous part is still running after 124n games is at most 123/124. Hint: if not, how many games happen in expectation?
What’s the expected amount of money you have?
Let T = 124n and let p be the probability that the game does not stop within T steps. Let 0 ≤ t ≤ T, and let Y_t be the amount the student has after the process has run for t steps. We prove below that E[Y_t] ≤ n − pt/3. Since this expectation cannot be negative, it follows that n − pT/3 ≥ 0, and so p ≤ 3n/T = 3/124 ≤ 123/124.
To prove this, note that E[Y_{t+1} | Y_t] = Y_t + E[∆_{t+1}], where ∆_{t+1} is the gain or loss in step t + 1. If the process has stopped by time t, then ∆_{t+1} = 0. If not, then Y_t ∉ {0, n}, and in this case E[∆_{t+1}] = −1/3 (lose 1 with probability 2/3, gain 1 with probability 1/3). The probability that the game has not stopped by time t is at least p (since t ≤ T), and so we get E[∆_{t+1}] ≤ −p/3. We conclude that E[Y_{t+1} | Y_t] ≤ Y_t − p/3, and so E[Y_{t+1}] ≤ E[Y_t] − p/3. At time 0 we have E[Y_0] ≤ n, and so we conclude E[Y_t] ≤ n − pt/3.
(e) (5 points) Show that the probability the process is still running after k games is at most 2^{−Ω(k)}/n.
By part (d), if p_T is the probability the game is running after T steps, then p_T ≤ 3n/T. Taking T = 6n, we have p_{6n} ≤ 1/2. So the probability that the process is not over within the first 6n steps is at most 1/2. If the process has not stopped in the first 6n steps, the probability that it does not finish in the next 6n steps is also at most 1/2, by the same argument. Thus the probability that the game is not over within the first 12n steps is at most (1/2) · (1/2). Then, given any k, we can divide it into ⌊k/(6n)⌋ groups of 6n steps each, in which case we get that the probability of not having terminated by the end of k steps is p_k ≤ 2^{−⌊k/(6n)⌋} = 2^{−Ω(k/n)}.
(Note that there was an unfixed typo in the problem statement: we only get p_k ≤ 2^{−Ω(k/n)} and not p_k ≤ 2^{−Ω(k)}/n.)
(f) (10 points): For some c > 1/2, give an expected O(n^124 c^{−n}) = o(2^n)-time algorithm for solving 3SAT.
Assign each variable randomly T/F (analogous to the setup in 4(c)). Then pick an unsatisfied clause and flip one of its three literals, chosen at random. Continue for k steps for some k = ω(n^2 log(1/c)), where c is the constant from part (c). If no satisfying assignment is reached by the end of k steps, start from scratch (with a new random assignment), giving k steps per try. Repeat this for O(c^{−n}) tries.
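A minimal Python sketch of one restart of this walk (the clause representation and helper name are our own, not prescribed by the problem):

```python
import random

def walk_3sat(clauses, n_vars, max_steps):
    """One restart: start from a uniformly random assignment; while some
    clause is unsatisfied, pick an unsatisfied clause and flip a random
    one of its three literals. Literal v > 0 means variable v is true,
    v < 0 means it is false (variables are 1-indexed)."""
    assign = [random.random() < 0.5 for _ in range(n_vars)]
    for _ in range(max_steps):
        unsat = [c for c in clauses
                 if not any((v > 0) == assign[abs(v) - 1] for v in c)]
        if not unsat:
            return assign  # satisfying assignment found
        v = random.choice(random.choice(unsat))
        assign[abs(v) - 1] = not assign[abs(v) - 1]
    return None  # give up; the caller restarts with a fresh assignment

# Tiny example: (x1 ∨ x2 ∨ x3) ∧ (¬x1 ∨ x2 ∨ ¬x3)
clauses = [(1, 2, 3), (-1, 2, -3)]
print(walk_3sat(clauses, 3, max_steps=1000))
```

A full solver would wrap this in O(c^{−n}) restarts, as described above.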
Given that a satisfying assignment exists, in the worst case this is a random walk (tracking the number of variables on which our assignment agrees with some fixed satisfying assignment) with probability 2/3 of decreasing the number of agreeing variables and probability 1/3 of increasing it, the same setup as 4(c)–(e). The main difference is that we have a reflecting boundary at 0 rather than a termination, which only increases our odds.
Runtime analysis: Clearly the run time is bounded by O(k c^{−n}) = O(n^124 c^{−n}) (each of the O(c^{−n}) tries takes k steps, and the polynomial cost of each step is absorbed by the generous n^124). What we argue here is that the algorithm returns a satisfying assignment with high probability (in this amount of time). To do so we claim that a single run of the random walk (using k steps) outputs a satisfying assignment with probability at least Ω(c^n). Thus O(c^{−n}) iterations return a satisfying assignment with probability at least .99.
To see the claim, consider running the walk forever until the process finishes. Let S be the event that the walk is successful and ends with agreement n. Let F be the event that the process runs for more than k steps. Then note that our algorithm (with k steps) is successful if the event S \ F occurs (since in this case the walk ended at n and took at most k steps). Using a variant of the union bound we have Pr[S \ F] ≥ Pr[S] − Pr[F]. By part (c) we have Pr[S] ≥ Ω(c^n). Also, Pr[F] is the probability the random walk is still running after ω(n^2 log(1/c)) steps, so by part (e) we have Pr[F] ≤ 2^{−ω(n^2 log(1/c))/n} = c^{ω(n)}. Thus the probability the random walk finishes within k steps at location n is at least Ω(c^n) − c^{ω(n)}, which is still Ω(c^n). This concludes the proof of the claim.
Returning to our algorithm, we note that our algorithm is at least as likely to return a satisfying assignment as the random walk is to stop within k steps at location n. (Various reasons why our algorithm might do better: (1) The random walk fixes one satisfying assignment and bounds the probability of finding that assignment; our algorithm might find a different satisfying assignment and stop early. (2) At each step the random walk assumes a 2/3 probability of decreasing the score by one and a 1/3 probability of increasing it, corresponding to having exactly one true literal in the selected clause, whereas the satisfying assignment may have more than one true literal in the selected clause. (3) Our analysis stops the walk when it reaches 0 and concedes failure, while our algorithm does not know this and continues to search for an assignment and might actually find one.) All of the above only increases the probability of our algorithm returning a satisfying assignment in one trial, and so in O(c^{−n}) trials, the probability that it returns a satisfying assignment is at least .99.