Sunday, November 24, 2013
1. Difficult: the hardest part was following along with the description of the electronics in the Enigma machine, though the book was only trying to give a general overview of how it worked. There is a hard-coded permutation given by the letter plugs, and rotors that change the permutation on every keystroke. I didn't quite follow at first how decryption was the same as encryption; I think the point is that the reflector makes each keystroke's permutation its own inverse, so typing the ciphertext back in at the same settings gives back the plaintext.
As for the permutations and the decomposition into cycles, this made sense to me. The way they broke the Enigma was a little more iffy.
2. Interesting: I think it's interesting that the attack outlined in the book dealt mostly not with the machine itself, but with the protocols being used. Because the message key was always encrypted first, twice in a row, the codebreakers were able to recognize this and use it to their advantage. (I tried sketching the idea in code below.)
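Here is a minimal sketch of what I mean (my own toy code, not the book's; random involutions stand in for the unknown keystroke permutations). Because the three-letter message key was sent twice, ciphertext letters 1 and 4 encrypt the same plaintext letter, so from enough traffic you can tabulate the product of the keystroke-4 and keystroke-1 permutations; if I understand right, the plugboard only conjugates everything, which doesn't change the cycle lengths, so the cycle type fingerprints the rotor settings.

```python
# A toy sketch (mine, not the book's) of the doubled-message-key attack.
# The reflector makes each keystroke's permutation an involution: a
# product of 13 disjoint transpositions on the 26 letters.
import random

def compose(p, q):
    """The permutation x -> p[q[x]]."""
    return [p[q[x]] for x in range(len(q))]

def cycle_type(p):
    """Sorted lengths of the disjoint cycles of p."""
    seen, lengths = set(), []
    for start in range(len(p)):
        if start in seen:
            continue
        n, x = 0, start
        while x not in seen:
            seen.add(x)
            x, n = p[x], n + 1
        lengths.append(n)
    return sorted(lengths)

def random_involution(n=26):
    """Random pairing of the letters (plugs + rotors + reflector)."""
    letters = list(range(n))
    random.shuffle(letters)
    p = [0] * n
    for i in range(0, n, 2):
        a, b = letters[i], letters[i + 1]
        p[a], p[b] = b, a
    return p

# P[0..5] are the unknown permutations at keystrokes 1..6.
P = [random_involution() for _ in range(6)]

# A message-key letter x is sent at keystrokes 1 and 4, so Eve sees
# c1 = P1(x) and c4 = P4(x). Since P1 is an involution, x = P1(c1),
# hence c4 = P4(P1(c1)): many messages reveal the product P4*P1.
print(cycle_type(compose(P[3], P[0])))
```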
Tuesday, November 19, 2013
Sections 19.1-2, due Nov 20
1. Difficult - this is a pretty easy introduction to quantum computing/physics, so the math hasn't gotten very complicated yet. I guess the trickiest part was keeping track of the probabilities when discussing Eve eavesdropping on the quantum bits being distributed (I worked them out in a little simulation below).
2. Interesting - the whole section is interesting! I want to try the Polaroid experiment; it seems mind-boggling (the numbers are below too). As for the vectors in the 2-dimensional complex vector space, why do they have complex numbers for the coefficients? This is not obvious yet. It really seems they have only used the magnitudes of those complex numbers, so real vectors would have worked equally well so far. Ah well, whatever.
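For the eavesdropping probabilities, here's a quick simulation I put together (my own code, not the book's, with made-up basis labels): Eve intercepts each photon, measures in a random basis, and resends what she saw.

```python
# Intercept-and-resend eavesdropping, simulated. On the sifted bits
# (Alice's and Bob's bases agree), Eve picks the wrong basis half the
# time, and that randomizes Bob's bit half of *that* time, so she
# should cause errors about 25% of the time.
import random

N = 100_000
errors = kept = 0
for _ in range(N):
    bit = random.randint(0, 1)
    a_basis = random.choice("+x")          # Alice's encoding basis
    e_basis = random.choice("+x")          # Eve measures, then resends
    e_bit = bit if e_basis == a_basis else random.randint(0, 1)
    b_basis = random.choice("+x")          # Bob's measurement basis
    if b_basis != a_basis:
        continue                           # discarded during sifting
    b_bit = e_bit if b_basis == e_basis else random.randint(0, 1)
    kept += 1
    errors += b_bit != bit
print(f"error rate on sifted bits: {errors / kept:.3f} (expect 0.25)")
```

And the Polaroid experiment by the numbers, using the rule that a photon polarized at angle $a$ passes a filter at angle $b$ with probability $\cos^2(b-a)$ - the squared magnitude of the amplitude, which is also why only the magnitudes of the coefficients have mattered so far:

```python
# An unpolarized photon passes the first filter 1/2 the time; after
# that, each filter applies cos^2 of the angle difference.
from math import cos, radians

def pass_prob(angles):
    p = 0.5                                # first filter: probability 1/2
    for prev, nxt in zip(angles, angles[1:]):
        p *= cos(radians(nxt - prev)) ** 2
    return p

print(pass_prob([0, 90]))      # ~0: crossed filters block everything
print(pass_prob([0, 45, 90]))  # 0.125: a 45-degree filter in between lets light through!
```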
Sunday, November 17, 2013
Sections 14.1-2, due Nov 18
This section was on zero-knowledge techniques, wherein Victor tests Peggy's knowledge of a secret without her revealing any information that Eve could use to find the secret.
1. Difficult - the first section went pretty smoothly, transitioning from the analogy back into the mathematics almost seamlessly. The difficulty came more with the more complex method, in which there are many secret square roots and it becomes increasingly difficult for somebody to masquerade as Peggy. I'm still wrapping my head around it.
2. Interesting. I understand now the reason for Peggy choosing $r$ and sending $x=r^2$ to Victor: it changes up the answers to the questions (which are of the form $(b_1, \ldots, b_i)$), so that if she gets the same question in two sessions, she gives different answers. This way, Eve cannot replay a recorded session to impersonate Peggy. (A toy round of the basic version is sketched below.)
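Here's one round of the basic (single square root) version, as I understand it, with toy numbers of my own; a real modulus $n$ would be RSA-sized, so that only someone who can factor it could extract square roots:

```python
# One round of the basic zero-knowledge protocol (toy parameters mine).
import random

n = 61 * 53                 # public modulus (toy)
s = 1234                    # Peggy's secret, with s^2 = v mod n
v = pow(s, 2, n)            # public

def one_round():
    r = random.randrange(1, n)    # fresh r every round, so a repeated
    x = pow(r, 2, n)              # challenge still gets a new transcript
    b = random.randint(0, 1)      # Victor's challenge bit
    y = r * pow(s, b, n) % n      # Peggy's answer: r * s^b mod n
    return pow(y, 2, n) == x * pow(v, b, n) % n   # check y^2 = x * v^b

# A cheater can prepare a fake x that answers b=0 OR b=1, but not both,
# so each round exposes them with probability 1/2; t rounds leave 2^-t.
print(all(one_round() for _ in range(20)))
```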
Tuesday, November 12, 2013
Sections 12.1-2, Due Nov 13
1. Difficult: following along with the Lagrange interpolation polynomial was the hardest part. I've been exposed to Vandermonde determinants before, and even setting that experience aside, that method of solving for the coefficients makes a lot of sense (i.e. you have a system of $t$ linear equations in $t$ unknowns; solve for them). However, the Lagrange interpolation seems easier to compute.
2. Interesting: I really thought the Shamir threshold scheme was neat. With only $t-1$ shares, the holders combined have exactly no knowledge of what the secret is, but with $t$ or more they can solve for it. The math is not that in-depth, and it makes sense to me. (I tried it end to end in the sketch below.)
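Here is the whole scheme end to end with toy numbers I made up: share a secret with a $(3, 5)$ threshold over the integers mod a prime $p$, then recover it from any 3 shares by Lagrange interpolation at $x = 0$.

```python
# Shamir (t, w) threshold scheme, toy parameters mine. The secret is
# the constant term of a random degree t-1 polynomial mod p.
import random

p = 2_147_483_647          # prime modulus (toy choice)
t, w = 3, 5                # any 3 of 5 shares recover the secret
secret = 190503180520 % p

coeffs = [secret] + [random.randrange(p) for _ in range(t - 1)]

def f(x):
    """Evaluate the sharing polynomial at x, mod p."""
    return sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p

shares = [(x, f(x)) for x in range(1, w + 1)]

def recover(pts):
    """Lagrange interpolation at x = 0, mod p."""
    total = 0
    for i, (xi, yi) in enumerate(pts):
        num = den = 1
        for j, (xj, _) in enumerate(pts):
            if i != j:
                num = num * (-xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, -1, p)) % p
    return total

print(recover(shares[:t]) == secret)    # the first t shares -> True
print(recover(shares[2:]) == secret)    # a different t of them -> True
```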
Monday, November 11, 2013
Sections 9.1-9.4, due Nov 11
1. Difficult. The ElGamal system was the most confusing, just because it's new to me right now. After reading the section a couple of times it is beginning to make sense. The security comes from Eve not being able to find $a$ from $\beta \equiv \alpha^a \pmod{p}$ for given $(\alpha, \beta, p)$. The security hole when the same value of $k$ is used twice makes sense (I worked a toy example of it below).
2. Interesting: I liked section 9.4 on birthday attacks on signatures: that it's possible to find matching hashes by making many insignificant changes in two versions of a document. I also liked the way to foil this plan, by changing the document slightly before signing it. (There's a miniature version of the attack below too.)
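For the repeated-$k$ hole, here is a toy worked example (numbers mine): reusing $k$ gives Eve two equations $s_i \equiv k^{-1}(m_i - ar) \pmod{p-1}$ with the same $k$ and $r$, so she can solve for $k$ and then for the secret $a$ (assuming the needed inverses mod $p-1$ exist, which they do here).

```python
# Recovering the key from a reused k (toy numbers mine). Signatures are
# (r, s) with r = alpha^k mod p and s = k^{-1}(m - a*r) mod (p-1).
p, alpha = 467, 2          # small prime and primitive root (toy)
a = 127                    # Alice's secret
beta = pow(alpha, a, p)    # Alice's public key

k = 213                    # the SAME k used twice -- the mistake
r = pow(alpha, k, p)

def sign(m):
    return (m - a * r) * pow(k, -1, p - 1) % (p - 1)

m1, m2 = 100, 101
s1, s2 = sign(m1), sign(m2)

# Eve: s1 - s2 = k^{-1}(m1 - m2) mod (p-1), so she solves for k...
k_found = (m1 - m2) * pow(s1 - s2, -1, p - 1) % (p - 1)
# ...and then s1 = k^{-1}(m1 - a*r) gives up the secret a:
a_found = (m1 - k_found * s1) * pow(r, -1, p - 1) % (p - 1)
print(k_found == k, a_found == a)   # True True
```

And a miniature of the 9.4 birthday attack on signatures (my own toy setup: a deliberately tiny 20-bit hash so collisions actually show up, and documents varied by invisible trailing spaces):

```python
# Make ~2^12 innocuous variants each of a good and a fraudulent
# document; with a 20-bit hash, a cross-collision is almost certain
# (expected matches ~ 2^12 * 2^12 / 2^20 = 16).
import hashlib
from itertools import product

def variants(text, n_bits=12):
    """2^n_bits versions of text differing only in trailing spaces."""
    lines = text.split("\n")
    for bits in product([0, 1], repeat=n_bits):
        yield "\n".join(line + " " * b for line, b in zip(lines, bits))

def tiny_hash(s):
    """Truncated 20-bit hash -- small on purpose."""
    return int.from_bytes(hashlib.sha256(s.encode()).digest()[:3], "big") >> 4

good = "\n".join(f"I owe Bob {i} dollars." for i in range(12))
bad  = "\n".join(f"I owe Bob {i}000 dollars." for i in range(12))

table = {tiny_hash(g): g for g in variants(good)}
match = next((b for b in variants(bad) if tiny_hash(b) in table), None)
print("collision found:", match is not None)
```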
Friday, November 8, 2013
Sections 8.4-5,7, due Nov 8
1. Difficult - These sections do make a lot of sense. For 8.4 and 8.5 (on birthday attacks), there are many more opportunities for matches among $n$ people (proportional to $n^2$) than chances of hitting one particular value (proportional to $n$). The book gave an approximate formula for the probability of a match, $1 - e^{-r^2/(2N)}$, where $r$ is the number of people and $N$ is the number of possible birthdays, and cited an exercise for the proof.
Working through the exercise, we approximate $\ln(1-x)$ by its Taylor polynomial to one or two terms. This gives upper and lower bounds on the sum of the $\ln(1 - k/N)$, so exponentiating gives bounds on the desired product of the $(1 - k/N)$: it is squeezed between $c_1 e^{-\lambda}$ and $c_2 e^{-\lambda}$, where $\lambda = r^2/(2N)$. As $N$ grows, $c_1$ and $c_2$ tend toward 1; hence the product is approximately $e^{-\lambda}$, giving the formula mentioned above. (A numerical check is below.)
2. Interesting - I liked how the analog of OFB came back in 8.7, this time using hash functions. This connects well back to the modes of operation from earlier in the semester. The book mentioned that this gives a stream cipher (my attempt at it is below). That's all I have to say about that, though.
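To convince myself of the approximation, a quick numerical check (mine) of the exact match probability $1 - \prod_{k=1}^{r-1}(1 - k/N)$ against $1 - e^{-r^2/(2N)}$:

```python
# Exact birthday-match probability vs. the exercise's approximation.
from math import exp

N = 365
for r in (10, 23, 40, 60):
    exact = 1.0
    for k in range(1, r):
        exact *= 1 - k / N          # probability of NO match so far
    exact = 1 - exact
    approx = 1 - exp(-r * r / (2 * N))
    print(f"r={r:2d}  exact={exact:.4f}  approx={approx:.4f}")
```

And my attempt at the hash-based OFB analog; the exact feedback details here are from my reading of the section, so treat this as a sketch rather than the book's precise construction:

```python
# OFB-style stream cipher from a hash: keep hashing the key with the
# previous output block to get a keystream, then XOR with the message.
# Decryption is the same operation, just like block-cipher OFB mode.
import hashlib

def keystream_xor(key: bytes, iv: bytes, data: bytes) -> bytes:
    out, block = bytearray(), iv
    while len(out) < len(data):
        block = hashlib.sha256(key + block).digest()   # feedback step
        out.extend(block)
    return bytes(d ^ k for d, k in zip(data, out))

key, iv = b"secret key", b"initial value"
ct = keystream_xor(key, iv, b"attack at dawn")
print(keystream_xor(key, iv, ct))   # b'attack at dawn' -- same op decrypts
```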
Tuesday, November 5, 2013
Sections 8.1-2, due Nov 6
1. Difficult: the example in 8.1 (the discrete log hash) was a bit hard to follow. I sort of get the idea: finding a hash collision would let you solve the discrete log problem, so it follows that the hash is likely to be secure. (I wrote out a miniature version below.)
2. Interesting: I've always been curious about how hashes are designed in the first place. I've used them for some time to confirm that file transfers between computers came through accurately, but never really thought much about how they work. Looking ahead to 8.3, it seems a little intimidating...
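Here is the 8.1 example in miniature, with toy parameters of my own choosing ($p = 2q+1$ with $p, q$ prime and $\alpha$ a primitive root mod $p$). A collision $h(m) = h(m')$ gives $\alpha^{x_0 - y_0} \equiv \beta^{y_1 - x_1} \pmod{p}$, which can be solved for $\log_\alpha \beta$, so collisions should be as hard to find as discrete logs:

```python
# The discrete log hash from 8.1 in miniature (toy parameters mine).
q, p = 131, 263            # p = 2q + 1, both prime
alpha, beta = 5, 42        # alpha is a primitive root mod p

def h(m):
    """Hash a message m in [0, q^2) by writing m = x0 + x1*q."""
    x0, x1 = m % q, m // q
    return pow(alpha, x0, p) * pow(beta, x1, p) % p

print(h(12345), h(6789))   # compresses ~2 log p bits down to ~log p bits
```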