I have found the links below to be interesting, either as
background or as further information on topics covered in the course.
I recommend them all as reading; they give better, or at least alternative, presentations of difficult topics in the book.
Matt Zucker presents (at the end of his article) a way of making noise periodic, so that it repeats seamlessly over a given distance. That particular section should be disregarded: unlike the first part of his article, it is bad, even wrong. His method is prohibitively expensive, and it also gives fairly evident visual artefacts in the resulting image. The fact that the value and derivatives of a periodic function match up at the boundaries does not imply that the pattern looks the same over the entire interval. (Try it and you will see what I mean. At the edges you have one single noise() instance, while towards the middle you take a weighted average of two noise() instances. Statistically and visually, these are two different things. The proposed method works for white noise with a normal distribution, but Perlin noise is very far from that.)
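To make the objection concrete, here is a minimal sketch of the kind of crossfade being criticized, assuming only a generic 1-D noise() callable. The function name and the exact blend weights are mine for illustration, not Zucker's code:

```python
def fade_blend_periodic(noise, x, period):
    """Crossfade two shifted copies of noise() so the result repeats
    exactly with the given period (the approach criticized above).
    Near the period boundary one copy dominates; near the middle you
    average two independent copies, which changes the statistics."""
    t = (x % period) / period  # position within one period, in [0, 1)
    return (1.0 - t) * noise(x % period) + t * noise(x % period - period)
```

The result is indeed periodic and continuous, which is exactly why the flaw is easy to miss: the boundary regions are pure noise, but the middle is an average of two noise signals, with visibly lower contrast.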
The correct way of doing periodic noise is this: instead of treating
noise() as a fixed function, look at its implementation.
In the standard implementation, the gradient directions are arbitrarily
chosen to repeat after
256 integer steps. You can easily change this to any integer period,
although powers of 2 are the most efficient to implement.
So there is no need at all to interpolate as Matt suggests; just change the
way the gradients are picked for the lattice points! (Non-integer
periods are tricky to implement and should be avoided.)
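A minimal 1-D sketch of this, assuming the usual shuffled permutation table and fade curve from the standard reference implementation. The only change from ordinary gradient noise is taking the lattice indices modulo the desired period before hashing:

```python
import math
import random

# Shuffled permutation table, doubled to avoid wrap-around indexing.
random.seed(0)
perm = list(range(256))
random.shuffle(perm)
perm = perm * 2

def fade(t):
    # Perlin's quintic fade: 6t^5 - 15t^4 + 10t^3
    return t * t * t * (t * (t * 6 - 15) + 10)

def grad(h, x):
    # 1-D "gradient": slope +1 or -1 chosen by one hash bit.
    return x if h & 1 else -x

def periodic_noise(x, period=8):
    i0 = math.floor(x)
    f = x - i0
    # Wrap the lattice indices modulo the period -- this is the whole
    # trick.  For a power-of-2 period, `i0 & (period - 1)` is cheaper.
    a = perm[i0 % period]
    b = perm[(i0 + 1) % period]
    t = fade(f)
    return (1 - t) * grad(a, f) + t * grad(b, f - 1)
```

The gradients at lattice points 0, period, 2*period, ... are now identical, so the pattern repeats exactly, with no interpolation between noise instances and no change in the statistics anywhere in the interval.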
Hugo Elias presents interesting notes on value noise, which as you know is
very different from gradient noise. Perlin noise is a gradient noise,
not a value noise. Apart from that mislabeling, his article is fine and clearly written, so just replace every
occurrence of the term "Perlin noise" with the correct term "low-pass filtered
or interpolated value noise", and you will be OK.