Foray into Fourier

https://roughly-understood.com/posts/foray-into-fourier/

Buckle in everyone, today we are exploring some maths. Now before you immediately exit this blog, I want you to hang around, because today's topic is possibly one of the most important mathematical concepts to be thought up in the last 200 years. Today we are going to try and gain an intuitive understanding of what the Fourier Transform is and how it works.

Fourier Transform?

Have you ever watched a music video where some element of the screen moves in reaction to the beats or pitches of a song? Something along the lines of this really cool music visualizer by BS Media on YouTube.

Well it turns out this is an application of the Fourier Transform. The discoverer (inventor?) of the Fourier Transform, Joseph Fourier, was around long before any music visualizers and instead discovered it through pure maths. That's right, maths! Maths is the reason we have a lot of nice things in this world, and the music visualizer is just one of them.

The Fourier Transform is also incredibly useful in lots of other fields beyond music. In fact, it is used extensively in electronic circuits, earthquake monitoring, wireless communications, noise cancellation and much more.

Getting Frequie

So I have justified the uses of the Fourier Transform but we are yet to see what it actually does.

It turns out that every signal, whether it be music, wireless data, light or anything else, can be thought of as a number of frequencies added together. Joseph Fourier was the first person to mathematically prove that not just specific signals, but all signals, are composed of a bunch of frequencies added together. For an overview of what I mean by frequency, feel free to check out my previous blog on the Doppler Effect where I give a brief explanation.

To see an example of what I mean by a signal being made up of a number of different frequencies, have a go at building the orange signal below by toggling specific sine waves on or off. All of the frequencies that are turned on are added together to produce the blue wave. Your goal is to find the frequencies that make the blue wave perfectly overlap the orange one. These frequencies can be thought of as the frequencies that make up the original orange signal.

Now switching different frequencies on or off works, but it is time consuming. In fact, it gets worse, because it turns out that each frequency a signal is made up of has its own independent amplitude and phase. The amplitude is how big/strong the wave is, and the phase shifts the wave left or right. In the case of a sine wave the formula is:

$$y = A\sin(2\pi f t + \phi)$$

Where $f$ is the frequency, $A$ is the amplitude and $\phi$ is the phase shift. You can have a go at playing around with these parameters below.
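
If you prefer code to sliders, here is a minimal Python sketch of the same idea (the particular amplitudes, frequencies and phases are made up for illustration):

```python
import numpy as np

# One second of time, sampled at 1 kHz.
t = np.linspace(0.0, 1.0, 1000, endpoint=False)

def sine_wave(t, A, f, phi):
    """A sine wave with amplitude A, frequency f (Hz) and phase phi (radians)."""
    return A * np.sin(2 * np.pi * f * t + phi)

# Each component has its own amplitude, frequency and phase
# (these particular values are made up for illustration)...
signal = (
    sine_wave(t, A=1.0, f=2.0, phi=0.0)
    + sine_wave(t, A=0.5, f=5.0, phi=np.pi / 4)
    + sine_wave(t, A=0.25, f=9.0, phi=np.pi / 2)
)
# ...and adding them together gives a composite signal like the ones above.
```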

As you can probably see, manually deciding what frequencies exist in a signal, along with their respective amplitudes and phases, is far too tedious. However, we still want to know this information and use it. So how do we calculate it? Welcome the Fourier Transform:

$$\mathcal{F}(f) = \int_{-\infty}^{\infty} f(t)\, e^{-j 2\pi f t}\, dt$$

This formula allows us to take any signal $f(t)$ and find the frequency components hidden within it. Beyond that, we even get the phase and amplitude of each frequency.

I know this looks scary but we are going to break down every piece of this formula for the rest of this article.

How Does It Work?

To discover why the Fourier Transform works, I think it’s best we take it in two steps. Firstly we are going to discuss just the form of the equation. Why is the Fourier Transform a formula in the shape of:

$$\mathcal{F}(f) = \int_{-\infty}^{\infty} f(t)\, g(t)\, dt$$

Where $f(t)$ and $g(t)$ are some functions of time (like our sine waves from earlier, for example).

Secondly we are going to discuss why choosing the function $g(t)$ to be $e^{-j2\pi f t}$ is a good and useful idea.

My Day Dots

One Dimensional Similarity

Before we can talk about that huge scary equation, I think it's best we think about the idea of similarity in maths. For example, let's just imagine that we have two numbers, $a$ and $b$. How would we measure how similar $a$ is to $b$? What does it even mean to be similar?

I guess one way we could think about similarity is size. If two numbers are of a similar size then we could say they are similar. For example, 2.9 is very similar to 3.0.

Another way we could think about similarity is the sign of the number. If $a$ is negative and $b$ is positive, then I think it is fair to say that these two numbers are not very similar at all. Conversely, if $a$ and $b$ are both negative then the two numbers are similar.

These are just general observations though. What if we wanted a way to score how similar two numbers are? Maybe with a high similarity having a positive score and low similarity having a negative score.

One way to achieve this goal would be to simply multiply the two numbers together. When the numbers have the same sign (both $+$ or both $-$) we will end up with a positive score (remembering that a minus times a minus gives a plus). Conversely, if they don't have the same sign then the score will be negative.

Below I have made a little animation to demonstrate this idea of similarity. You can see that when the numbers are both positive they are similar, and when both numbers are negative they are similar, but if one is positive and the other is negative then we end up with a poor/low similarity score.

Two Dimensional Similarity

Okay, so we now have a measure of similarity for two numbers, but how can we extend this? For example, if we had two coordinates $a = (x_1, y_1)$ and $b = (x_2, y_2)$, how could we possibly tell how similar these coordinates are to one another?

You see if we think about those coordinates as vectors (just a fancy word for numbers in a row or column) and rewrite them like this:

$$a = \begin{bmatrix} x_1 \\ y_1 \end{bmatrix} \qquad b = \begin{bmatrix} x_2 \\ y_2 \end{bmatrix}$$

We could view this as finding the similarity between single numbers like we did previously, just done twice. For example, we could multiply the $x$ coordinates to find out how similar the $x$'s are, and multiply the $y$ coordinates to find out how similar the $y$'s are. Something like this:

$$\text{similarity of }(a, b) = \begin{bmatrix} x_1 \times x_2 \\ y_1 \times y_2 \end{bmatrix} = \begin{bmatrix} \text{similarity of }x\text{'s} \\ \text{similarity of }y\text{'s} \end{bmatrix}$$

So this is looking more promising; however, we now have to juggle two similarity scores, which on its own doesn't gain us anything. What we want instead is a single score that measures how similar the two coordinates/vectors are.

One way we could achieve this is by adding the similarity of the $x$'s and the similarity of the $y$'s together. That way, if the $x$ coordinates are very similar but the $y$ coordinates are facing in opposite directions, the scores of the $x$ and $y$ will cancel each other out. Writing this out in maths looks like this:

$$\text{similarity of }(a,b) = a \cdot b = \begin{bmatrix}x_1\\y_1\end{bmatrix}\cdot\begin{bmatrix}x_2\\y_2\end{bmatrix} = (x_1\times x_2) + (y_1\times y_2)$$

Some of you may have guessed this already, but what we are building above is actually known as a dot product (that's why the formula has the little $\cdot$ in it). The dot product is used widely in physics and maths. As we have discovered through playing around with the numbers, the dot product gives us a measure of how similar two vectors are. In English, you can think of it like this: if we drew an arrow from $(0,0)$ to point $a$ and another arrow from $(0,0)$ to point $b$, the dot product tells us how much of $a$ is pointing in the same direction as $b$.
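
To make this concrete, here is a tiny Python sketch of the similarity score we just built (the sample points are arbitrary):

```python
def similarity_2d(a, b):
    """Dot product of two 2D points: (x1 * x2) + (y1 * y2)."""
    return a[0] * b[0] + a[1] * b[1]

print(similarity_2d((1, 2), (2, 3)))    #  8 -> pointing the same way: similar
print(similarity_2d((1, 2), (-2, -3)))  # -8 -> pointing opposite ways: dissimilar
print(similarity_2d((1, 0), (0, 1)))    #  0 -> perpendicular: no similarity either way
```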

Maths and words can be kind of hard to get right, so I have made a little animation below where you can move the point $a$ around and see how similar it is to $b$, calculated with the same formula we came up with above.

N Dimensional Similarity

Okay so we can find the similarity for 1D vectors (a single number) and now for 2D vectors but what about 3D? 4D? 5D? Even N Dimensions?

It turns out this idea generalizes really nicely. Regardless of how many dimensions (how big the vectors are) $a$ and $b$ have, we can follow the same procedure. We multiply the corresponding dimensions (i.e. the $x$'s multiply together and the $y$'s multiply together) and then add up the results.

For example:

$$a \cdot b = \begin{bmatrix}x_1\\y_1\\z_1 \\ \vdots\\ N_1 \end{bmatrix} \cdot \begin{bmatrix}x_2\\y_2\\z_2 \\ \vdots\\ N_2 \end{bmatrix}$$

Which is the same as:

$$(x_1 \times x_2) + (y_1 \times y_2) + (z_1 \times z_2) + \dots + (N_1 \times N_2)$$

Pretty cool huh? So, just by playing around with some numbers, we have discovered a way to tell how similar two vectors of potentially infinite length are.
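
The same recipe works in code for vectors of any length; a quick sketch (the values are chosen arbitrarily):

```python
import numpy as np

a = np.array([1.0, 2.0, -1.0, 0.5, 3.0])
b = np.array([0.5, 1.0, -2.0, 0.0, 1.0])

# Multiply corresponding entries, then add up the results.
print(np.sum(a * b))  # 7.5
print(np.dot(a, b))   # 7.5 -- numpy's built-in dot product agrees
```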

Functions as Vectors

A function can be thought of as a mapping from one number, usually an $x$ value or a time $t$ value, to another value. This was shown earlier with the sine waves. For example, the formula:

$$f(t) = \sin(t)$$

says that for a given value of $t$ (which could be any number you like), the output of the function is the sine of that number. Here are a few examples:

$$f(0.1) = \sin(0.1),\quad f(-0.1) = \sin(-0.1),\quad f(10) = \sin(10),\quad f(\infty) = \sin(\infty)$$

If this notation is new to you, you can think of $f(t)$ as the $y$ value of the graph. So it's like $y = mx + b$ except we are computing the sine curve and not a straight line.

If we take another look at vectors, they are kind of doing the same thing. They are mapping a dimension/row to a value. Now this is a fancy sounding sentence, but really what I mean is the following.

Given the vector:

$$a = \begin{bmatrix}x\\y\\z\\ \vdots\\N\end{bmatrix}$$

If you “ask” the $a$ vector what its first row is, it will give you back $x$. Similarly, if you “ask” what its second row is, it will give you back $y$. This continues all the way to the $N$'th row, where it will give you back $N$.

A function does the same thing, except instead of having first, second, ..., $N$'th rows, it has an infinite number of rows: there could be a row for 0.1, another row for 0.001, another for 0.0000001 and so on.

So we could actually write out our function like this:

$$f(t) = \begin{bmatrix}f(-\infty) \\ \vdots \\ f(-0.01) \\ \vdots \\ f(0) \\ \vdots \\ f(0.01) \\ \vdots \\ f(\infty) \end{bmatrix} = \begin{bmatrix}\sin(-\infty) \\ \vdots \\ \sin(-0.01) \\ \vdots \\ \sin(0) \\ \vdots \\ \sin(0.01) \\ \vdots \\ \sin(\infty) \end{bmatrix}$$
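
In code this idea is quite literal: sampling a function on a grid of $t$ values turns it into a vector with one row per sample. A small sketch, with a finite grid standing in for the infinite range:

```python
import numpy as np

# One "row" per time value; a finite grid stands in for -inf..inf.
t = np.arange(-2.0, 2.0, 0.01)
f_as_vector = np.sin(t)      # row i holds f(t_i) = sin(t_i)

print(f_as_vector.shape)     # (400,) -- our function-vector has 400 rows
print(f_as_vector[0])        # the row for t = -2.0, i.e. sin(-2.0)
```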

Why Are You Telling Us This?

This brings us to a crucial element of the Fourier Transform: we can compute the similarity of two functions in the same way as we do for vectors. That is, we can compute the dot product between two functions.

If we think of two functions $f(t)$ and $g(t)$, we can determine how similar they are by taking the dot product, just like we do for vectors. Written out in maths, it looks something like the following.

With the functions written as vectors just like before:

$$f(t) = \begin{bmatrix}f(-\infty) \\ \vdots \\ f(-0.01) \\ \vdots \\ f(0) \\ \vdots \\ f(0.01) \\ \vdots \\ f(\infty) \end{bmatrix} \qquad g(t) = \begin{bmatrix}g(-\infty) \\ \vdots \\ g(-0.01) \\ \vdots \\ g(0) \\ \vdots \\ g(0.01) \\ \vdots \\ g(\infty) \end{bmatrix}$$

Then the dot product of the two functions is:

$$\begin{split} f(t) \cdot g(t) = \; & f(-\infty)\times g(-\infty) \\ & + \dots \\ & + f(-0.01)\times g(-0.01) \\ & + \dots \\ & + f(0)\times g(0) \\ & + \dots \\ & + f(0.01)\times g(0.01) \\ & + \dots \\ & + f(\infty)\times g(\infty) \end{split}$$

Now unfortunately the long sequence of adding is a little cumbersome. Luckily maths has some handy notation for adding up an infinite number of values. This notation is known as an integral and looks like this:

$$\int_{-\infty}^{\infty} f(t)\, dt$$

This is in effect saying: for every possible value of $t$ (from $-\infty$ to $\infty$), compute $f(t)$ and add all of the results together.

So this means our dot product or similarity of two functions can be written as:

$$f(t)\cdot g(t) = \int_{-\infty}^{\infty} f(t)\, g(t)\, dt$$

Which, in English again, just says that for every single $t$ value, multiply the result of $f(t)$ by the result of $g(t)$ and add up all of the results.
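
As a sanity check, here is that function dot product computed numerically (a finite time range stands in for the infinite one, and the $dt$ factor turns the sum into an approximation of the integral):

```python
import numpy as np

dt = 0.001
t = np.arange(-10.0, 10.0, dt)   # finite stand-in for -inf..inf

f = np.sin(2 * np.pi * 1.0 * t)  # a 1 Hz sine
g = np.sin(2 * np.pi * 1.0 * t)  # the same 1 Hz sine
h = np.sin(2 * np.pi * 2.0 * t)  # a 2 Hz sine

# "For every t, multiply f(t) by g(t) and add up the results."
print(np.sum(f * g) * dt)  # ~10 -> identical signals: high similarity
print(np.sum(f * h) * dt)  # ~0  -> different frequencies: no similarity
```

Notice that two sine waves of different frequencies score roughly zero. This is exactly the property the Fourier Transform exploits: probing a signal with one frequency ignores all the others.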

For those interested, the reason this works is that the dot product is a specific example of something more general known as an inner product. I don't know much beyond that in terms of pure maths definitions, but let Google guide you, my friend.

So bringing all of this back to the Fourier Transform.

$$\mathcal{F}(f) = \int_{-\infty}^{\infty} f(t)\, e^{-j 2\pi f t}\, dt$$

You can see from the formula that we are taking a dot product of two functions: $f(t)$ and the function $e^{-j2\pi f t}$. So the core idea is that we are discovering the frequency, phase and amplitude information of our signal $f(t)$ by finding out how similar $f(t)$ is to $e^{-j2\pi f t}$.

e To The Power of What?!

This one I am going to gloss over a little bit just because this blog is getting long enough as it is.

I am mainly going to go over the key elements that we need to know to understand why we need $e^{-j2\pi f t}$ in the Fourier Transform equation. Perhaps I can write a future article covering it more deeply.

The key thing to understand is that this $e$ term comes from Euler's formula:

$$e^{jt} = \cos(t) + j\sin(t)$$

Where $j$ is the imaginary unit (sometimes written as $i$, but my electrical engineer brain wins this battle). So if we expand out the Fourier Transform's exponential a bit:

$$e^{-j2\pi f t} = \cos(2\pi f t) - j\sin(2\pi f t)$$

Where $f$ is the frequency of the waves, $t$ is time, and the $2\pi$ term converts the frequency in Hertz into an angular frequency in radians. (Note the minus sign in front of the sine: because the exponent is negative, the sine term flips sign.)
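
You can verify this expansion numerically in a couple of lines of Python (the frequency and time values here are arbitrary):

```python
import numpy as np

f, t = 3.0, 0.42  # arbitrary frequency and time
lhs = np.exp(-1j * 2 * np.pi * f * t)
rhs = np.cos(2 * np.pi * f * t) - 1j * np.sin(2 * np.pi * f * t)

print(np.isclose(lhs, rhs))  # True -- Euler's formula holds
```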

Fun fact, this is exactly where the famous equation $e^{i\pi} = -1$ comes from.

So in effect, the $e$ term of the Fourier Transform is encoding a frequency. The fun thing though is that using Euler's formula like this actually gives us the ability to compute phase and amplitude components as well.

This is because the magic of Euler's formula allows us to visualize $e^{-j2\pi f t}$ as a rotating vector on the complex plane. This representation is known as a phasor. An example of this can be seen in the animation below:

You can see that the rotating line moves at a speed that matches our frequency $f$. Notice that the x coordinate (the real part) corresponds to the cosine wave and the y coordinate (the imaginary part) corresponds to the sine wave.

Additionally, the magnitude information can be gathered from the radius of the circle. The phase is simply where on the circle the rotation started from. That is, instead of starting on the x-axis, the rotation could start at an angle to the x-axis. This angle is our phase.

Apologies again for the hand-waviness of this explanation, but these are the key elements needed for the Fourier Transform.

Summary:

  • The term $e^{-j2\pi f t}$ is a function that represents a spinning vector/line, known as a phasor, with a given frequency, amplitude and phase.
  • These components can be extracted using complex numbers and Euler's formula.

Tying It All Together

So, to finish off: how does the Fourier Transform work? Well, we know that $e^{-j2\pi f t}$ can be thought of as encoding the phase and amplitude for the given frequency $f$ in a rotating vector. We also know that the Fourier Transform is the dot product of our signal with this rotating vector.

So to sum it all up, the Fourier Transform computes how similar our signal $f(t)$ is to a given frequency $f$, and that similarity score carries the amplitude and phase of that frequency within the signal. So bringing it back to the equation:

$$\mathcal{F}(f) = \int_{-\infty}^{\infty} f(t)\, e^{-j 2\pi f t}\, dt$$

The Fourier Transform tells us how similar our signal $f(t)$ is to the given frequency $f$. That means that to find out the magnitude and phase information for, say, 1 Hz, we can simply substitute $f = 1$ into the equation.

This gives us

$$\mathcal{F}(1) = \int_{-\infty}^{\infty} f(t)\, e^{-j 2\pi (1) t}\, dt$$

which, once the summing up over time is complete, gives us a complex number that perfectly embodies how much of the signal is similar to the frequency 1 Hz, complete with phase and amplitude information. If the frequency 1 Hz does not exist in our signal then this will simply produce zero. If it does exist, then we get all the information we need.

Now summing up over infinite time seems complicated (in maths this can be done using calculus), however on computers we don't have an infinite amount of a signal; we may only have a 2 second voice recording, for example. So summing up over all of time consists of just going through each of the voice samples one by one, multiplying each by $e^{-j2\pi f t}$ and adding up the results as we go.
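
Here is that exact procedure as a short Python sketch: a 2 second "recording" of a 1 Hz sine wave with a known (arbitrarily chosen) amplitude and phase, probed with $f = 1$:

```python
import numpy as np

dt = 0.001
t = np.arange(0.0, 2.0, dt)          # a 2 second "recording"

A, phi = 1.5, np.pi / 3              # amplitude and phase we want to recover
signal = A * np.sin(2 * np.pi * 1.0 * t + phi)

# Go through the samples one by one, multiply by e^{-j 2 pi f t}
# with f = 1, and add up the results (the dt approximates the integral).
F_1 = np.sum(signal * np.exp(-1j * 2 * np.pi * 1.0 * t)) * dt

print(np.abs(F_1))    # ~1.5 -- proportional to the amplitude A
print(np.angle(F_1))  # ~-0.524 = phi - pi/2 -- encodes the phase
```

The magnitude comes out proportional to the amplitude (the exact scale factor depends on the length of the recording), and the angle encodes the phase, offset by $\pi/2$ because we probed a sine wave with a complex exponential.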

The only issue with this is that if we want a plot of all the frequencies in a signal, then for each frequency we are checking we need to iterate through every sample of the signal. This makes our algorithm quite slow. You can imagine that if there are one thousand samples and we want to check one thousand frequencies, then we would need to do one million ($1000 \times 1000$) calculations. This is known as an $O(n^2)$ algorithm. There is a refinement of the DFT (Discrete Fourier Transform, which is the version of the Fourier Transform computers can do) known as the Fast Fourier Transform (FFT) which speeds this up tremendously. This is a very famous algorithm that really made the Fourier Transform feasible to perform in real time on devices.
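
For the curious, here is a sketch of the naive $O(n^2)$ DFT next to numpy's FFT, which computes the same result far faster:

```python
import numpy as np

def naive_dft(x):
    """Direct O(n^2) DFT: dot the signal with a phasor for every frequency k."""
    n = len(x)
    i = np.arange(n)
    return np.array([np.sum(x * np.exp(-2j * np.pi * k * i / n)) for k in range(n)])

x = np.random.rand(1000)     # one thousand samples...
slow = naive_dft(x)          # ...times one thousand frequencies: ~a million multiplies
fast = np.fft.fft(x)         # the FFT gets the same answer in O(n log n)

print(np.allclose(slow, fast))  # True
```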

For more information on the FFT you can watch this video by Reducible for a great explanation.

For different ways to think about the Fourier Transform I can recommend checking out this video by 3blue1brown and this video by Veritasium.

Why Should We Care?

Apart from being a particularly beautiful piece of maths that elegantly ties together elements of complex numbers, Euler's formula and dot products, the Fourier Transform has become one of the key equations in our history, allowing for incredible feats of engineering and science across numerous disciplines.

Henry Animation

The idea of visualizing each frequency of a signal as a rotating vector allows for some very interesting displays of the Fourier Transform. For example, inspired by this 3Blue1Brown video, if we take a single-line image of Henry my dog and take the Fourier Transform of it, we can compute all of the frequencies in Henry (the image... the real dog is far more complicated). If we draw a rotating vector for each of these frequencies, with each circle centered on the tip of the previous rotating frequency vector, we get the following:

This animation has quite a few things you can play with. At first the drawing of Henry is going to look inaccurate. If you increase the number of phasors drawn with the slider, you will see that the drawing of Henry becomes more and more accurate. Additionally, you can turn on manual drawing mode using the button at the top left of the animation. This will allow you to draw your own pictures and see the phasors that make them up.



That’s it! I hope that you have enjoyed reading far too much (or not enough) maths and that you now feel like you have a rough understanding of the Fourier Transform. Thanks so much for your time and if you enjoyed please share with your friends. Any feedback is also much appreciated.

As always the code for all of the animations in this article can be found on my GitHub here.

Sorry for the delay on this one, I got side-tracked creating a little animation library built on top of the incredible Rust library Macroquad. To see this library you can visit my GitHub here. That's where all of the pretty plots for this article came from.
