Hello there, poor soul! How come you found your way to this post? Are you perhaps unhappy with the way programming is done nowadays? Did you perhaps hear someone say that functional programming has benefits like:

  • Simplifying your projects
  • Removing a class of errors and bugs
  • Bringing back the joy of programming from the early days

Well, whether those claims are true or not, you will have to see them for yourself. One interesting thing happening is:

  1. people who are not into FP think those who are into FP are pretentious pricks who want to feel smarter than everyone else, while
  2. the people who are into FP don’t understand all the resistance when there’s such a wealth of good things about FP and it is actually not as difficult as many people claim.

Well, it seems kinda accurate to me.

There was a period when I was “aggressively” trying to get people to learn functional programming. Because of me, some people will probably never cross over to our side.

Oopsie-daisy.

But in my defense, some of those people did see the benefits of the ideas; they were just at a stage of their lives when they couldn’t afford to learn functional programming.

That is a valid gripe that those on the other side have with functional programming: there’s just too much to learn. Of course, you can use map, filter, fold, recursion, immutability, etc. to spice up your code with some functional flavor. But the problem the more advanced of us see with this is that you won’t understand what you’re doing, or whether the solution you’re deploying is the proper one. Essentially, you’re not getting what you were promised: understanding. But it is too difficult to get all of it right away (thanks, OOP). Blindly deploying every tool you’ve learned in the hope that it will work is not a good approach, because chances are high it won’t, and somehow it becomes the fault of the FP folk who proposed the idea, not of the person who couldn’t figure out how to use it properly. I’m speaking from the experience of a friend who tried to make his Dart code robust by employing an Either type, and who later regretted it, since Dart wasn’t designed with FP ideas in mind.

But the problem still persists: functional programming is difficult to grasp. In truth, functional programming ideas are simple, possibly much simpler to understand than OOP ideas (I mean understand, not “ahh, I see” and forget 15 minutes later). But some people’s brains are adamant about not comprehending those simple ideas. And I don’t know why. Might make a good PhD thesis in neuroscience, who knows.

So I’ll try my best to explain functional programming principles… in Python. Nowadays everyone and their grandmother seems to know Python, so it should be no problem.

What is the lesson?

The most important takeaway from this post is the perspective. The correct one, hopefully.

The problem with learning functional programming is that its ideas span logic, math, computer science and engineering, all at once. Its ideas are deep, essentially like calculus, and they get re-discovered again and again, by many different people, independently. The book Clean Code is essentially a pseudo-introduction to functional programming (not that reading it helped me figure out how to go about my code as a kid, but I really liked it nonetheless).

So yeah, there are many ways to look at the concepts through many different lenses, and tugging on one concept drags other concepts and ideas along with it. That’s why, when you’re learning functional programming, it feels like you have to learn a lot simultaneously. Because you kinda do. Additionally, functional programming tends to attract people who like to explore ideas, which is nice, but such people make bad teachers. That’s why we need better teaching materials for functional programming.

At the end of this first leg of your journey into functional programming, if you make it, you get a sense of understanding. The world will never be the same. You’ll start to see things that weren’t there before. Like a sixth sense. And you’ll realize that the ideas were in fact quite simple (although not easy to grasp).

You won’t get that understanding by just reading one blog post. Or two. Or ten. Or even one or two books. At least not right away. People who get into functional programming claim that Elm is a great gateway drug into functional programming: it is easy and friendly. The problem is that it is quite niche (it targets client-side frontend and compiles to JavaScript). Unless that is exactly what you do, you’re stuck. So one of my aspirations is to create a general-purpose programming language that will be a bad functional programming language, but which will make a good entry point into FP, just like Elm does. Unfortunately there isn’t one (yet), so we’ll be using Python.

Remember, the most important takeaway of this blog post is the perspective. If you get the perspective, a lot of unconnected things will fall into place.

So, let’s start.

Functions, Purity and Side-effects

Just as in OOP the most basic building block is an object (or a class, I don’t know), the most basic building block in functional programming is a… function.

Don’t look at me like that: there are surprisingly many things you can build with just functions. In fact, Alonzo Church, Alan Turing’s academic supervisor, discovered lambda calculus and showed that it is equivalent to the Turing machine. Essentially, whatever your fancy C++ and other languages can do, functional programming can do too.
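
To give you a taste of “just functions”, here is a sketch of Church encodings in Python. All the names here (TRUE, ZERO, SUCC, to_int) are mine, purely for illustration:

```python
# Church booleans: a boolean is a function that picks one of two options.
TRUE  = lambda t: lambda f: t
FALSE = lambda t: lambda f: f

# Church numerals: the number n is "apply a function n times".
ZERO = lambda f: lambda x: x
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))

def to_int(n):
    # Count how many times the numeral applies its function.
    return n(lambda k: k + 1)(0)

TWO = SUCC(SUCC(ZERO))
```

Numbers, booleans, pairs, even lists can be built this way; nothing but functions all the way down.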

The first important point:

Important point no. 1: Function is all you need

Whenever you feel the itch to overcomplicate your solution, try writing a simple function first.

But not just any function. What we like in functional programming is a pure function. Now that we’re getting into jargon territory, let’s quickly back it up with examples.

Let’s imagine we are trying to write a function that takes the user’s name and prints the message of the day:

def motd(name):
  print("Hello, {}! Have a nice day!".format(name))

def main():
  motd("KtlTheBest")

Now, while this example is purely imaginary, the code is emblematic of what people usually write. It is fine as it is, but the question is: how do you test this?

“What do you mean by test?” I hear you asking. What I’m asking is why are you confident that this will work?

“Well, I’ll run the code and see the result printed in the terminal…” Yeah, but when I was initially writing this code, I made a mistake. I would have had to run the code to see the bug.

But what if the code were more involved? Say, it had many functions (or rather, procedures) doing many things, tightly coupled, and the only way to see the output is by pinging the server or calling some other guy sitting at another laptop checking the server response.

Doesn’t sound so nice now, does it?

“Well, that’s how things are.” But we as functional programmers would disagree. We would say that the reason you are having problems is that most of your functions are impure. For example:

  • print() is an impure function.
  • randint() is an impure function.
  • time.now() is an impure function.
  • input() is an impure function.
  • A function that fails because it is Tuesday is an impure function.
  • A function that reads from a global variable is an impure function.
  • A function that changes the values of input arguments passed by reference is an impure function.

So even in a small function of one line:

def motd(name):
  print("Hello, {}! Have a nice day!".format(name))

… you’re already experiencing problems.

So, how do we make this testable?

def motd_pure(name):
  return "Hello {}! Have a nice day!".format(name)

Instead of printing, we are returning the string to print. Printing is not our responsibility anymore.

Now, to verify the code, we can write something like this:

def main():
  message = motd_pure("KtlTheBest")
  assert message == "Hello, KtlTheBest! Have a nice day!"
  print(message)

What we did is essentially turn an impure function into a pure one. And as a consequence, the function became testable.

We also wrote an assert statement, so the message will be printed only if motd_pure() is correct (by some arbitrary definition).

And if you have very keen eyes, you’ll notice that there’s a bug in motd_pure: I forgot the comma after "Hello".

But instead of blindly relying on my (or someone else’s) eyes, I can ask the computer to verify the function, and be 100% sure the program proceeds only when the function is correct.
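
For instance, a handful of asserts lets the computer do the checking. Here is a minimal sketch, with the comma restored:

```python
def motd_pure(name):
    # The fixed, pure version: no printing, just a value.
    return "Hello, {}! Have a nice day!".format(name)

# The computer verifies these; no eyeballing required.
assert motd_pure("KtlTheBest") == "Hello, KtlTheBest! Have a nice day!"
assert motd_pure("Alice") == "Hello, Alice! Have a nice day!"
```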

So with that, let’s get into some definitions.

What is purity?

When talking about purity, functional programmers refer to pure functions. By pure functions we mean functions in the mathematical sense:

f(x) = x * x

We can also say that a pure function:

  1. produces output that depends solely on its input arguments, and
  2. has no observable side-effects.

Let’s look at the function again. If we try to substitute numbers, we get different values:

f(1)  = 1  * 1  = 1
f(2)  = 2  * 2  = 4
f(10) = 10 * 10 = 100
f(n)  = n  * n  = n * n

Here, the result of f(x) depends solely on x. Another interesting observation is that we can evaluate f(x) infinitely many times, and the result would always be the same: x * x.

This is a nice property to have.
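
In Python, the same function looks like this, and you can check the property directly:

```python
def f(x):
    # Pure: the result depends only on x, there is no hidden state.
    return x * x

assert f(10) == 100
# Evaluating any number of times gives the same answer.
assert [f(2) for _ in range(5)] == [4, 4, 4, 4, 4]
```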

“Are there functions that don’t behave like that?” you may ask. To which I’ll show you this code:

DEBUG = False

def foo(x):
  if DEBUG == False:
    return x * 2
  else:
    return -1 * x

Now, if we evaluate this function as foo(3), we may get 6, but if DEBUG gets flipped to True and we run it again, foo(3) = -3. Now we have two different values for foo(3). This is not a pure function. This function has implicit state. Implicit, because it is not observable from the function signature (input and output arguments), but if you run it, you’ll feel the effects (for example, different values for the same input arguments).
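
One way to restore purity, sketched below: pass the flag as an explicit argument, so the output depends only on the inputs (foo_pure is my name for it):

```python
def foo_pure(x, debug):
    # The flag is part of the input now, not hidden global state.
    if not debug:
        return x * 2
    else:
        return -1 * x

assert foo_pure(3, False) == 6
assert foo_pure(3, True) == -3
```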

The second point says that the function must not have observable side-effects. Let me show you:

def sum_left_to_right(l):
  sum = 0
  for i in range(len(l)):
    sum += l[i]
  return sum

def sum_right_to_left(l):
  sum = 0
  for i in range(len(l) - 1, -1, -1):
    sum += l[i]
  return sum

I’ll give you a little sneak peek: mutation of a variable (i.e. x = 1; x = 2; assert x != 1) is a side-effect and thus leads to impure functions. But while we do perform a side-effect here, it is confined to a local variable and not observable from outside, so we essentially still have pure functions.

For the same list l the results of the two functions will be the same. On a side note, the two functions traverse the list in opposite orders, but since the functions are pure and addition is commutative, i.e. x + y = y + x, we can say:

sum_left_to_right(l) == sum_right_to_left(l)

However, if we were to add print("Currently on: {}".format(i)) into the loop, the functions would become impure.

def sum_left_to_right(l):
  sum = 0
  for i in range(len(l)):
    print("Currently on: {}".format(i))
    sum += l[i]
  return sum

def sum_right_to_left(l):
  sum = 0
  for i in range(len(l) - 1, -1, -1):
    print("Currently on: {}".format(i))
    sum += l[i]
  return sum

First of all, the functions now print, which is a side-effect by definition. And even if we compared them by their side-effects, those would still differ, even though the sum is the same:

def sum_left_to_right(l):
  sum = 0
  to_print = []
  for i in range(len(l)):
    # Currently on:
    to_print.append(i)
    sum += l[i]
  return (sum, to_print)

def sum_right_to_left(l):
  sum = 0
  to_print = []
  for i in range(len(l) - 1, -1, -1):
    # Currently on:
    to_print.append(i)
    sum += l[i]
  return (sum, to_print)

If we compare them, the results will not be the same (whenever the list has more than one element):

sum_left_to_right(l) != sum_right_to_left(l)    # when len(l) > 1

This is all dandy, but what’s the use, you may ask? Well, to answer that, we need to cover one more kind of side-effect: mutating variables.

You may have seen this millions of times:

x = 1
x = x + 1

What this does is “create” a variable named x and assign it the value 1. After that it mutates the value to x + 1.

From the mathematical point of view this expression doesn’t make sense. There’s no value of x in ℕ, ℤ, ℚ, or ℝ that has this property (except if you work modulo 1, but that’s useless).

The thing is, the ability to freely mutate state is the source of many software bugs. Of course, you may not believe me, since I don’t have that much experience writing software, but if you look into it yourself, you’ll see that I’m right.

As for me, every time I am forced to write in Python or a similar language with hard-to-predict semantics (i.e. you look at the code and have no idea what will happen), I dread it.

Let’s look at this code:

x = ... # some value
f(x)
g(x)
# x = ???

The thing with Python is that you don’t know and can’t be sure. You don’t know, and the compiler doesn’t know either.

Simple values like int behave as if passed by value: rebinding the parameter inside the function doesn’t affect the caller:

def f(x): x = 2
x = 1
f(x)
assert x == 1

Same when you rebind a list:

def f(l): l = []
l = [1]
f(l)
assert l == [1]

However, mutations of the list’s contents are visible to the caller, since the list object itself is shared:

def f(l): l[0] = 2
l = [1]
f(l)
assert l == [2]

Don’t know about you, but this leaves a bad taste in my mouth.

The reason is that this logic is purely artificial; somebody just came up with those rules. That’s exactly why you have to write === instead of == in JavaScript, and why Java uses referential equality instead of structural equality (that’s why you have to write s1.equals(s2) instead of s1 == s2).

The ability to freely mutate state coupled with unintuitive semantics makes reasoning about programs hard. Reasoning is the ability to tell if the code is correct or not, especially useful when debugging.
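
The functional way to sidestep the whole issue, as a sketch: never mutate the argument, build and return a new list instead (f_pure is a name I made up):

```python
def f_pure(l):
    # Build a fresh list instead of mutating the caller's.
    return [2] + l[1:]

l = [1]
result = f_pure(l)
assert l == [1]       # the original is untouched
assert result == [2]
```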

In fact, humans are really bad at reasoning. Most of the logical thinking we do happens in the prefrontal cortex, the front of the brain, its highest layers. This part of the brain evolved last and is quite recent, and forcing it to work is quite difficult. We didn’t evolve to solve math problems naturally, the way we evolved to breathe unconsciously, for example. That’s why math is difficult in general.

But we have tools to aid us in that. Here is the tool: =.

“Looks like… an assignment?” you ask yourself. No, no, it is not an assignment, it is equality.

The ability to tell that two seemingly unrelated things are actually the same opens up a myriad of possibilities. On a side note, people who keep arguing “you can’t understand me”, or push cultural isolationism or whatever, are essentially robbing people of the tools necessary to understand the world, but that is a story for another day.

For example:

sum(angles of triangle) = 180°

Here we establish an equality. It may seem trivial, but it is useful: if you know the first two angles, you can use this equality to find the third:

a + b + c = 180 => c = 180 - a - b

Or take physics for example. We all may have seen this equation:

F = ma

Newton’s second law. When I first saw it, I didn’t pay much attention. Of course, this equation gets introduced in a rather boring context, with seemingly no use outside simple kinematics.

But take another example:

F = kx

This is Hooke’s law, i.e. the force a spring exerts when displaced by a distance x. Again, what of it?

See, the = symbol is actually really powerful. Because both equations describe the same force, we can combine them:

F = ma
F = kx
-------
ma = kx

Seems boring, but with this we can answer the question: “What is the acceleration of an object of mass m attached to a spring with spring constant k and displaced by x meters?”

Just basic algebra and voila!

(ma) / m = (kx) / m => a = kx / m

Now just plug in numbers and find the answer.
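
For example, with made-up numbers: a 2 kg object, a spring with k = 100 N/m, displaced by 0.1 m:

```python
m = 2.0    # mass, kg
k = 100.0  # spring constant, N/m
x = 0.1    # displacement, m

# From ma = kx:  a = kx / m
a = k * x / m
assert abs(a - 5.0) < 1e-9  # a ≈ 5 m/s^2
```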

Interestingly, physics is quite “functional” in a mathematical sense. Science in general is built on the shoulders of equality, or rather of equational reasoning. By measuring things and comparing them with others, or saying that one thing is equal to another, we establish connections. And it turns out those connections are quite strong. So the next time you’re wondering why you need to learn physics: this is why. To understand that things are interconnected. To discover the power of equational reasoning.

Let us move on.

Unfortunately, the application of equational reasoning in programming is rather limited. In the presence of side-effects it is outright impossible:

print("Hello World!") ??? print("Goodbye World!")

How do you even compare those things?

While comparison of those kinds of things is not well defined, we can compare values:

"Hello World!" != "Goodbye World!"

Side-effects complicate things: they complicate equational reasoning, they complicate reasoning in general. And they impose implicit restrictions that you can’t easily verify:

x = ... # some value
f(x)
g(x)

Again, back to our example. Can you tell what this code does? Can you find a mistake in it?

Probably not.

What about this?

def f(x):
  x.close()
  
def g(x):
  x.write("Hello filesystem!")

x = open("file.txt", "w")
f(x)
g(x)

Now do you see the problem? We are writing to a closed file. The correct order must be this:

def f(x):
  x.close()
  
def g(x):
  x.write("Hello filesystem!")

x = open("file.txt", "w")
g(x) # swapped
f(x) # places

This was an easy example, but the arbitrary invoking of side-effects from virtually anywhere can cause problems even at this scale. It requires us to read ALL the code to find bugs. Now you see why the job of a software developer is so difficult? Because you’re bad at it.

If the functions were pure, the order wouldn’t be so important, or rather the order of the functions would be explicit.

Take this imaginary example:

(f(x) + g(x)) * (k(x) - h(x))

All the functions are pure and perform some computations. It is easy to see that it doesn’t matter whether we evaluate f(x) first or g(x), or h(x) or k(x). However, it is also clear that we must compute f(x) + g(x) and k(x) - h(x) before we can multiply them, and that we must evaluate the functions themselves before we can do the addition or the subtraction.

With pure functions there are only computations, and with computations the order is described by data dependency.

Data dependency is when you need to calculate y, but y depends on x, i.e. y = f(x). And if x in turn depends on some other value u, i.e. x = g(u), then you get a clear dependency, or an order:

y = f(g(u))

Simple? Simple.
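
In code, with two made-up pure functions, the nesting itself spells out the only possible order:

```python
def g(u):
    # pure: x = g(u)
    return u + 1

def f(x):
    # pure: y = f(x)
    return x * 10

u = 4
y = f(g(u))  # g must run before f, because f needs g's result
assert y == 50
```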

Hopefully at this point it is clear why we don’t like mutation: mutation implies state, and state complicates our code in many unpredictable ways.

That’s why we as functional programmers tend to use immutable variables. The dictionary defines immutability as the quality of not changing, of staying the same.

If we take a problem of the form:

x + 3 = y => x = ?

While there are many values that x could take, once a value is taken, it doesn’t change. That’s the beauty of variables and how they should be used.

Of course, the question is: how do you write code with variables that don’t change? Simple, just create a new variable. And if that one needs to change, create yet another variable.

I know, it will be difficult initially. But once you start doing it, you’ll notice that since your variables are immutable, it becomes easy to reason about the program.
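
As a small sketch, the earlier x = 1; x = x + 1 rewritten with a fresh name for each value (the names x0, x1 are mine):

```python
# Each intermediate value gets its own name; nothing is mutated.
x0 = 1
x1 = x0 + 1   # a new variable, not a mutation of x0

assert x0 == 1  # the old value is still there to reason about
assert x1 == 2
```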

So, at this point, we are ready to wrap up the first part of our intro to functional programming:

  1. Use pure functions
  2. Functions that perform side-effects are impure
  3. An impure function called inside a pure function makes it impure
  4. Use immutable variables
  5. Don’t read from global variables
  6. Pass everything a function uses through its input arguments to keep it pure
  7. If side-effects are not observable from outside, the function is still pure
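
Points 6 and 7 combined in one small sketch: everything the function uses comes in through its arguments, and the loop mutates only a local variable, so from the outside the function is pure:

```python
def product(numbers):
    # 'acc' is mutated, but it never leaks: callers see a pure function.
    acc = 1
    for n in numbers:
        acc *= n
    return acc

assert product([2, 3, 4]) == 24
assert product([2, 3, 4]) == 24  # same input, same output, every time
```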

Even if you progress no further than this, just doing these things when you can will already show you the benefit. That’s how I started my journey into functional programming. With a small step. It’s fine if it’s your only step.

And for those who are still onboard, let us move on.