In 1978 Intel released the 8086, the first processor of the x86 line. It ran at 5 megahertz, had a single core, and looked more like a large microcontroller than a modern CPU. Seven years later they released the 80386, which ran at a maximum speed of 40 megahertz and also had a single core.
Twelve years after that they released the Pentium II, which ran at 450 megahertz and still had only one core. It wasn't until eight years later that they released their first processor with two cores, running at 3.2 gigahertz. Then, 15 years later, AMD released a processor with 64 cores running at about 4 gigahertz.
Computers aren't getting much faster. They haven't been getting much faster for over a decade, and they're not likely to get much faster in the near future. The laws of physics, as they apply to silicon computer chips, forbid them from getting faster without an exponential increase in power consumption. Research is being done on alternative materials, but you'd still be a fool to assume that computers will get much faster any time soon.
Computers aren't getting faster, but their transistors are still getting smaller. That means chips will recognize more instructions, carry more memory and cache, and, above all, have more cores. Now is the time to become good at programming systems with many cores. So here's how to do that.
What is functional programming? In a nutshell, functional programming is programming where we minimize the percentage of functions in our code that have side effects. What is a side effect? A side effect is anything a function does besides computing a return value from its arguments: reading random numbers, touching a global variable, or doing any kind of I/O.
So why is there a word for this? Because functional functions are purely deterministic. You can call one with a set of arguments, get a return value, wait a while, call it again with the same arguments, and be guaranteed to get the same return value. That, as it turns out, is a powerful assumption to be able to make about a function.
If a function reads random numbers, it's not deterministic. If it takes input, it might return a different result when whatever supplies that input changes. If it does output, the output operation might fail. If it accesses a global variable, that variable might have changed since the last time the function was called.
To put it plainly, a functional function does not rely on its environment. It's just a subroutine that takes in arguments, does a calculation, and returns an answer. A functional function just does a computation.
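To make the distinction concrete, here's a minimal sketch in Haskell (a language we'll come back to later). A pure function's type mentions only its inputs and output, while a side-effecting one is marked with IO:

-- Pure: the result depends only on the arguments.
hypotenuse :: Double -> Double -> Double
hypotenuse a b = sqrt (a * a + b * b)

-- Impure: the IO in the type marks the side effect of reading input.
askName :: IO String
askName = getLine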
So, some of the advantages: functional functions are easier to use. They line up with how we naturally write software (more on that later). There's less internal state to think about. You don't need to get a whole environment set up just to call one; you can pass the environment to it as an argument.
They're more consistent, so bugs are more reproducible, and there are no race conditions to worry about. That makes them easier to debug, and therefore more stable and more secure. They're inherently thread safe. They're faster, because they make no system calls, and there's less error checking to deal with. They're more portable, because they don't rely as heavily on system-specific system calls. It's also easier to lift the code out into a shared library if the project gets too big.
And if the compiler knows that a function has no side effects, it can apply optimizations to it that it couldn't safely apply to functions with side effects. Functional code gets much faster when the compiler knows what has side effects and what doesn't.
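As a rough sketch of why, assuming a pure square function: the compiler is free to compute a repeated call once and reuse the result, a rewrite that would change the program's behavior if square, say, printed something:

square :: Double -> Double
square x = x * x

-- Because square is pure, a compiler may rewrite this as
--   let s = square a in s + s
-- evaluating square once instead of twice.
twice :: Double -> Double
twice a = square a + square a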
To put it plainly, maximizing the amount of your code that's functional has nothing but advantages. How often in life do we come across something that has nothing but advantages?
Suppose we're writing a hypothetical project, say an image library. Don't write your code like this:
Img getImage(string filename);
where you write a function that takes a file name as an argument, reads that file, and then returns an image object. This means the file can only come from the disk, and can't be taken directly from a network connection.
Instead write your code like this:
Img getImage(byte[] fileContents);
Img getImage(channel byte[] fileContents);
where it takes in either the entire contents of the file as an argument, or some kind of channel that yields the contents of the file one block of data at a time.
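Here's what that first form might look like in Haskell; Img and the decoder body are hypothetical stand-ins. Because the function only sees bytes, the caller decides whether those bytes come from a file, a socket, or a test fixture:

import qualified Data.ByteString as BS

data Img = Img  -- hypothetical image type

getImage :: BS.ByteString -> Maybe Img
getImage bytes
    | BS.null bytes = Nothing
    | otherwise     = Just Img  -- real decoding would go here

-- The caller chooses the source:
fromDisk :: FilePath -> IO (Maybe Img)
fromDisk path = getImage <$> BS.readFile path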
Don’t write your blur method like this:
class Img {
    void blur()
    {
        // blur this instance of the image.
    }
}
where the method blurs that instance of the image object in place. Odds are your user will want to keep the original.
Instead write it like this:
class Img {
    Img blur()
    {
        // code
        return copyOfImageButBlurred;
    }
}
where we return a copy of the image that's exactly the same except blurred. That's not as efficient with memory, but we don't have to be that stingy with memory anymore.
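In Haskell this shape is the only option, since values are immutable. A hypothetical sketch, reusing the Img type from the earlier sketch:

-- blur maps an image to a new, blurred image. Nothing is
-- mutated, so the caller keeps the original for free.
blur :: Img -> Img
blur img = img  -- stand-in; real code would compute the blurred copy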
If you’re writing an autopilot then don’t write your code like this:
int main()
{
    // get sensor data
    // react to input
    // get more sensor data
    // react to input some more
}
where we do random stuff in whatever order we feel like.
Instead add some organization to your code by writing it like this:
int main()
{
    state := initializeState();
    while (1) {
        input := getSensorInput();
        state, commands := processInputFunctionally(state, input);
        runCommands(commands);
    }
}
where we initialize the state, and then in a loop we gather sensor data, feed it and the state into a big function with no side effects, get back the new state and a list of commands, and then execute those commands. This way the code is more modular. The main code for the autopilot can be cleanly removed and hooked into a simulator for testing.
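Here's a hedged Haskell sketch of that same shape, with all the types and helpers as stand-ins. The pure core is a single function from old state and input to new state and commands, and only the thin loop around it touches the world:

data State   = State    -- stand-ins for the real autopilot types
data Input   = Input
data Command = Command

-- The pure core: no side effects, so a simulator can drive it directly.
processInput :: State -> Input -> (State, [Command])
processInput state _input = (state, [])  -- real logic goes here

getSensorInput :: IO Input
getSensorInput = return Input  -- stand-in for reading hardware

runCommand :: Command -> IO ()
runCommand _ = return ()       -- stand-in for actuating hardware

-- The impure shell: the only part that talks to the hardware.
mainLoop :: State -> IO ()
mainLoop state = do
    input <- getSensorInput
    let (state', commands) = processInput state input
    mapM_ runCommand commands
    mainLoop state'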
But there's more to it than sprinkling this style into everyday languages. Functional programming can be taken up a notch with functional programming languages: languages specifically designed around this style.
Let's look at the first one: Lisp. Lisp is a functional scripting language, and in idiomatic Lisp variables are treated as immutable: bound once and never reassigned. Why is that workable? Remember earlier when I told you to write the blur function so it returns a blurred copy of the image? There's a reason for that. As it turns out, statistically, most variables are set once and then never written to again, and the variables that are written to again tend to be written to in a loop.
As it turns out, you can replace loops with recursion, do away with ever modifying a variable after it's set, and still be Turing complete.
This way people who read your code can assume that any variable is only set once, which makes it easier to reason about what the code is doing.
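For example, the running total that an imperative loop would keep in a mutable variable becomes an argument that each recursive call passes along, so nothing is ever reassigned. A quick Haskell sketch:

-- Imperative version (pseudocode):
--   total = 0
--   for x in xs: total = total + x
-- Recursive version: total is never mutated, just passed anew.
sumList :: [Int] -> Int
sumList xs = go 0 xs
  where
    go total []       = total
    go total (x:rest) = go (total + x) rest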
If we're going to create a language where functions are called constantly, we need a syntax that makes calling functions easy.
And here’s what that looks like.
(sqrt (add (square a) (square b)))
As you can see, there's no need for commas. Arguments are separated by spaces, and the function name goes inside the parentheses, right before its arguments.
While that's certainly nice, Lisp lacks an important feature: declarative programming. If we're going to write lots of recursive functions, we need to make that easy to do. Most recursive functions have some kind of if statement that checks the arguments so the function knows when to stop. So why not add special syntax to do exactly that?
factorial :: Int -> Int
factorial 0 = 1
factorial x = x * (factorial (x - 1))
This is what Haskell code looks like. We declare multiple versions of the function, each with its own pattern for the arguments it expects. When we call the function, it tries one pattern after another until it finds one that fits, and then runs the associated version. If you want a function that takes the factorial of something, you can write it in just three lines of Haskell.
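For instance, evaluating factorial 3 misses the 0 pattern, falls through to the general case, and recurses until the 0 pattern finally matches:

-- factorial 3
--   = 3 * factorial 2
--   = 3 * (2 * factorial 1)
--   = 3 * (2 * (1 * factorial 0))
--   = 3 * (2 * (1 * 1))
--   = 6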
This is a step toward declarative programming. In declarative programming we give the computer a set of declarations, then pose it a query, and it uses those declarations to figure out the answer to that query.
We can use pattern matching and declarative programming for more than that. Prolog is a scripting language based on predicate logic. In predicate logic we take the set of everything in the entire universe and use functions that return true or false, known as predicates, to filter out anything that's not an answer until we're left with the set of all answers, and we can do so declaratively.
Here’s what that looks like.
is_smart(noah).
is_smart(nate).
is_smart(bob).
We can issue a query such as ?- is_smart(X). at an interactive prompt, and it will list everyone who is smart. We can also create predicates that use other predicates to filter out invalid answers.
is_smart(noah).
is_smart(nate).
is_smart(bob).
has_computer(noah).
has_computer(nate).
is_programmer(X):- is_smart(X), has_computer(X).
The way this works is through backtracking. Here is a page that explains how that works internally: http://www.amzi.com/articles/prolog_under_the_hood.htm
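For comparison, here's a rough Haskell analogue of the is_programmer rule, using a list comprehension to do the filtering. It's a sketch of the idea, not of how Prolog actually searches:

smartPeople :: [String]
smartPeople = ["noah", "nate", "bob"]

computerOwners :: [String]
computerOwners = ["noah", "nate"]

-- Every smart person who also owns a computer.
programmers :: [String]
programmers = [x | x <- smartPeople, x `elem` computerOwners]
-- ["noah","nate"]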
Prolog has another feature: atoms. An atom is like a string, except that it's immutable and cheap to compare. Atoms are used as messages that get passed around. A lot of large programming projects send information around as strings, so why not make that easier and more efficient? That's what Prolog does. Any identifier that doesn't start with an uppercase letter (uppercase names are variables) is assumed to be an atom.
Atoms show up in other languages as well, for example Erlang.
Erlang is a programming language that was designed for distributed computing systems. It's functional, concurrent, and declarative, and part of its declarative nature is that it can pattern match atoms against incoming messages.
Here we see something like that:
process_http(...) ->
    receive
        {get, ...} ->
            % foo
        {post, ...} ->
            % bar
    end.
When this Erlang function reaches this chunk of code, it checks its mailbox for messages. It executes the code at foo if the message is a tuple that starts with the atom get, and the code at bar if the message is a tuple that starts with the atom post.
Erlang is specifically designed to be THE solution for distributed computing systems. It's got almost everything you could want, with one notable gap: backpressure.
An Erlang process's mailbox is unbounded. It's very easy to accidentally stream a huge file at a process faster than it can drain its mailbox, until the data no longer fits in memory on the machine the Erlang routine is running on. Unlike in Go, where every channel has a fixed capacity (zero by default) and a sender simply blocks when it's full, an Erlang process can be flooded until the machine runs out of memory.
So how do we solve this problem? Earlier I suggested that we should write functions that load up images by taking the entire contents of the image file as an argument. What if the image is too big to fit into memory?
This is where lazy evaluation comes in. Lazy evaluation allows us to build infinitely large data structures, because lazy expressions are only evaluated when they're needed. The reason this works is that a pure function can be executed later instead of right now, since it will return the same result either way.
Haskell is a lazy language: every expression in Haskell is lazily evaluated. This lets us write programs that naturally process input while they're still reading it. You can't tell when a given chunk of code will run in a lazy language, but you don't have to care, because it will give the same result in the end anyway.
fib :: Int -> Int -> [Int]
fib a b = a : fib b (a + b)
The above code returns the entire Fibonacci sequence as an infinite list. Each call produces a cell whose head is the first argument and whose tail is a thunk: a suspended call to fib holding its arguments, which isn't evaluated until someone asks for the next element.
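Because the list is only built as it's consumed, the caller just asks for as much as it needs:

-- take forces only the first ten cells of the infinite list.
firstTen :: [Int]
firstTen = take 10 (fib 0 1)
-- [0,1,1,2,3,5,8,13,21,34]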
But what if we hit an error while lazily loading the image? This is where monads come in. A monad is a pattern we can use to handle errors, and also to thread things like global state. Here's the idea: in C, if we have a function that does I/O, that I/O might fail, so the function needs to return an error indicator. But then every function calling THAT function is also effectively doing I/O, because something it called does I/O, so it needs to return an error indicator too.
Ultimately what we tend to do in real life is either handle the error if we can or, more often, pass it up one level until it reaches the top, where the user can deal with the problem. That's exactly what a monad automates. A function that uses a monad in Haskell either returns a successful value or propagates a failure value.
import qualified Data.ByteString as BS

getImage :: MonadFail m => BS.ByteString -> m Img
getImage fileContents
    | BS.null fileContents = fail "Error: empty file"
    | otherwise = do
        -- code
        return img

main :: IO ()
main = do
    c <- BS.readFile "file.png"
    i <- getImage c
    foo i
    bar
You can also chain monadic functions together so that if any step in the process fails, the whole chain stops right there. Here, if getImage is given empty file contents, it calls fail with a message saying so. The function that calls getImage gets to choose, through the MonadFail constraint, which concrete type actually carries the error message or the Img.
In this case the main function uses IO as the monad, so if readFile fails the failure surfaces in IO, carrying the error message. If readFile doesn't fail, getImage is called. If getImage doesn't fail, foo is called with the argument i, and if that doesn't fail, bar runs. If any of them fail, main stops there with an error.
Of course, monads can be used for more than just errors. If we have a data type that contains another data type, such as a list or a binary tree, monads make it easy to apply a function that doesn't know or care whether the data lives in a tree, a list, a hash map, or whatever.
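For instance, every monad supports fmap, which applies a plain function inside whatever container the data happens to live in. The doubling code below neither knows nor cares what that container is:

doubleAll :: Functor f => f Int -> f Int
doubleAll = fmap (* 2)

-- doubleAll [1, 2, 3] == [2, 4, 6]
-- doubleAll (Just 4)  == Just 8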
I hope you'll try out some of the ideas I've presented here. No idea should go unchallenged forever, and I think the idea of being "truly object oriented" needs to be challenged now. Unlike the vaguely defined notion of truly object oriented, functional programming has a precise definition.
With functional programming you can apply the idea to your existing code bit by bit and see immediate improvements. Simply finding functions and methods that have unnecessary side effects, and removing those side effects (or moving them somewhere else), can greatly improve maintainability.
It can be slightly hard to get into the habit of doing things in a functional way, but it’s well worth it.
The future is functional.