This series of articles attempts to present functional programming (FP) briefly and simply, introducing its most important concepts (currying, categories, monads, ...). The goal of this first article is to show that functional programming helps programmers write efficient and elegant code.
The different programming paradigms
Let us briefly recall what the main programming paradigms are and how they were introduced. There are two families of paradigms: the imperative paradigm and the declarative paradigm. The first tells the computer how to do things, while the second tells it what things are. For example, SQL is a declarative language because the result is queried without specifying the computational steps. On the other hand, in an imperative language like C, you must write out the steps that carry out your computation. From there, let's enumerate several paradigms of these two families:
- Imperative:
  - procedural, which groups instructions into procedures,
  - object-oriented, which groups instructions with the part of the state they operate on.
- Declarative:
  - functional, in which the desired result is declared as the value of a series of function applications,
  - logic, in which the desired result is declared as the answer to a question about a system of facts and rules.
People tend to be more familiar with imperative languages like C++, as computer science emerged from the imperative perspective. The concept appeared at a time when the notion of computability was not clear and needed to be defined properly. In his thesis, Turing defined the Turing machine, a model of computation based on an abstract machine manipulating symbols on an infinite tape divided into cells, according to a table of rules. The machine has a state and a head which can read and write symbols and move the tape to the left or to the right. A function is computable if there is an algorithm with a finite number of steps that can compute its result on a Turing machine. Turing machines and imperative programs have in common that they start from an input and take a series of steps that change a state stored in memory, ending with some output.
History of Functional Programming
The Turing machine is not the only way to define computability. Lambda calculus is a formal system in mathematics for expressing computation based on function abstraction and application. You can think of lambda calculus as a way to construct any computable function by composing, over and over, very simple and abstract functions. The classes of computable functions defined by lambda calculus and by Turing machines were proved to coincide: a function is $\lambda$-computable if and only if it is Turing computable.
This is the theoretical foundation of functional programming. Instead of defining a program as a list of steps, a program is defined as a composition of nested functions. Church invented lambda calculus to formalize the concepts of function and application with a simple syntax.
Church later developed a weaker system, the simply typed lambda calculus, which extends the lambda calculus by assigning a type to every term. This forms the basis for statically typed functional programming. The first high-level functional programming language, LISP, was developed in the late 1950s. LISP functions were defined using Church's lambda notation. There are now many functional programming languages, including ML, OCaml, F#, Haskell and Scala.
Functional programming languages are usually very different from imperative programming languages. The most significant differences stem from the fact that functional programming avoids side effects and is based on immutability, while OOP embraces mutation. Pure functional programming completely prevents side effects and provides referential transparency. Each paradigm has its own strengths and weaknesses, but FP is powerful when it comes to writing robust, efficient and reusable code for large projects, thanks to its features and properties:
- Pure functions and recursion
- Immutability
- Higher-order functions
- Currying
- Lazy evaluation
- Type system
Principles of functional programming
Pure functions and recursion
In most OOP languages, mutability transforms the data during computational steps, and side effects can be deliberately introduced. But mutation can introduce unpredictable behavior, which can make debugging a nightmare.
The good thing with FP is that this problem does not exist, as things are only declared locally: functions are pure. This makes programs much more robust and easier to debug. Besides, since any function is a composition of other functions, the code is reusable, modular, and testable. This also provides many benefits for parallel computing.
This property of functional programming is called referential transparency. Every expression denotes a single value, which cannot be changed by evaluating the expression or by sharing it between different parts of the program. There are no references to global data, because there is no global data, and there are no side effects, unlike in referentially opaque languages.
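As a small illustrative sketch in Python (the function `square` is ours, for illustration), referential transparency means a call to a pure function always denotes the same value, so the call can be replaced by that value anywhere:

```python
def square(x):
    """A pure function: its result depends only on its argument."""
    return x * x

# Referential transparency: square(3) always denotes the value 9,
# so the call can be replaced by that value without changing the program.
total = square(3) + square(3)
same = 9 + 9
print(total == same)  # True
```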
Let's look at an example of pure and impure functions in Python. In this example, we compute the Fibonacci numbers.
f1 = 0
f2 = 1
fn1 = f1
fn2 = f2

def fibo(n):
    global fn1, fn2  # reads and mutates global state: impure
    for _ in range(n - 1):
        fn2, fn1 = fn1 + fn2, fn2
    return fn2
Here, the variables are defined outside the function and mutated by it, so if I run the function a second time without resetting them, it won't compute the expected result.
On the other hand, the function defined below is pure. It uses recursion:
def fibo(n):
    if n <= 1:
        return n
    else:
        return fibo(n - 1) + fibo(n - 2)
Recursion is a process whose description leads to the repetition of the same rule. Like loops, recursion makes it possible to perform a number of operations that is not known in advance, because it is determined by the program's inputs; both mechanisms also make it possible to write programs that do not terminate.
Here is the Haskell version of the same function. It is also defined recursively, and things are only declared.
fib :: Integer -> Integer
fib 0 = 0
fib 1 = 1
fib n = fib (n-1) + fib (n-2)
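As a side note, the naive recursive definition recomputes the same subproblems many times. A sketch in Python of a variant that stays pure yet runs in linear time: all state travels through the arguments instead of mutating globals (the helper name `fibo_acc` and its accumulator arguments are ours, for illustration):

```python
def fibo_acc(n, a=0, b=1):
    # Pure: the running pair of Fibonacci values is passed as
    # arguments, so nothing outside the call is ever mutated.
    if n == 0:
        return a
    return fibo_acc(n - 1, b, a + b)

print(fibo_acc(10))  # 55
```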
Immutability
Immutability is the property that once a value is bound to a name, it cannot be changed. To start, let's consider this example of how mutability works in Python:
>> x = [1, 2, 3]
>> x
[1, 2, 3]
>> x.reverse()
>> x
[3, 2, 1]
Calling the reverse method changed the state of the variable x. Haskell is an immutable language. Variables and objects are defined with no specific definition order, a variable or a function cannot be defined twice, and functions cannot change the state of a variable; they only create new values. Let's see the same example in Haskell:
>> let a = [1,2,3]
>> reverse a
[3,2,1]
>> a
[1,2,3]
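Python can be written in the same immutable style by preferring operations that return a new value over methods that mutate in place; a minimal sketch:

```python
x = [1, 2, 3]

# reversed(...) builds a new list; x itself is left untouched,
# unlike the in-place x.reverse() seen above.
y = list(reversed(x))

print(y)  # [3, 2, 1]
print(x)  # [1, 2, 3]
```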
First-class and higher-order functions
In functional programming, functions are allowed both to take functions as parameters and to return functions. Such functions are called higher-order functions. Their use is at the heart of the functional programming approach. For example, since for loops are absent from functional programming, higher-order functions can be used to compute equivalent results. The map function, for instance, converts a function from type A to type B into a function from List[A] to List[B]. Let's try to write its definition in Python:
def mapList(func):
    return lambda xs: [func(x) for x in xs]
Suppose you want to apply the expression $(x^2 + 5)$ to each element of a list of integers. The map function will convert the function $x \rightarrow (x^2 + 5)$ into a function which applies the expression to each element of a list of integers.
>> func = lambda x : (x**2 + 5)
>> xs = [3, 2, 4, 1, 9]
>> mapList(func)(xs)
[14, 9, 21, 6, 86]
There is a native map function in Python:
>> list(map(lambda x: (x**2 + 5), [3, 2, 4, 1, 9]))
[14, 9, 21, 6, 86]
In Haskell, the map function can be implemented recursively:
mapList :: (a -> b) -> [a] -> [b]
mapList f [] = []
mapList f (x:xs) = f x : mapList f xs
Currying
In functional programming, currying is the transformation of a function with several arguments into a function with one argument that returns another function on the rest of the arguments. For example, let's consider the function add defined as follows: $add: (x, y) \rightarrow x + y$, where $x$ and $y$ are integers. In Haskell, we can define the function as follows:
add :: (Int -> Int -> Int)
add x y = x + y
With this function, we can partially apply $add$ to any integer. Let's look at its type once partially applied:
>> :t add
add :: Int -> Int -> Int
>> :t add 2
add 2 :: Int -> Int
>> :t add 2 3
add 2 3 :: Int
The two-input function $add$ is also a one-input function returning a function. To be more precise, the function add :: (Int -> Int -> Int) also has type (Int -> (Int -> Int))! Thus, add can be seen as a function which, for any integer, returns another function that takes an integer as input and returns an integer. In Python, this feature is available via functools.partial, but it is much less elegant because we need to call partial every time we want to partially apply a function.
from functools import partial

def add(x, y):
    return x + y
And then:
>> partial(add, 2)(3)
5
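Currying can also be emulated by hand in Python with nested functions (the name `add_curried` is ours, for illustration):

```python
def add_curried(x):
    # Returns a new function that closes over x,
    # mirroring the Haskell reading Int -> (Int -> Int).
    def add_x(y):
        return x + y
    return add_x

add2 = add_curried(2)   # partial application: still waiting for y
print(add2(3))  # 5
```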
Lazy evaluation
In programming language theory, lazy evaluation, or call-by-need, is an evaluation strategy which delays the evaluation of an expression until its value is needed (non-strict evaluation) and which also avoids repeated evaluations. This technique is used in FP because it separates the definition of a variable from the moment its value is needed. Playing with these techniques can increase the efficiency of the code.
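Haskell is lazy by default; Python is strict, but generators give a rough sketch of delayed evaluation (note that, unlike true call-by-need, generators do not memoize their results):

```python
from itertools import islice

def naturals():
    # A conceptually infinite sequence: values are only
    # produced when the consumer asks for them.
    n = 0
    while True:
        yield n
        n += 1

# Only the first five values are ever computed.
first_five = list(islice(naturals(), 5))
print(first_five)  # [0, 1, 2, 3, 4]
```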
Type systems
By now, you should have noticed that in Haskell, we always mention the type of every function and variable. Functional programming languages have tended to use typed lambda calculus, rejecting all invalid programs at compilation time at the risk of false positive errors, as opposed to the untyped lambda calculus, which accepts all valid programs at compilation time at the risk of false negative errors. The use of algebraic data types makes the manipulation of complex data structures convenient; the presence of strong compile-time type checking makes programs more reliable in the absence of other reliability techniques such as test-driven development, while type inference frees the programmer from the need to manually declare types to the compiler in most cases. For example, if I try to define the following function:
add :: Char -> Char -> Char
add x y = x + y
we will get an error, because + is not defined for Char (No instance for (Num Char) arising from a use of '+'). As a consequence, the program does not compile.
Conclusion
By now, you should understand the most basic concepts of functional programming. If you want to learn about Haskell, I recommend this link. In the next posts, we will learn more about lambda calculus, and then about category theory. Talk to you soon.