The appropriateness of the F-(co)algebra lens

This is a discussion of \(F\)-(co)algebras, and their appropriateness as abstractions in general programming. I want to try and drive home the following points:

  • All \(F\)-(co)algebras are expressible as typeclasses
  • Not all typeclasses are expressible as \(F\)-(co)algebras, or combinations of both
  • No (interesting) algebraic structures are \(F\)-(co)algebras (at least, without further information)
  • No (interesting) algebraic structures are typeclasses without a lot of extra work

The discussion will be in Haskell. Some of the points above are partially due to constraints of Haskell itself, particularly the last two, which are a general problem with idiomatic Haskell constructions. Nonetheless, rambling follows.

Typeclasses

A typeclass is exactly what it sounds like - it is a way of defining a class (in the mathematical sense, rather than the OOP sense) of types. What brings these types together into a single class is their adherence to some interface. In Haskell, typeclasses are the language feature providing a mechanism for ad-hoc polymorphism.

Here’s the ubiquitous example, of a monoid:

-- The class of monoids (types with an associative binary operation that
-- has an identity).  Instances should satisfy the following:
--
-- [Right identity] @x '<>' 'mempty' = x@
-- [Left identity]  @'mempty' '<>' x = x@
-- [Associativity]  @x '<>' (y '<>' z) = (x '<>' y) '<>' z@ ('Semigroup' law)
-- [Concatenation]  @'mconcat' = 'foldr' ('<>') 'mempty'@
--
-- The method names refer to the monoid of lists under concatenation,
-- but there are many other instances.
--
-- Some types can be viewed as a monoid in more than one way,
-- e.g. both addition and multiplication on numbers.
-- In such cases we often define @newtype@s and make those instances
-- of 'Monoid', e.g. 'Data.Semigroup.Sum' and 'Data.Semigroup.Product'.
--
-- __NOTE__: 'Semigroup' is a superclass of 'Monoid' since /base-4.11.0.0/.
class Semigroup a => Monoid a where
        -- | Identity of 'mappend'
        --
        -- >>> "Hello world" <> mempty
        -- "Hello world"
        mempty  :: a

        -- | An associative operation
        --
        -- __NOTE__: This method is redundant and has the default
        -- implementation @'mappend' = ('<>')@ since /base-4.11.0.0/.
        -- Should it be implemented manually, since 'mappend' is a synonym for
        -- ('<>'), it is expected that the two functions are defined the same
        -- way. In a future GHC release 'mappend' will be removed from 'Monoid'.
        mappend :: a -> a -> a
        mappend = (<>)
        {-# INLINE mappend #-}

        -- | Fold a list using the monoid.
        --
        -- For most types, the default definition for 'mconcat' will be
        -- used, but the function is included in the class definition so
        -- that an optimized version can be provided for specific types.
        --
        -- >>> mconcat ["Hello", " ", "Haskell", "!"]
        -- "Hello Haskell!"
        mconcat :: [a] -> a
        mconcat = foldr mappend mempty

This says that a type \(T\) is a Monoid if it is a Semigroup and it has a distinguished element mempty. Notice that the library definition of Monoid also includes two other methods, mappend and mconcat, but provides default implementations for both. While a monoid in the mathematical sense must have both of these operations defined (though concatenation comes from recursive applications of appending), here mappend is defined in terms of the underlying semigroup's binary operation <>, which is borrowed as the appending operation for the monoid, and mconcat in terms of mappend and mempty.
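As the docstring above notes, a type like Int can be a monoid in more than one way, and the standard library resolves this with newtypes. A minimal sketch using Sum and Product from Data.Monoid:

```haskell
import Data.Monoid (Sum (..), Product (..))

-- Int is a monoid under addition (mempty = 0) via Sum, and under
-- multiplication (mempty = 1) via Product; the newtype wrapper selects
-- which instance is in scope.
additive :: Int
additive = getSum (mconcat (map Sum [1, 2, 3, 4]))

multiplicative :: Int
multiplicative = getProduct (mconcat (map Product [1, 2, 3, 4]))

main :: IO ()
main = print (additive, multiplicative)  -- (10,24)
```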

Important to note is the docstring of this typeclass. It pleads that the user of this typeclass should ensure that any instance satisfies certain constraints. These constraints are absolutely required in the mathematical definition of a monoid, but here, they cannot be enforced. Indeed, we can quite easily bake up a Monoid which is not a true monoid - consider the following, over the integers:

-- We specify a wrapper for `Int`, since `Int` is already a `Monoid` 
-- in the correct sense in Haskell
data XInt = XInt { n :: Int } deriving (Eq, Show)
-- Make XInt into a Semigroup by specifying <> as (the lift of) addition
instance Semigroup XInt where
    a <> b = XInt (n a + n b)
-- Make XInt into a Monoid by specifying mempty. Here, we specifically 
-- ensure it is _not_ the identity
instance Monoid XInt where
    mempty = XInt 1


someInteger = XInt 10
integerAfterApplicationOfIdentity = someInteger <> mempty

-- We have a problem though - the elements are now not equal
main = putStrLn $ show (someInteger == integerAfterApplicationOfIdentity)

A typeclass is often designed with additional constraints like this in mind. One could be tempted to argue that it is our notion of equality that is flawed here - I will just note that, following strictly the (mathematical) definition of a monoid, this is not an argument that will go very far.

What can we do about this?

What is the issue we are running into here? The problem is that constraints on the interplay between the concrete operations an instance provides just aren’t expressible in the typeclass feature we have to work with. In a broad sense, they are external to the feature entirely. In theory, there are ways we could attempt to work around this - for example, we could require that the user provide, or that the compiler can derive, some kind of proof that (say) two functions are equivalent. This would have to be derivable at compile-time, and then we could require this “evidence burden” as part of the typeclass definition. I am sure this could lead to some serious optimisation tricks from the compiler.

This idea requires some serious machinery. We would need some form of proof verification engine in a preprocessor of the Haskell compiler to allow (probably quite verbose) user-defined proofs to be specified, and to generate non-trivial proofs by itself the compiler would need to be very sophisticated indeed. I have come across attempts at writing proofs for things like this in a way that Haskell can currently understand, and there are existing compiler extensions, such as ConstraintKinds, which may be of value to us here.

There are issues with this though, not least of which is the fact that, particularly in languages in the OOP sphere, ad-hoc polymorphism is used for things vastly more complex than specifying behavioural constraints expressible in any reasonable language specification. Even if it were possible to write a verification engine which could verify all describable conditions (and I would hazard a guess that an appeal to Gödel’s incompleteness theorems would put a stop to this thought), and to make it suitably fast, we would still run into the issue that, in the real world, this form of polymorphism is used to define interfaces, which will always be able to describe things outside of the language. How would one attempt to prove that the following is valid, for example?

-- getStr should, for at least one input, return a section of prose 
-- which Dvorak would have found appealing
class XXX a where 
    getStr :: a -> String

As a result, the idiomatic approach is to keep these laws around in docstring comments, as done here, rather than enforce them.
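Short of proofs, the laws can at least be spot-checked at sample values - a hand-rolled sketch of what a property-testing library does far more thoroughly. Note that checkIdentityLaws is an illustrative helper, not a standard library function, and XInt is re-derived here so the block stands alone:

```haskell
-- Check the identity laws at a list of sample points.  This cannot
-- prove the laws hold, but it can witness a violation.
checkIdentityLaws :: (Monoid a, Eq a) => [a] -> Bool
checkIdentityLaws = all (\x -> x <> mempty == x && mempty <> x == x)

-- The broken instance from earlier: mempty is deliberately not an identity
newtype XInt = XInt Int deriving (Eq, Show)

instance Semigroup XInt where
  XInt a <> XInt b = XInt (a + b)

instance Monoid XInt where
  mempty = XInt 1

main :: IO ()
main = print (checkIdentityLaws (map XInt [0, 1, 10]))  -- False
```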

\(F\)-algebras

An \(F\)-algebra on a category \(\mathcal{C}\) is a pair \((A, \alpha)\), for a fixed functor \(F : \mathcal{C} \rightarrow \mathcal{C}\), where:

  • \(A\), known as our carrier, is any object of \(\mathcal{C}\),
  • \(\alpha\) is a morphism \(F(A) \rightarrow A\)

This is a generalisation of algebraic structures we are familiar with. For a given \(F\), we can define maps between \(F\)-algebras \((A, \alpha)\) and \((B, \beta)\), which we call homomorphisms, by taking a \(\mathcal{C}\)-morphism \(f: A \rightarrow B\) between the carriers and requiring that \(f \cdot \alpha = \beta \cdot F(f)\). This allows us to define a category of \(F\)-algebras over \(\mathcal{C}\), and from here we are able to distinguish particular (isomorphism classes of) objects through universal properties. This is where the abstraction becomes extremely attractive.

(Almost) writing an algebraic structure as an \(F\)-algebra

\(F\)-algebras generalise algebraic structures in the following way. An algebraic structure consists of a non-empty set \(A\) (known as the carrier), a collection of maps \(f : A^n \rightarrow A\), with each \(n \in \mathbb{N}\) (known as the operations), and a finite set of relations between the operations (known as the axioms). In the monoid instance, we have:

  • \(i: A^0 \rightarrow A\) distinguishing our identity element, and \(m: A^2 \rightarrow A\) defining how elements are joined
  • The axioms:
    • \(\forall a, b, c \in A : m(m(a, b), c) = m(a, m(b, c))\;\) (associativity)
    • \(\forall a \in A : m(i, a) = m(a, i) = a\;\) (identity)

When we pass to viewing this as an \(F\)-algebra - and I feel this is a point which is always brushed over in discussions of this topic - we throw away the data of the axioms. For any given \(A\), what we are left with is a collection of maps \((f_i: A^{n_i} \rightarrow A)_{i \in I}\) - we glue these together along a coproduct, to get a single map \(\alpha_A = [f_i]_{i \in I} : \coprod_{i \in I} A^{n_i} \rightarrow A\). Letting \(F(B) = \coprod_{i \in I} B^{n_i}\), and defining \(F\) on morphisms to be the obvious coproduct of products, we get a functor on \(\textrm{Set}\) such that whenever \(A\) was a set endowed with this algebraic structure, \((A, \alpha_A)\) is an \(F\)-algebra.

By doing things in this way, we generalise in a couple of key respects:

  • We no longer totally restrict ourselves to collections of maps \(A^n \rightarrow A\). Indeed, coproduct components of the signature \(F(A)\) can be anything we want, so long as we can define the images of morphisms under \(F\). Common examples include:
    • Algebraic data types on \(A\). This allows us, for example, to think about recursive types
    • Types which do not depend on \(A\) at all. We can set \(F(f)\) to be the identity on these components of the coproduct
  • More fundamentally, we no longer restrict ourselves to working over \(\textrm{Set}\). This abstract definition works over any category with the required features (products, coproducts, etc.) and is how we can define things like topological groups, once we put the axioms back in
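Both of the first two generalisations appear in, for example, the signature of binary trees with integer leaves - a component that ignores the carrier entirely, and a product of two copies of it. A minimal sketch:

```haskell
-- A signature with a component that ignores the carrier (Leaf Int)
-- and a component that is a product of two copies of it (Node a a).
data TreeF a = Leaf Int | Node a a

-- F on morphisms: apply f where the carrier appears, identity elsewhere
instance Functor TreeF where
  fmap _ (Leaf n)   = Leaf n
  fmap f (Node l r) = Node (f l) (f r)

-- A TreeF-algebra over Int, combining two subtree summaries by addition
sumAlg :: TreeF Int -> Int
sumAlg (Leaf n)   = n
sumAlg (Node l r) = l + r

main :: IO ()
main = print (sumAlg (Node 3 4))  -- 7
```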

But in doing so, we have sacrificed the specificity of our axioms. Many axioms can be specified in terms of commutative diagrams that the operations must satisfy, but that data belongs to the underlying category and the specific \(F\)-algebra instance, rather than being embedded in the functor \(F\). What this viewpoint does do, however, is bring us into the realm of typeclasses. Of course, not all typeclasses can be written as \(F\)-algebras (or \(F\)-coalgebras), with the obvious example being something like:

class NotAnFAlgebra a where
    listMap :: [a] -> [a]

In particular, since the axioms are not embedded in \(F\), we can never recover them in a pure \(F\)-algebra viewpoint, unlike in the typeclass world where we may be able to work around this with extensions. A great many typeclasses can be written like this, however. Consider the following, where we show how the monoid example from earlier might be expressed:

-- Define an F-algebra to be its associated morphism \alpha, since this
-- contains the data of the carrier object a
type Algebra f a = f a -> a

-- Define the functor MonoidF which will give the signature for a monoid
data MonoidF mF = Id | Join mF mF

-- Define a MonoidF-algebra over the Ints by specifying \alpha, an Algebra
fAlgMonoidOverIntegers :: Algebra MonoidF Int
fAlgMonoidOverIntegers Id = 0
fAlgMonoidOverIntegers (Join a b) = a + b

main = putStrLn $ show (fAlgMonoidOverIntegers Id)

Distinguished objects in the category of \(F\)-algebras

The category-theoretic machinery becomes particularly worthwhile when we consider distinguished objects in the aforementioned category of \(F\)-algebras for a given \(F\). In particular, one can talk about the initial algebra, i.e., an initial object in this category. Notice that if \((A, \alpha)\) is such an initial object, then \(A \cong F(A)\) via \(\alpha: F(A) \rightarrow A\) (this is Lambek’s lemma). In this manner, \((A, \alpha)\) is the “smallest fixed point” of \(F\).

For example, consider the functor on Hask defining the signature \(A \longmapsto 1 + (\mathbb{Z} \times A)\). After a bit of consideration we find an initial object for this \(F\)-algebra category - consider taking \(A\) to be the set of all finite sequences of elements of \(\mathbb{Z}\), and \(\alpha\) as:

\[\alpha: 1 \longmapsto \emptyset ;\;\; (n, L) \longmapsto \textrm{append}(L, n)\]

This is a List data type - here, we have shown it can be derived up to isomorphism from nothing more than the type signature of \(F\).
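This derivation can be mimicked directly in Haskell using the standard fixed-point construction (with Int standing in for \(\mathbb{Z}\)); Fix gives the carrier of the initial algebra, and the catamorphism witnesses initiality as the unique homomorphism out of it:

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- The signature A ↦ 1 + (Int × A)
data ListF a = Nil | Cons Int a deriving Functor

-- The (least) fixed point of a functor: the carrier of the initial algebra
newtype Fix f = Fix (f (Fix f))

-- Initiality: for any algebra alg there is a unique homomorphism out of
-- the initial algebra - the catamorphism, i.e. a generalised fold
cata :: Functor f => (f a -> a) -> Fix f -> a
cata alg (Fix x) = alg (fmap (cata alg) x)

-- Fix ListF is (isomorphic to) the familiar list type
example :: Fix ListF
example = Fix (Cons 1 (Fix (Cons 2 (Fix Nil))))

-- A ListF-algebra over Int that sums the elements
sumAlg :: ListF Int -> Int
sumAlg Nil        = 0
sumAlg (Cons n s) = n + s

main :: IO ()
main = print (cata sumAlg example)  -- 3
```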

\(F\)-coalgebras

Of course, we can also dualise. Coalgebraic structures are the same as algebraic structures, except our operations go \(A \rightarrow A^n\). \(F\)-coalgebras are pairs \((A, \alpha)\), where this time \(\alpha: A \rightarrow F(A)\); homomorphisms of \(F\)-coalgebras are given by reversing all the arrows in the corresponding definition for \(F\)-algebras. When we build one with a particular coalgebraic structure in mind, note that to “glue” the maps \(A \rightarrow A^n\) together we need to take the product of the domains, rather than the coproduct of the codomains. As in the dual setting, we are not limited to signatures of true coalgebras, and may construct \(F\)-coalgebras for complex signatures.

Here is a comonoid, as an \(F\)-coalgebra; the 1 component of \(F(A)\) can be left out, as being terminal implies there is only one possible map to it from \(A\).

type Coalgebra f a = a -> f a
data ComonoidF mF = Diagset mF mF deriving Show
fCoalgComonoidOverIntegers :: Coalgebra ComonoidF Integer
fCoalgComonoidOverIntegers a = Diagset a a

main = putStrLn $ show (fCoalgComonoidOverIntegers 10)

For another example, consider the signature that gave us List, \(F(A) = 1 + (\mathbb{Z} \times A)\). When we consider coalgebras for this functor, we find we can view them as state machines - at each state in \(A\), they generate an element in \(\mathbb{Z}\) along with the next state, or they reach an end state \(1\).

The terminal object in the category of \(F\)-coalgebras, for the same \(F\), is the set of all sequences of elements of \(\mathbb{Z}\), including infinite lists, paired with a map which, given a sequence \((X_i)\), returns:

  • A pair consisting of the first element \(X_0\), and the rest of the sequence \((X_{i>0})\), if \((X_i)\) is non-empty,
  • The unique value of the terminal object \(1\), if \((X_i)\) is empty.

This is a Stream data type; as with List, we have here derived it from nothing but the type signature. Note that this is not just “Lists but possibly infinite”, as the coinductive/corecursive properties of streams are markedly different from the inductive/recursive properties of lists. The appropriateness of one over its dual is dependent on the use-case, but this duality is a strong hint that they are related.
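Dually to the Fix/fold story, the greatest fixed point and the anamorphism (unfold) can be sketched in Haskell; laziness is what lets the infinite sequences exist, and we observe them only through finite prefixes:

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- The same signature as for List: A ↦ 1 + (Int × A)
data ListF a = Nil | Cons Int a deriving Functor

-- Greatest fixed point: the carrier of the terminal coalgebra
newtype Cofix f = Cofix (f (Cofix f))

-- Finality: for any coalgebra there is a unique homomorphism into the
-- terminal coalgebra - the anamorphism, i.e. a generalised unfold
ana :: Functor f => (a -> f a) -> a -> Cofix f
ana coalg = Cofix . fmap (ana coalg) . coalg

-- A state machine that never halts: each state emits its value and
-- steps to the successor state
countFrom :: Int -> ListF Int
countFrom n = Cons n (n + 1)

-- Observe a finite prefix of a (possibly infinite) sequence
takeS :: Int -> Cofix ListF -> [Int]
takeS 0 _                  = []
takeS _ (Cofix Nil)        = []
takeS k (Cofix (Cons n s)) = n : takeS (k - 1) s

main :: IO ()
main = print (takeS 3 (ana countFrom 0))  -- [0,1,2]
```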

Typeclasses as (co)algebra pairs

For more complex types, we can use a combination of algebras and coalgebras to construct this “packaged” representation of them. We will refrain from the term bialgebra here, as this has connotations of special compatibility relations between the algebra and coalgebra structures. Instead, let’s refer to this as an \((F, G)\) pair, where an \((F, G)\) pair is given by a triple \((A, \alpha, \beta)\) such that \(\alpha: F(A) \rightarrow A\) defines an \(F\)-algebra, and \(\beta: A \rightarrow G(A)\) defines a \(G\)-coalgebra.

We saw above an example of a typeclass that cannot be written in this way - however, a great many typeclasses can. Consider the Enum typeclass:

class  Enum a  where
    succ, pred     :: a -> a
    toEnum         :: Int -> a
    fromEnum       :: a -> Int
    enumFrom       :: a -> [a]            -- [n..]
    enumFromThen   :: a -> a -> [a]       -- [n,n'..]
    enumFromTo     :: a -> a -> [a]       -- [n..m]
    enumFromThenTo :: a -> a -> a -> [a]  -- [n,n'..m]

As an exercise in what is possible here, we can rewrite this as follows (here, we just wrap the existing definitions):

-- We will put succ, pred, and toEnum in the F-algebra
data F a = Succ a
         | Pred a
         | ToEnum Int
-- Everything else will go in the G-coalgebra
data G a = Ga {
             gFromEnum :: Int
           , gEnumFrom :: [a]
           , gEnumFromThen :: a -> [a]
           , gEnumFromTo :: a -> [a]          
           , gEnumFromThenTo :: a -> a -> [a]
           }

-- We will make Bool into an Enum as a test
enumFAlg :: Algebra F Bool
enumFAlg (Succ x) = succ x
enumFAlg (Pred x) = pred x
enumFAlg (ToEnum x) = toEnum x

enumGCoalg :: Coalgebra G Bool
enumGCoalg x = Ga {
                 gFromEnum = fromEnum x
               , gEnumFrom = enumFrom x
               , gEnumFromThen = enumFromThen x
               , gEnumFromTo = enumFromTo x
               , gEnumFromThenTo = enumFromThenTo x
               }

shouldBeFalse = enumFAlg (ToEnum 0)
shouldBeTrue = enumFAlg (Succ shouldBeFalse)
shouldBeOne = gFromEnum (enumGCoalg shouldBeTrue)

main = putStrLn 
        $ show (shouldBeFalse) ++ "\n"
        ++ show (shouldBeTrue) ++ "\n"
        ++ show (shouldBeOne)

What \(F\)-(co)algebras are, and are not

Importantly, these abstractions are not a tool for doing other types of polymorphism. They are not a tool for handling generics (parametric polymorphism), or for handling subtyping. We’ve also seen they aren’t really a way to properly define algebraic structure either, though this is obviously a motivation.

In reality, they’re unfortunately both too general and not general enough an abstraction to be of much direct use in code, at least for everyday programming. We have seen that:

  • Despite covering the vast majority of useful cases, \((F, G)\) pairs over the same carrier set can still fail to endow it with an interface as rich as that which may be specified by a typeclass. For this reason they do not adequately solve the issue of ad-hoc polymorphism.
  • While their use in specifying particular distinguished types as initial algebras and terminal coalgebras is incredibly elegant, and extremely useful for proofs, the construction of these objects as a general procedure is non-trivial. The initial algebra can be realised as the colimit of the chain of consecutive applications of \(F\) to the initial object \(0\) - but actually computing this colimit is not easy, and I’m not aware of any languages in which we can just provide a functor and have the compiler determine an initial algebra.
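The chain in question can at least be written out by hand for the List signature; each step \(F^n(0)\) classifies lists of length below \(n\). A sketch, assuming Void for the initial object \(0\) and Maybe for the "1 +" part of the signature:

```haskell
import Data.Void (Void)

-- F(A) = 1 + (Int × A), with Maybe playing the role of "1 +"
type F a = Maybe (Int, a)

-- The chain 0 → F(0) → F(F(0)) → …
type F0 = Void   -- uninhabited: no lists at all
type F1 = F F0   -- only Nothing: the empty list
type F2 = F F1   -- lists of length < 2
type F3 = F F2   -- lists of length < 3

-- An inhabitant of F3 representing the list [1, 2]
example :: F3
example = Just (1, Just (2, Nothing))

-- The initial algebra (all finite lists) is the colimit of this chain;
-- Haskell gives us no construct that computes it from F alone.
main :: IO ()
main = print example
```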

There is also no nice syntactic sugar for working with them in Haskell. I suppose this is to be expected, given the above.

What they do give us, though, is a clean way to package information about certain typeclasses, a helpful language in which to talk about these classes, and very elegant formulations of common datatypes via initial and terminal objects. Knowing when constructions are dual to one another in this manner can be a serious aid to design thinking in general engineering, and I would argue this is the strength of the abstraction - as a lens through which to interrogate the appropriateness of a choice of datatype, and particularly to identify a dual approach to the problem. In the examples above, we saw that Stream and List are dual in this sense. Indeed, while they are inequivalent, there are scenarios where rejecting a List type in favour of a Stream generator is very appropriate, but may not be immediately obvious - the converse is true also.