Using tools from category theory, we provide a framework in which artificial neural networks and their architectures can be formally described.

We first define the notion of machine in a general categorical context and show how simple machines can be combined into more complex ones. We explore finite- and infinite-depth machines, which generalize neural networks and neural ordinary differential equations.

Borrowing ideas from functional analysis and kernel methods, we build complete, normed, infinite-dimensional spaces of machines, and we discuss how to find, within those spaces, optimal architectures and parameters to solve a given computational problem.

In our numerical experiments, these kernel-inspired networks can outperform classical neural networks when the training dataset is small.