Although synthetic biology is a relatively new field, it has been around long enough to accumulate many competing definitions of what it actually is. Often these are far too prescriptive, or miss the point entirely. Rather than try to claim an authoritative definition, I want to offer a personal perspective, one that comes from originally training as a chemist.

Many of those who founded and developed the field in its early days came from outside biology, in particular from computer science or electrical engineering. To them biology was a mess, a jumble of observations on model systems with few unifying principles. Any attempt to design or engineer biological systems was fraught with unpredictable consequences, meaning trial and error was the only viable strategy.

As engineers, they saw the need to impose order on these systems by establishing well-behaved, modular units that could be combined predictably to deliver a desired outcome. In particular, a degree of abstraction was needed, taking inspiration from computer programming.

Computer code operates at different levels of abstraction. At the lowest level, instructions are written in ‘machine code’: extremely dense binary that specifies every detail and is all but impenetrable to a human reader. In contrast, the ‘high-level’ programming languages used day to day consist of sets of functions that conceal the underlying ‘low-level’ machine code. These functions can be strung together, according to the syntax of the language, to produce the desired behaviour of the program.
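
To make that layering concrete, here is a small illustration of my own (it shows Python bytecode rather than true machine code, but the idea of a hidden lower level is the same): the built-in `dis` module reveals the low-level instructions concealed beneath a single readable line.

```python
# A toy illustration of abstraction layers: one readable high-level
# function, and the lower-level instructions hidden beneath it.
import dis

def celsius_to_fahrenheit(c):
    # One high-level statement a human can read at a glance...
    return c * 9 / 5 + 32

# ...which Python actually executes as a sequence of low-level opcodes:
dis.dis(celsius_to_fahrenheit)
```

The programmer never needs to think about the opcodes; that is the whole point of the abstraction.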

In a similar fashion, synthetic biologists wished to have biological modules possessing functions that were abstracted from the underlying code, be it DNA, RNA, or amino acids.
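
I should stress that the following is only a cartoon of my own devising, with invented part names and placeholder sequences rather than any real standard, but it captures the spirit of that abstraction: compose named functional parts, and never touch the raw bases.

```python
# A hypothetical sketch of 'parts-level' design: each part hides its
# low-level DNA sequence behind an abstract name and role. All names
# and sequences below are placeholders, not real biological parts.
from dataclasses import dataclass

@dataclass(frozen=True)
class Part:
    name: str      # the abstract, human-facing label
    role: str      # e.g. promoter, RBS, CDS, terminator
    sequence: str  # the concealed low-level DNA 'machine code'

def assemble(parts):
    """Compose parts in order; the designer thinks in roles, not bases."""
    return "".join(part.sequence for part in parts)

# Designing at the abstract level: promoter -> RBS -> gene -> terminator.
device = assemble([
    Part("pX", "promoter", "TTGACAGCTAGCTCAGTATAAT"),  # placeholder
    Part("rbsY", "RBS", "AAAGAGGAGAAA"),                # placeholder
    Part("gfp", "CDS", "ATG...TAA"),                    # placeholder, not a real ORF
    Part("termZ", "terminator", "CCAGGCATCAAATAAAACG"), # placeholder
])
print(device)
```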

This (slightly old) YouTube video featuring Drew Endy gives, in my opinion, one of the most succinct explanations of synthetic biology.

As a chemist, I like to draw an analogy with how organic chemistry had its own ‘synthetic’ revolution. In its early days, once chemistry had moved away from the quackery of alchemy, it was much like biology was during the last century: research was largely a process of cataloguing and characterisation. There was a whole world of fascinating chemistry out there, but we had no real way of rationalising or harnessing it.

As with synthetic biology, we can identify several enabling developments that allowed the move towards truly synthetic chemistry. First: technology. Everything became much faster and more tractable once we could quickly characterise our products. Elemental analysis, mass spectrometry, NMR and X-ray crystallography became essential tools for the synthetic chemist.

Second: abstraction. Retrosynthetic analysis is a great example of what we would love to have in biology. With retrosynthesis, we take the molecule we would like to make and break it down sequentially into smaller fragments, based on known chemical reactions that generate the desired connections between fragments, until we arrive at suitable starting materials.
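
Mechanically, retrosynthesis is a recursive decomposition, and a toy sketch makes the logic clear. The ‘molecules’ and ‘disconnections’ below are made-up labels standing in for real chemistry:

```python
# A toy retrosynthesis: break the target into fragments via known
# 'disconnections' until every branch reaches a purchasable starting
# material. The reaction table here is invented for illustration.

# product -> (reaction that would form it, fragments it disconnects into)
DISCONNECTIONS = {
    "target_ester": ("esterification", ["carboxylic_acid", "ethanol"]),
    "carboxylic_acid": ("oxidation", ["primary_alcohol"]),
}
STARTING_MATERIALS = {"ethanol", "primary_alcohol"}

def retrosynthesise(target, depth=0):
    indent = "  " * depth
    if target in STARTING_MATERIALS:
        print(f"{indent}{target}  (available starting material)")
        return
    reaction, fragments = DISCONNECTIONS[target]
    print(f"{indent}{target}  <- {reaction} of {' + '.join(fragments)}")
    for fragment in fragments:
        retrosynthesise(fragment, depth + 1)

retrosynthesise("target_ester")
```

Each branch terminates when it reaches something we can buy; reading the printed tree from the leaves upwards gives the forward synthesis.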

Now I don’t want to give the impression that you just do the retrosynthetic analysis and then everything will join up perfectly. There’s a lot of chemical intuition involved—multiple synthetic routes are usually possible—and reactions can always be unpredictable. The use of computers to deduce syntheses should eventually solve some of these problems, although computer-assisted synthesis has been a long time in the making.

Ultimately, though, this is where we are now heading in biology: to take a desired biological structure or function, break it down methodically into manageable constituent parts, and then assemble those components in a predictable and tractable manner. Only then can we truly begin to explore the vastness of biological space, and perhaps uncover the unifying principles responsible for life itself.