How is code born? What do good developers look for when they write it? The development of a system can be characterised by its trade-offs: some systems cannot be slow, others cannot fail, others cannot be too expensive. Every system wants to have everything, obviously. But, obviously, it cannot.
Before we dig into this, let’s settle some terminology first.
Systems, architectures, and domains
- System: “a set of things working together as parts of a mechanism or an interconnecting network; a complex whole.” 1
- Architecture: quoting Booch 2, “An architecture is the set of significant decisions about the organization of a software system, the selection of the structural elements and their interfaces by which the system is composed, together with their behaviour as specified in the collaborations among those elements, the composition of these structural and behavioural elements into progressively larger subsystems, and the architectural style that guides this organization—these elements and their interfaces, their collaborations, and their composition”.
- Problem Domains: quoting Wikipedia, “A problem domain is the area of expertise or application that needs to be examined to solve a problem.” 3
With these definitions, we can classify any given project according to how it fits into these categories. An operating system, for example, attempts to solve the problem of running multiple reprogrammable programs concurrently on a given piece of hardware; as an architecture it can be divided into the scheduling problems, the virtual memory management, or the drivers, all of which build up to the System as we use it.
The concerns of a project
We write code because we need to solve a problem: automation, databases, telecommunications, websites, games, servers, or A.I., just to name a few. Each problem calls for different decisions, from different architectures, to build different systems, to finally solve our problem domain. Each problem has different requirements and can be solved with different tools.
We usually write code, obviously, in a given language. There are Domain Specific Languages, intended to solve one specific problem, like HTML for webpage rendering or XML for markup; and General Purpose Languages, intended to solve just about anything, like most mainstream languages (C++, Haskell, Java, and whatnot). But each language, even if General Purpose, carries along certain decisions about the architecture it was made for, suiting certain problem domains better while imposing certain architectural constraints.
There are also meta-requirements in any coding project, not directly related to its immediate concerns like resource management, concurrency requirements, event handling, or pretty GUIs; rather, I call “meta” things like development costs, production demands, teamwork, and the ever-lasting concerns of failures and performance, as we all want fault-less, lightning-fast systems after all.
The gestation stages of a piece of code
A trend in the industry is to pay a performance penalty in order to gain a certain development facility. A ubiquitous example these days is the Garbage Collector: resource management is not only complicated but also incredibly dangerous, therefore certain architectures trust an external agent, the Garbage Collector, to keep track of resource ownership and release what is no longer needed, automatically and safely. But a GC might have bugs as well: it might release things too early, it might itself consume too many resources, or it might interact badly with non-GC code, releasing things that the non-GC code still needs. All this must be weighed: performance under the risk of erroneous resource management, or automatic resource management under a performance penalty and the risk of (very few, anyway!) errors? When is the complexity of manual resource management worth the performance, and when is the performance penalty close to nothing in comparison to a substantial improvement in resource management?
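To make the trade-off concrete, here is a minimal Haskell sketch (the file path is made up): even in a garbage-collected language, a scarce resource such as a file handle is often released deterministically with `bracket`, rather than left for the collector to reclaim at some unknown later point.

```haskell
import Control.Exception (bracket)
import System.IO (IOMode (ReadMode), hClose, hFileSize, openFile)

-- Deterministic release: the handle is closed as soon as the action finishes
-- (even if it throws), instead of waiting for the garbage collector to notice.
fileSize :: FilePath -> IO Integer
fileSize path = bracket (openFile path ReadMode) hClose hFileSize

main :: IO ()
main = fileSize "example.txt" >>= print  -- "example.txt" is a hypothetical file
```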
Modern advances also need to be taken seriously into account: are we making our product scale for the future? How much concurrency and parallelism is necessary, and how much benefit will they bring to the product? Locking paradigms for multi-threading have serious trouble scaling, and new models are being developed and popularised, like Transactional Memory or Message Passing. And if you’ve heard they’re slow, remember: locking is fast only because we built hardware support for it; in the past, locks used to be spin locks with scheduler dequeueing, kernel support, and interrupt disabling. The price!
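As a taste of the transactional style, here is a minimal Haskell sketch using the standard stm library (the account balances are invented): two shared variables are updated in one atomic transaction, with no locks appearing in user code.

```haskell
import Control.Concurrent.STM (TVar, atomically, newTVarIO, readTVar, writeTVar)

-- Move an amount between two shared balances in one atomic transaction.
-- If another thread interferes, the transaction retries; no explicit locks.
transfer :: TVar Int -> TVar Int -> Int -> IO ()
transfer from to amount = atomically $ do
  a <- readTVar from
  b <- readTVar to
  writeTVar from (a - amount)
  writeTVar to   (b + amount)

main :: IO ()
main = do
  alice <- newTVarIO 100   -- hypothetical starting balances
  bob   <- newTVarIO 0
  transfer alice bob 30
  final <- atomically ((,) <$> readTVar alice <*> readTVar bob)
  print final              -- (70, 30)
```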
Every language, as well, imposes an architecture. There are different paradigms of programming, suited to different problem domains, that different languages stand for: from the old Von Neumann style of imperative programming, to the (merely glorified imperative) style of Object Orientation, whose core ideas are state and sequentiality; Event-Driven frameworks for push-pull information or UIs; the Concurrency-Oriented style for distribution; or the functional style for modelling transformations 4.
In parallel to software scalability, pun intended, it is important to consider meta-scalability: how many people need to be involved? Do we assign the whole system to the whole team? Or do we divide the architecture into modular elements that can be split across the developers? How do we separate concerns? The same way we talk about good coding practices, we need to consider good team-organisation practices: will the mistakes of one developer affect the good work of another? A lesson might be taken from the Erlang ecosystem: programs are designed to have an arbitrarily large number of processes, and semantic facilities allow concurrency to be put in some modules and sequentiality in others; therefore, the experts can work on the concurrent parts of the program, classically harder, and the newbies on the easier sequential parts. In contrast, many frameworks are only as strong as their weakest developer, a weakness that needs to be addressed.
And, also importantly, how do we share and synchronise the work being done? Name it: a Version Control System. There are centralised VCSs and distributed VCSs. There are VCSs that promote branching and experimenting, and that let unrelated code be kept separate until merges are decided. There are VCSs that are fault-tolerant, consistent, and quick. Say its name. I’m talking about git 5.
The life and evolution of a piece of code
Clients evolve, and with them, the requirements of a project. But code might be a chaos, and adding new features can be a daunting task that prevents the project from evolving, which in the end just shortens its lifespan. There will be those horrible days when the system is crashing or, worse, doing the wrong thing unnoticed. We then need to come back to our code and touch it: to add things, to remove things, to fix things. But what if the code turns out to be untouchable?
Architectural decisions are of extreme importance when the system is designed, in order to plan for the future. An architecture needs to be designed to be ready for extension, while making sure that future changes don’t accidentally break what was working correctly. Several of the glorious S.O.L.I.D. principles of Object-Oriented programming are all about this. “Open for extension, closed for modification”, for example, tells us that the architecture should be designed so that adding new features requires no changes to previous code, therefore avoiding risks of breakage; instead of modified, code should be extended 6.
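A minimal Haskell sketch of the same principle (the shapes are invented examples): new behaviour is added purely by adding new code, while the class and the existing instances stay untouched.

```haskell
-- Open for extension, closed for modification: the Area class and the
-- existing instances never change when a new shape is added.
class Area a where
  area :: a -> Double

newtype Circle = Circle Double         -- radius
data    Rect   = Rect Double Double    -- width, height

instance Area Circle where
  area (Circle r) = pi * r * r

instance Area Rect where
  area (Rect w h) = w * h

-- Later, a new requirement arrives: triangles. We only *add* code.
data Triangle = Triangle Double Double -- base, height

instance Area Triangle where
  area (Triangle b h) = b * h / 2

main :: IO ()
main = mapM_ print [area (Circle 1), area (Rect 2 3), area (Triangle 3 4)]
```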
Automated testing is an essential requirement, and it is very good at preventing breakage of distant code: make a change in your code, then run the tests to ensure that the things that shouldn’t change still behave exactly as expected. And testing doesn’t fall short of benefits: a Test-Driven Development approach will make sure the software meets its expectations, that it does what it is supposed to do.
These tests need to be quick, informative, and correct, or the developer won’t trust them, or won’t bother waiting an eternity to check them. The tests need to be flexible enough to evolve with the project, or the developer won’t bother changing them when it’s required, and the tests will be left failing and abandoned. To keep all of this out of trouble, the architecture of the project needs to be designed to be testable to begin with: dependencies should be reasonably easy to mock, and Dependency Injection techniques should be ubiquitous in the code base to facilitate both mocking and the final testing.
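As an illustration only, here is a minimal Haskell sketch of Dependency Injection for testability (the names `LookupPrice`, `totalPrice`, and `fakeLookup` are all hypothetical): the business logic receives its dependency as a parameter, so a test can hand it a fake instead of a real database.

```haskell
-- The dependency is just a function; production code passes a real lookup,
-- the test passes a canned one. No framework needed.
type LookupPrice = String -> IO (Maybe Double)

totalPrice :: LookupPrice -> [String] -> IO Double
totalPrice lookupPrice items = do
  prices <- mapM lookupPrice items
  pure (sum [p | Just p <- prices])

-- A fake dependency for tests: deterministic, fast, no database required.
fakeLookup :: LookupPrice
fakeLookup "apple"  = pure (Just 1.0)
fakeLookup "banana" = pure (Just 0.5)
fakeLookup _        = pure Nothing

main :: IO ()
main = do
  total <- totalPrice fakeLookup ["apple", "banana", "unknown"]
  if total == 1.5
    then putStrLn "ok: totalPrice behaves as expected"
    else putStrLn ("FAIL: got " ++ show total)
```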
An architecture should also be modular if it wants to be evolvable. We see this concept in Erlang again, and going back as far as 1985 7, where it is argued that modularity is a requirement for fault-tolerance: when failures ensue, modules encapsulate them, keeping failure costs low; and modules are replaceable, making failures easier to fix. Barbara Liskov said (more than) once 8:
I kind of envied electrical engineers […], because there was absolutely no structure superimposed on our computer programs, and so you could just do anything, it was infinitely plastic. Whereas I thought the engineers, they have to work with components and connect them by wires, and this forced a certain kind of discipline in the practice of organising things, that was totally lacking in the software world.
At last, code can be formally analysed, enabling a whole world of checks, both at compile-time and at run-time. I’m talking mostly about Type Systems: a formal logic system that can check and analyse formal properties of your program. The Curry-Howard correspondence ensures that talking about types is the same as talking about propositional logic, of first or any higher order for that matter, a science that has been thoroughly analysed and studied. Therefore, deciding whether a program type-checks is equivalent to deciding whether a logical proposition is provable; hence, ensuring we don’t write invalid propositions amounts to ensuring we don’t write nonsensical programs: we detect errors. Types are also a design language, a specification and documentation language (the type of an entity tells us a lot about that entity!), and, their biggest merit, an amazing maintenance tool: change something in one place, and the type-checker will tell you if you broke something somewhere else! A gold standard of strong static typing these days is Haskell, a language from which we can learn a lot about correctness 9.
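A tiny Haskell sketch of types as documentation and as a maintenance net (the domain is invented): wrapping plain numbers in distinct newtypes lets the compiler reject code that mixes them up.

```haskell
-- Distinct types for distinct meanings: the compiler now refuses to
-- add euros to seconds, even though both are just numbers underneath.
newtype Euros   = Euros   Double deriving (Show)
newtype Seconds = Seconds Double deriving (Show)

charge :: Euros -> Euros -> Euros
charge (Euros a) (Euros b) = Euros (a + b)

main :: IO ()
main = do
  print (charge (Euros 10) (Euros 2.5))    -- fine: Euros 12.5
  -- print (charge (Euros 10) (Seconds 3)) -- rejected at compile time
```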
These three features alone, Type Systems, Modularity, and Automated Testing, are what really keep high quality standards, good maintainability, long life, and low future costs in any given project.
There are also the performance concerns. Before anything can be said about performance, before we get too obsessed with it, one thing needs to be said first: premature optimisation is the root of all evil! These are not my words, they’re Donald Knuth’s 10. Only when performance really is a concern, and we have actually profiled and analysed that concern, can we discuss this topic. The architects need a thorough knowledge of data structures and algorithms, and complex decisions must be made about how we model our problem domain. Once clear and well-known data structures and algorithms are chosen, we can start wondering about our technological stack: GC? Multi-threading? Hardware? Going “low-level” is perhaps not a thing of the present anymore, but working more closely with our tool-chains will be important: do know your compilers, discover their secrets. Compilers are often incredibly huge toolboxes full of useful surprises. And if you start scratching your head about a hardware architecture, remember: you get more performance with a given architecture just because we have built hardware for that architecture. So just look forward to more functional and less imperative hardware. If this is still a problem (is it? Really? Are you sure? Well, ok, maybe, sometimes…), go for your filthy little assembly dreams.
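When profiling does point at a hot spot, the remedy is often a better data structure rather than cleverness; here is a small Haskell sketch (the data is invented) contrasting a linear lookup over an association list with a logarithmic lookup in Data.Map.

```haskell
import qualified Data.Map.Strict as Map

-- The same lookup, twice: O(n) over an association list,
-- O(log n) over a balanced map. Measure first, then choose.
users :: [(Int, String)]
users = [(i, "user" ++ show i) | i <- [1 .. 100000]]

userMap :: Map.Map Int String
userMap = Map.fromList users

main :: IO ()
main = do
  print (lookup 99999 users)        -- walks the whole list
  print (Map.lookup 99999 userMap)  -- descends a balanced tree
```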
The death of a piece of code
I once heard Joe Armstrong make a very nice comment in some talk: the senior developers of the previous generations should be awarded as national heroes for the amount of work they, and no one else, have created: the legacy code! Just look at the market: much more work goes into keeping old stuff alive than into developing new things. This is painful, and it is important to see the point: either we develop techniques to deal with legacy code, or we decide when, if ever, is the moment to kill this legacy.
Studies have been made on the former option. For example, there is an analysis that orders code by the uses-what and is-used-by relationships between functions and variables, which can be modelled by a mathematical lattice of Concepts; we can then analyse its order graph and extract disconnected sub-graphs, automatically improving modularisation 11. Some compilers, like many C++ ones, also implement a called-by ordering of all procedures in the code, which likewise builds an order on the set of all procedures, whose graph structure can again be analysed and restructured.
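This is not the concept-lattice construction of the cited paper, but a toy Haskell sketch of the underlying intuition (all function and variable names are invented): group procedures that share variables, and treat disconnected groups as candidate module boundaries.

```haskell
import qualified Data.Map.Strict as Map
import qualified Data.Set as Set

-- Hypothetical uses-what relation: each function maps to the variables it touches.
uses :: Map.Map String (Set.Set String)
uses = Map.fromList
  [ ("openAccount",  Set.fromList ["balance", "owner"])
  , ("deposit",      Set.fromList ["balance"])
  , ("renderHeader", Set.fromList ["title"])
  , ("renderFooter", Set.fromList ["title", "year"])
  ]

-- Functions are related when they share a variable; the connected components
-- of that relation suggest candidate module boundaries.
components :: [Set.Set String]
components = go (Map.keys uses) []
  where
    go []     acc = acc
    go (f:fs) acc
      | any (Set.member f) acc = go fs acc          -- already placed in a group
      | otherwise              = go fs (grow (Set.singleton f) : acc)
    grow comp =
      let vars   = Set.unions [ Map.findWithDefault Set.empty f uses
                              | f <- Set.toList comp ]
          comp'  = Set.fromList [ f | (f, vs) <- Map.toList uses
                                    , not (Set.null (Set.intersection vs vars)) ]
          merged = Set.union comp comp'
      in  if merged == comp then comp else grow merged

main :: IO ()
main = mapM_ print components
-- Expected groups: {deposit, openAccount} and {renderFooter, renderHeader}
```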
Then there is the key moment: the code reaches the end of its life. This is easy when there is nothing to lose: there are no clients of this code, there is no system using it, its responsibilities are not necessary anyway. It is nearly impossible when the code is used by numerous banking systems (read: COBOL), or by cutting-edge performance programs (read: FORTRAN). If the system is needed and the shut-down costs outweigh the maintenance costs, a third cost needs to be considered: that of rebuilding the entire system on a different architecture, hopefully to positive reviews.
Some final words
In the end, I leave the final decision to management (after all, I’m only a scientist, not a businessman), but in the meantime, I’m going to make sure all cards are on the table when managerial decisions are made. That is what I can do, as a mere scientist: talk facts.
- This will all be a topic of the future!
- One GIT to rule them all!
- Not to glorify O.O. design; indeed, I consider it nothing more than a glorified imperative Von Neumann machine, full of state and assignments, lacking all sorts of safety, and whatnot.
- Jim Gray. Why do computers stop and what can be done about it? Technical Report 85.7, Tandem Computers, 1985.
- Strong-typing best practices are most often seen in purely functional languages, like Haskell, OCaml, F#, and old-school Standard ML. These are statically typed, while other languages like Perl and Python are dynamically typed: that is, type-checking happens at compile time in the former and is deferred to run-time in the latter. We’ll see much more on these in the future.
- Donald Knuth. Structured programming with go to statements. Stanford University, 1974.
- Christian Lindig et al. Assessing Modular Structure of Legacy Code Based on Mathematical Concept Analysis, 1997