How to write 80% less code

The list of attempts to make it possible to write less code is endless. Many of them use knowledge of the data model to automate a considerable part of database access, such as these:

  • Fourth generation programming languages (4GL)
  • Computer Assisted Software Engineering (CASE) tools
  • Model-driven development (MDD)
  • Domain-specific languages (DSLs)
  • Object-relational mapping tools (ORM)
  • Object-oriented databases (OODBMS)
  • Modern no-code and low-code environments (e.g., OutSystems and Mendix)
  • Etc.

And, of course, using the data model as a basis for standard behavior makes a lot of sense. Unless you think that passing data from one format to another, gluing strings together to form database queries, and manually handling the results of those queries should be regarded as business logic. I dare to claim that, on average, only 20% of the code we write is pure, actual business logic.

But, then again, why did these things never go mainstream? Where did they go wrong?

It’s not just due to being proprietary (vendor lock-in), slow builds (e.g., as a result of code generation), or not being general purpose enough. We can solve all of that with the right implementation and by making things open source. No, there are far bigger issues with all the attempts listed above.

And these are the subject of my recent book, Vertically Integrated Architectures. Instead of proposing yet another variant on the same theme, I went back to the drawing board, so to speak, and asked some fundamental questions, such as: Why do many of these solutions need manual performance tuning in the end? How compatible is a model-driven solution with a heterogeneous application environment? How general purpose can we make them? And what is the fundamental cause behind the so-called database impedance mismatch?

The core of my analysis comes down to these two challenges:

  1. Data model versioning: Any solution that only supports a single version of the data model will not be able to automatically cope with data model migrations against external systems and non-web clients.
  2. When to compile: Any solution that tries to generate and/or compile code, ahead of any client request coming in, is doomed to be inefficient as soon as the requests get more complex.

Let’s go into more detail on both subjects.

1. Data model versioning

Imagine a completely isolated application with no external interfaces that only supports its own web client. In such a scenario, changes to the data model never result in compatibility issues. You might only have to create a script to migrate a test or production database.

However, as soon as we add an external API or a native (mobile) client, we eventually have to cope with API versioning. This is because data model changes are likely to impact the API, and it is not always possible to force external systems to keep the same release schedule. Even when we deal with a system under the control of the same owner (company, department), not being dependent on each other’s release schedule is, in a way, the whole point of having subsystems.

So, what do we do in such a scenario? We add a version number to the API, and we write extra code to convert data where needed. That means there must be a place to put that code – the service implementation. This is why code generation doesn’t work in such scenarios. Code generated from the current version of the data model cannot automatically cope with older versions of that same model.
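To make this concrete, here is a minimal sketch of the kind of hand-written conversion code that ends up living in a service implementation. The field names, version numbers, and migration steps are all hypothetical:

```python
# A sketch of the manual version-conversion code that API versioning
# forces into the service layer. The fields and versions are invented.

def upgrade_customer_payload(payload: dict, version: int) -> dict:
    """Convert an incoming customer payload from any supported API
    version to the current (v3) shape of the data model."""
    if version <= 1:
        # v2 split the single "name" field into first and last names.
        first, _, last = payload.pop("name", "").partition(" ")
        payload["first_name"] = first
        payload["last_name"] = last
    if version <= 2:
        # v3 renamed "zip" to "postal_code" to support non-US clients.
        payload["postal_code"] = payload.pop("zip", None)
    return payload

# An old v1 client still sends the original shape:
v1_request = {"name": "Ada Lovelace", "zip": "12345"}
current = upgrade_customer_payload(v1_request, version=1)
# current == {"first_name": "Ada", "last_name": "Lovelace",
#             "postal_code": "12345"}
```

Every supported older version adds another branch like this, and none of it can be generated from the current model alone, because the generator has no knowledge of the model's history.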

I dedicated a whole chapter of my book to the solution that I propose – making a data model aware of its own version history.

2. When to compile

The when-to-compile dilemma relates to the languages we use. No matter how fancy the language, 3GL programs get translated into in-memory instructions – allocate memory, do some calculations, copy data from one memory location to another. As soon as we want to persist data, we must manually write code to call a database API. And this is where the trouble starts. We could save a lot of code if the language automatically read from and wrote to the database whenever needed. But a 3GL compiler doesn’t look ahead. It does not know that, after loading customer data, we also want to load the customers’ orders and related product data.
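The plumbing this paragraph describes looks something like the sketch below: call a database API by hand, then copy each row into an in-memory structure field by field. None of it is business logic. The schema and data are invented for illustration (sqlite3 is used only to keep the example self-contained):

```python
# Hand-written glue between a 3GL and a database API: build a query,
# run it, and manually map the result row into an object.
import sqlite3
from dataclasses import dataclass

@dataclass
class Customer:
    id: int
    name: str
    email: str

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT, email TEXT);
    INSERT INTO customer VALUES (1, 'Ada', 'ada@example.com');
""")

def load_customer(customer_id: int) -> Customer:
    # One roundtrip, compiled in isolation: the compiler cannot know
    # that the caller will want this customer's orders next.
    row = db.execute(
        "SELECT id, name, email FROM customer WHERE id = ?",
        (customer_id,)).fetchone()
    # Field-by-field copying from database format to in-memory format.
    return Customer(id=row[0], name=row[1], email=row[2])

ada = load_customer(1)
```

A persistence-aware language could generate all of this, but only if it could analyze more than one statement at a time.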

Object-relational mappers (ORMs) and object databases tried to solve this. But without special trickery (smart caching, API options, hints, etc.), they simply fail, because the language, and thus the compiler, cannot know whether certain queries to the database should be combined. Remember that network roundtrips are extremely costly! On the one hand, we need so-called lazy loading to avoid loading the whole database for every query; on the other hand, we cannot do without combining data requests (eager loading) to get reasonable performance in all scenarios.
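The trade-off can be shown in miniature. Below, a lazy strategy issues one query per customer (the classic N+1 problem), while an eager strategy combines everything into a single join. The data and schema are invented, and sqlite3 stands in for a remote database where each query would be a costly roundtrip:

```python
# Lazy vs. eager loading: same data, very different roundtrip counts.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER);
    INSERT INTO customer VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO orders VALUES (10, 1), (11, 1), (12, 2);
""")

def lazy_load():
    """N+1 roundtrips: one for the customers, then one per customer."""
    roundtrips = 1
    result = {}
    for cid, name in db.execute("SELECT id, name FROM customer"):
        roundtrips += 1
        result[name] = [oid for (oid,) in db.execute(
            "SELECT id FROM orders WHERE customer_id = ?", (cid,))]
    return result, roundtrips

def eager_load():
    """One roundtrip: the join lets the database combine the work."""
    result = {}
    for name, oid in db.execute(
            "SELECT c.name, o.id FROM customer c "
            "JOIN orders o ON o.customer_id = c.id"):
        result.setdefault(name, []).append(oid)
    return result, 1

assert lazy_load()[0] == eager_load()[0]  # identical data either way
```

Neither strategy is right in general: which one wins depends on the shape of the whole request, which is exactly what a 3GL compiler never gets to see.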

The only way out is a language that can be analyzed in the context of a complete service request, just as database queries (SQL, for example) are only analyzed and compiled based on the full request that comes in.

To conclude

This blog is obviously too short to explain every detail behind the above analysis, and certainly not sufficient to go into the solutions that I propose. For that, you will have to excuse me: you should read my book.

Amazon: https://amzn.to/2LWMTxn

Apress ePub (DRM free): https://www.apress.com/9781484242513

4 thoughts on “How to write 80% less code”

  1. At the time these tools could have made a difference, they were priced out of reach except on rare occasions. These technologies could have had a much greater impact if their creators had been less greedy. It is hard to overcome that lost window of opportunity, because most devs can develop without the high-powered, expensive tools. Now you have low-cost providers of these tools, but the craftsmen have already learned to make do without…

    1. I agree price was a struggle. And, for example, Mendix and Outsystems are still expensive.

      But as I pointed out, it is not only that. I do NOT promote the tools from then and now. I just analyze what’s wrong with them and suggest solutions (blog and book). I do believe we’re a bit stuck with 3GL languages because of the way they work.

  2. I just signed up for your newsletter and bought the book. Perhaps you have proposed this in your book, which I haven’t started yet. But I wanted to get a high-level comment from you about the possibility of a low-cost solution toward writing less code. And does it need to work with the legacy 3GL environment we’re currently in?

    Thanks 🙂

    1. First of all, thanks for buying the book. I hope it will inspire you. Let me know what you think after reading it (a review, even a short one, on Amazon is also very appreciated).

      By low-cost, do you mean compared to the very expensive low-code environments on the market? I think cost is indeed a factor in the failure of 4GL and low-code so far. But as you will see in the book, there are other, more fundamental issues with low-code. The solution, I think, would be for the open source community to get interested in building such tools and developing an accompanying programming language.

      Compatibility with 3GL can first of all be achieved with web services. In that respect the world is so much more ‘open’ and connected than it was with 4GL in the nineties. Other compatibilities can be found in a 3GL client on top of a low-code backend, and/or in ways to ‘plug in’ 3GL code.
