We tend to think that programming languages are getting smarter every year. But are they? Many people also believe that the endless stream of open source frameworks makes us more productive. But is that the full truth?
It’s hard to measure productivity. And, of course, we can produce bigger and more beautiful systems than ever before. It can, however, still take a considerable amount of time to build something as routine as a business input form based on an underlying database model. Why is that?
The answer is in what our current languages actually do for us—or rather, what they don’t do for us. Besides basic arithmetic and logic operators, they manage objects in memory and make sure we can call methods from other methods. Sure, most languages can do a lot more, like closures, immutability, generics, co-routines, and so on. But all these features are just there to make programming easier, safer, or more fun. They do not contradict my basic premise that most languages today just process code to manage data in memory.
It becomes interesting when you ask yourself what we use these languages for. In any typical business or web application, we write and query data to and from databases; facilitate communication between client, server, and external systems; and implement some business logic. Business logic can be as complex as you want it to be: it can check constraints, do complex calculations, or involve very specific queries. But on average, those business logic functions make up no more than, say, twenty percent of the total. That’s because most of the code we write exists to handle data communication with the outside world.
To put this into perspective, we should look beyond current mainstream architectures. In the nineties we had fourth-generation languages (4GL) that had a native interface with the underlying database. Instead of calling an API to execute a handwritten query, we could just refer to tables and columns to get things done. It was even possible to define screens in terms of the underlying data model, automatically handling common use cases. True, these two-tier environments were not perfect and gave way to the three-tier services-based architectures that we tend to use today. But some are still in use, and they prove the theory that a programming language does not have to limit itself to managing internal memory.
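To make the contrast concrete, here is a sketch of what the three-tier style forces on us today: the database is an external API, so even a simple read means hand-writing a query string and mapping untyped rows back into an in-memory model. The driver interface below is hypothetical, loosely modeled on common SQL clients, not any particular library.

```typescript
// A hypothetical minimal database driver interface (illustrative only).
interface Db {
  query(sql: string, params: unknown[]): Promise<{ rows: any[] }>;
}

interface Customer {
  id: number;
  name: string;
  email: string;
}

// Today's style: the query is a string, and the result must be
// manually mapped from untyped rows into our in-memory model.
async function getCustomer(db: Db, id: number): Promise<Customer | null> {
  const result = await db.query(
    "SELECT id, name, email FROM customers WHERE id = $1",
    [id]
  );
  if (result.rows.length === 0) return null;
  const row = result.rows[0];
  return { id: row.id, name: row.name, email: row.email };
}

// In a 4GL, by contrast, the table was a language-level concept, roughly:
//   customer = customers[id]   // no SQL string, no mapping layer
```

The point of the comparison is not that SQL strings are bad, but that the language itself has no idea a `customers` table exists; everything above the comment line is ceremony the 4GL made unnecessary.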
But wait! We have frameworks, don’t we? They can make things easier, so we can write less code. Especially since the advent of the Internet and open source, an almost inconceivable amount of creative effort has been put into the thousands of frameworks and libraries out there, and some frameworks do pretty well. They help us implement services, load and submit screen forms, access databases, and other important stuff.
The problem is that frameworks won’t help you get rid of the eighty percent of source code that does not actually fall under business logic. They help you glue things together, but because all data still passes through every layer of an application (assuming a service-oriented architecture, of course), we still have to glue the frameworks to each other.
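The glue in question tends to look the same everywhere: the same record is re-declared and re-mapped in every layer it passes through. The sketch below is illustrative; the names and layers are assumptions, not taken from any particular framework.

```typescript
// Layer 1: the database row, as the driver returns it.
interface CustomerRow {
  id: number;
  name: string;
  email: string;
}

// Layer 2: the domain object used by business logic.
class Customer {
  constructor(
    public id: number,
    public name: string,
    public email: string
  ) {}
}

// Layer 3: the DTO serialized to the client across the service boundary.
interface CustomerDto {
  id: number;
  name: string;
  email: string;
}

// Three near-identical shapes and two hand-written mappings; none of
// this is business logic, it only ferries data between layers.
function rowToDomain(r: CustomerRow): Customer {
  return new Customer(r.id, r.name, r.email);
}

function domainToDto(c: Customer): CustomerDto {
  return { id: c.id, name: c.name, email: c.email };
}
```

Mapping frameworks can generate these functions for you, but they cannot remove the layers themselves, because the layers are baked into the architecture rather than the language.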
We do all that while secretly knowing that most of the code we write can be derived from the conceptual data model of the given system. Practitioners of model-driven development (MDD) know this. But they have their own problems, because they tend to rely on code generation, which indirectly keeps them stuck in third-generation languages and frameworks like the rest of us.
There is a saying: “Don’t try to fix the language.” It was used to criticize people building stuff, especially in C++, who distanced themselves so much from the underlying language that they eventually made things more complicated than they were to begin with. I think we can apply that saying to some of today’s frameworks. Although some do an amazing job, they cannot fundamentally extend the language with better abstractions. Frameworks were never able to really add garbage collection to a language. They can use tricks to help us with asynchronous behavior, but they cannot add real co-routines without the compiler’s help. By the same token, you cannot add persistency awareness or something like native communications to a language that is concerned only with internal memory.
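The asynchrony point can be made concrete in TypeScript, where both styles coexist: promises are a library-level trick, while `async`/`await` is the compiler turning straight-line code into a suspendable state machine. The two functions below do the same work; `load` is a placeholder for any asynchronous operation.

```typescript
// Library-only style: the control flow is encoded in chained callbacks.
// This is what frameworks could offer before the language stepped in.
function fetchTotalWithCallbacks(
  load: (key: string) => Promise<number>
): Promise<number> {
  return load("a").then((a) => load("b").then((b) => a + b));
}

// Compiler-assisted style: real coroutine-like behavior, because the
// compiler rewrites this function into a state machine for us.
async function fetchTotalWithCoroutines(
  load: (key: string) => Promise<number>
): Promise<number> {
  const a = await load("a");
  const b = await load("b");
  return a + b;
}
```

No library could have added the second form on its own; it required a change to the language. That is exactly the kind of change persistence and communication still wait for.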
What we need is better languages.
Not better in the sense of stealing features from languages invented in the sixties and seventies, which seems to be a popular hobby these days. All that does is change how we humans interface with the language.
Instead, we should develop new abstractions that make languages aware of their inescapable surroundings: databases and client-server communication. I know we might need some research here, and it may sound challenging. But if we do this better than in the nineties, there is huge potential to boost the productivity of application developers like never before, while at the same time improving reliability, maintainability, and quality. Let’s start thinking today.