Judging by anecdotal evidence, the pace of innovation in programming environments and languages is slowing down. For the past two decades, we have seen programming techniques consolidate around a single paradigm, the object-first style. Even the old guard is trying to do objects – exemplified by how the GNOME folks tried to bend C this way.
This probably applies to most job markets in the programming world, but it is especially evident here in Finland, the promised land of mobile networks and rural broadband. Look at the availability of jobs in niche languages and environments at American companies, though, and the story is different: even languages considered “niche” are worth seeking a job in.
Speaking of niche, it is quite telling that the Top 3 Most Loved languages in Stack Overflow’s Developer Survey are all functional-style, niche languages. Compare that to the Most Used category and it is very evident that the wishes of developers and the reality of companies and professional teams do not meet.
It’s the teams, stupid
What are the reasons for this slowdown of innovation in advancing the science and practice of software engineering? Sometimes project management makes all sorts of technical decisions it is not wholly qualified to make, so you end up with more of the same. Quite often, the actual wishes of the team that executes the project are not taken into consideration.
In an ideal world, every new project would begin with a systematic evaluation in which developers find out whether the old ways of building things are still good enough or whether the new project demands a new set of capabilities.
During the early architectural design phase, it is not uncommon to hear: “We’re a XXX team, so it’s obviously a XXX solution.” This needs to be challenged. Otherwise, we would still be writing websites and REST services in C.
Architectural design decisions often revolve around technologies, frameworks, modularity and decomposition. All of these aspects ought to be open for discussion. Will the JVM provide enough parallelism to fulfil the specs while avoiding the scaling bottlenecks of Redis? If tough questions are not asked upfront, the risk of growing pains somewhere down the road only increases.
Patterns behind popularity
Is it just a coincidence that today’s Top 10 programming languages are dominated by object-first languages? I’d wager it is the result of a long evolutionary journey. Objects existed before C++, but until the immense popularity of C was mashed together with Stroustrup’s pragmatic object model to form C++, object-oriented programming was mostly a niche pursuit.
The obvious thing about popularity is that dominant environments, frameworks and languages dominate because they are already the biggest, and the effect just gets compounded over time. Want to hire coders for a niche language? The typical avenues for recruiting may not be your friend: people flock to such positions because they genuinely enjoy working with the environment in question.
What can we do better?
Involve your team in the early architectural design phase – if you do any upfront design at all. Find out which tradeoffs you are willing to make in exchange for the New Shiny Things. Give your team the necessary leeway to actually learn new things, not just to adapt to the latest and greatest JS frameworks. Learning new things and challenging oneself can be a much better motivator than money or even GitHub fame.
All the old stuff (from the 1990s) is already popular enough – do not push educational institutions to offer more of the same, but rather to focus on the widest possible skill sets applicable to future computing problems. Back in the 1980s, Gerald Sussman at MIT decided to start teaching in Scheme, a rather impractical functional language, to the dismay of many industry players. Finnish universities should have a critical discussion about this as well.
What Sussman was able to achieve at MIT was to let students concentrate on the actual problem domain instead of fighting the particularities of the language. With the right tools, there is less cognitive load on the programmer, which translates into a better capability to actually tackle business problems. Developing better programming models and environments (and no, it’s not just about the IDE) will benefit not just the programmers but also the stakeholders.
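As a toy illustration of that cognitive-load argument (the example is mine, not from the survey or from Sussman’s course), consider the same small task expressed in an index-driven style and in a declarative, functional style – the second version simply states the problem:

```python
# Hypothetical task: total revenue from orders over 100.
orders = [120.0, 80.0, 250.0, 99.0, 300.0]

# Low-level, index-driven style: the reader must track loop state by hand.
total = 0.0
i = 0
while i < len(orders):
    if orders[i] > 100.0:
        total += orders[i]
    i += 1

# Declarative, functional style: the code states the problem directly.
total_functional = sum(order for order in orders if order > 100.0)

assert total == total_functional
print(total_functional)
```

Both compute the same number, but only the first forces the programmer to verify loop bounds and mutation by hand – exactly the kind of incidental complexity a better language design removes.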
Back to looking at the big picture
If we fail to become more flexible and adaptable as programmers, the result will be a general ossification of skills and a lack of vision. The last time this happened, thousands of Symbian coders were tasked with building mobile user interfaces back in the day, and all these people knew was a special flavour of C++.
Nobody can predict the future, but as we are living through this phase of increasing complexity of software and the end of Moore’s Law, it is becoming increasingly obvious that the software industry as a whole is ripe for a paradigm shift. In the end, the companies capable of fully reaping the benefits of new and upcoming paradigms will prevail, like Tesla is now doing in the automotive industry.
Facebook famously coined the phrase “move fast and break things”. After the furious pace of the 2010s, we seem to have developed an angst about breaking with the past. Without a desire for experimentation (and acceptance of the inevitable failures), we cannot claim to be ready to make progress in our favourite art of computing. What would be the IT sector’s equivalent of “to boldly go where no man has gone before”?