Low-Code/No-Code: It's Just Software Evolution

From 1950s statisticians to LLM-powered tools, low-code/no-code is part of a long arc of software evolution. History shows every shift faced doubts, yet adoption prevailed. The real risk is being the last to embrace it.


Everyone's asking the wrong questions about low-code/no-code tools. They're treating this like it's some new phenomenon when it's actually the continuation of a multi-decade story.

Let me tell you that story – because understanding where we've been is the only way to make sense of where we're going.

Let's start with a story about Acme Corporation, a fictional company. It was founded in the early 1950s and was in the business of manufacturing, distributing, and selling things to other businesses and individuals.

In their first decade of operations, Acme Corporation was a middling player: not the worst, nor the best. Their biggest bottlenecks were truly understanding customer demand to optimise factory output and managing cash to run their operations. They had large planning departments (like all corporations of that era) staffed with statisticians who would receive written reports from across the country via mail, record them (store data), look up past trends (retrieve data), analyse the new figures against those trends (use data), and then send decisions down to their factories on production runs while requesting or depositing funds with their treasury operations.

At the start of their third decade, they began using computers. While they would still receive written field reports in the mail, the recording of data (store) was now done by computers. That made the retrieval (retrieve) of past trends and the subsequent analysis (use) faster by an order of magnitude. This was the first instance at Acme Corporation of software, or should I say low-code versus manual effort, being used to store, retrieve, and analyse data. And since they were among the first to use computers to speed up and improve their decision-making, they moved into the top quartile of performers.

In the past five years, Acme has moved completely to the cloud and expanded its use of IoT devices. The only dependency is that users still have to know the "structured" language of a given tool, whether that means writing Excel formulas, SQL queries, or, in many cases, Python API calls to run a specific analysis.

Enter the modern era of LLMs and, with it, the proliferation of what are labelled "low-code/no-code" tools. In their present avatar, low-code/no-code tools enable the storing, retrieving, and analysing of data using natural language. Over the arc of history, the only jump has been from structured languages (Excel, Python, C, etc.) to natural language. The urge has been the same for all of us, individuals and corporations alike: how can we store more information, how can we recall more of what we have stored, and, most importantly, how can we use the information available to us at the speed of our thought?
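
To make that jump concrete, here is a minimal sketch in Python. The table, the figures, and the ask_in_natural_language helper are all invented for illustration; the helper merely stands in for whatever LLM-backed tool would translate a plain-English question into a query.

    import sqlite3

    # The "structured language" era: the user must know SQL to analyse
    # stored data. (Table and figures are invented for this sketch.)
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
    conn.executemany(
        "INSERT INTO orders VALUES (?, ?)",
        [("North", 120.0), ("South", 80.0), ("North", 200.0)],
    )
    print(conn.execute(
        "SELECT region, SUM(amount) FROM orders GROUP BY region"
    ).fetchall())

    # The "natural language" era, hypothetically: a low-code/no-code
    # tool would generate and run the same query from a plain-English
    # question. ask_in_natural_language is a made-up placeholder, not
    # a real API.
    def ask_in_natural_language(question):
        # A real tool would have an LLM produce the SQL here.
        return "SELECT region, SUM(amount) FROM orders GROUP BY region"

    sql = ask_in_natural_language("What are total sales by region?")
    print(conn.execute(sql).fetchall())

The store/retrieve/use loop is identical in both halves; only the language the user must speak changes.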

Now, anyone who can imagine and express a thought in natural language can create. I don't know what can be more powerful in democratising creation for human society!

Now let me tell you another story about a completely different world where software has much higher stakes than getting an analysis wrong.

Code is pervasive. Most of us are familiar with code in browsers, spreadsheets, apps, analysis tools, and so on. And then there is embedded code: the code that drives our planes, automobiles, trains, satellite communications, and more. This is the world of Embedded Engineering, another fictional company, one that has been designing mission-critical systems for the communications, aerospace, and automotive industries for the past 40 years.

In the early 1980s, Embedded Engineering's engineers spent months hand-coding assembly language on custom silicon for mission-critical applications: flight control, power modules for industrial equipment, control units for automobiles.

Economics has been the primary driver of change. By the mid-80s, development costs had increased 10x, to $50 million. In response, companies pursued two strategies: outsourced manufacturing and programmable hardware. This led to the development of higher-level design tools and hardware description languages like VHDL and Verilog that let engineers describe behaviour at a higher level of abstraction rather than in raw assembly code - essentially the low-code evolution in the embedded world.

The real transformation has been the proliferation of chips in everything - from wearables to specialised GPUs. All of that has been possible because embedded software can now be developed by anybody. This let Embedded Engineering grow its business manifold, creating and supporting specialised embedded chips for the automotive, communications, aerospace, and industrial sectors. This was the democratisation movement in embedded software.

No-code is a different story, because we are yet to see native support for reliability, safety, and regulatory standards in embedded systems. And given the criticality of these systems, it is likely that a human will remain in the loop. While we can say that low-code in embedded is here and has followed the arc of history, only time will tell whether we get to a no-code state.

Embedded Engineering, like Acme, has collapsed the time it takes to create or update an embedded system by using prebuilt IP blocks and visual tools (the low-code options), while still requiring a domain expert as the orchestrator of those tools. With natural language interfaces, the same domain expert will be more productive in orchestrating the IP blocks and the visual tools while maintaining the safety and reliability of the systems.
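
As a purely illustrative sketch of that pattern, composing vetted building blocks while keeping an expert review step in the loop might look like the following. Every name here (IPBlock, CATALOGUE, validate_design) is invented for the example and does not come from any real vendor toolchain.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class IPBlock:
        name: str
        certified: bool  # e.g. cleared against a safety standard

    # A hypothetical catalogue of prebuilt IP blocks.
    CATALOGUE = {
        "can_controller": IPBlock("can_controller", certified=True),
        "pwm_driver": IPBlock("pwm_driver", certified=True),
        "experimental_dsp": IPBlock("experimental_dsp", certified=False),
    }

    def compose(block_names):
        """Assemble a design from prebuilt blocks (the low-code step)."""
        return [CATALOGUE[name] for name in block_names]

    def validate_design(design):
        """Keep the domain expert in the loop: flag uncertified blocks."""
        return [block.name for block in design if not block.certified]

    design = compose(["can_controller", "pwm_driver", "experimental_dsp"])
    issues = validate_design(design)
    if issues:
        print("Expert review required for:", issues)

The point is not the code itself but the shape: composition of vetted parts plus an explicit human checkpoint, which is how low-code can coexist with safety requirements.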

So when people ask me about the "problems" with today's low-code/no-code tools - the scalability issues, the security concerns, the lack of developer control - I think about those statisticians at Acme in the 1950s and those embedded engineers hand-coding assembly language in 1983. Both worried about losing control, about reliability, about whether new systems could handle their complexity. The difference was stakes: business decisions versus safety-critical systems. Every technological leap brings the same anxieties.

Yes, there are growing pains like scalability and security worries. But we're shifting from creating the same software for everyone to creating a personalised solution for each user. The traditional distinction between low-code and no-code is becoming less relevant anyway: with LLMs and coding assistants, the barrier between writing code and using visual interfaces is dissolving. The security risks are real, but look at the pattern in both stories - every major technology shift faced the same concerns. When computers first arrived at Acme, people worried about data security. When programmable chips emerged in embedded engineering, engineers worried about system reliability and regulatory compliance. The market solved both because adoption required those protections.

Yes, rebuild work is often required - but that is true of any software that proves valuable enough to scale. The difference is transformational, and we are already seeing large companies deploy these tools at scale and in production systems.

History seems to be rhyming again. Just as previous generations were apprehensive about computers, are we being too apprehensive about whether low-code/no-code tools will scale, and whether they will be secure? Users have spoken with their money and their time: low-code/no-code tools will only continue to grow. The real question has always been the same. Stop asking questions about these tools and start asking whether you can afford to be the last one to use them.

Authored by Rajeev Ved, Head of Growth, Sasken Technologies