AI-Enhanced Software Design: Tools and Techniques
Date: 08/10/24
By: Krzysztof Lewczuk
The mistake that most buyers of technology services make is thinking that if a small team of expensive techies can write a small amount of code, a larger team of cheaper techies – often in far-flung locations – can write similar code, in greater quantity, for fewer $ per line. That is a fundamentally flawed idea. Here is why.
The often-quoted paradox “If I had more time, I would have written a shorter letter” illustrates the point well.
Complex business and technical challenges NEVER yield to a volume approach. No COO employs 100 “average” tax advisors to minimise the tax liability of a company – they hire the very best individual or small team to achieve this. Similarly with strategy consultants: people hire a small team from McKinsey to crack a business strategy problem, in preference to hiring an army of “bodies” to crunch numbers, because what matters most is the analysis of those numbers and the conclusions that result from it – not the numbers themselves.
Today’s global corporations are very complex beasts when viewed through the lens of the data that runs them and the software that manages that data. Most have evolved over decades of trial and error, ad hoc solutions to immediately pressing problems and, above all, mergers and acquisitions.
These same organisations are experiencing margin pressures, competitive pressures and regulatory pressures that have rarely been seen before in business. To meet these demands and effectively do more with less – and faster than ever – they need to simplify this complexity to a very great degree. Put simply, they need to move from running off multiple versions of the data “truth” – for example, a report for one client that needs to be massaged by humans to be of value to another client – to creating and managing a single view of the truth.
Amazon, Facebook, Google, Uber and every other shining example of corporate effectiveness and efficiency achieve this objective natively, because they build their systems and their businesses around a single view of each user, customer or data subject. This gives them a massive advantage over traditional corporate structures. To address this imbalance, all organisations will have to achieve the same single view of the truth across all significant areas of their business data – be it customer, investor, accounting, production, trading or regulatory reporting.
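To make “a single view of the truth” concrete, here is a minimal sketch in Python. The source systems, field names and merge rule are illustrative assumptions rather than a prescribed design; the point is simply that several partial, conflicting records about one client collapse into a single canonical record that every downstream report reads from.

    # A sketch only: the source systems, field names and merge rule are
    # invented for illustration, not a prescribed design.
    from datetime import date

    # Three systems hold overlapping, slightly inconsistent views of one client.
    source_records = [
        {"system": "crm", "client_id": "C-104", "name": "Acme Ltd",
         "email": "ops@acme.example", "updated": date(2024, 3, 1)},
        {"system": "billing", "client_id": "C-104", "name": "ACME LIMITED",
         "email": None, "updated": date(2024, 6, 12)},
        {"system": "trading", "client_id": "C-104", "name": "Acme Ltd.",
         "email": "desk@acme.example", "updated": date(2024, 5, 20)},
    ]

    def single_view(records):
        """Collapse partial source records into one canonical record.

        Rule assumed here: for each field, the most recently updated
        non-empty value wins. Real master-data rules are usually richer.
        """
        canonical = {}
        for rec in sorted(records, key=lambda r: r["updated"]):  # oldest first
            for field, value in rec.items():
                if field not in ("system", "updated") and value is not None:
                    canonical[field] = value  # newer values overwrite older ones
        return canonical

    print(single_view(source_records))
    # {'client_id': 'C-104', 'name': 'ACME LIMITED', 'email': 'desk@acme.example'}

In practice the merge rules, lineage and governance around such a record are where the real engineering effort lies, but the shape of the goal is exactly this.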
Organisations are obsessed with AI, machine learning and all of the latest buzzwords – without appreciating that the foundation of such activities is typically the data that already resides within the corporation, and that data is often in an awful mess.
However, companies that apply a “mass of low-cost bodies” approach to solving the inevitably complex and “chewy” system development, system integration and data engineering challenges that lie between where they are and where they aim to get to are doomed to fail. They fail because an army of people never arrives at a detailed understanding of a complex problem – only individuals can do that. By failing to understand the true nature of the problem, they usually set off to solve the wrong problem, or solve it in the wrong way, or both.
And here is the crux of the matter: solving any complex problem is like peeling an onion – you cannot peel layer 2, 3 or 4 until you peel layer 1 successfully, because the hard part of every hard problem is the investigation into what the problem is and how it came about, long before any solutions are proffered. The more removed one is from that problem – and big teams are always removed to some degree – the harder that job becomes.
But large organisations naturally assume that their “big” problems need “big” teams to solve them, and so waste many years and millions of $ before finding out that they, and the armies they hire, never understood the problem in the first place.
The simple truth is this: creating simplicity out of complexity is one of the highest-order capabilities that humans possess – and unfortunately very few humans possess it.
Here is a simple “how-to” guide covering the basic steps associated with providing working solutions to complex problems at speed and scale.
The road to success lies in building hand-picked teams of fundamental problem-solvers with the market and technical domain knowledge necessary to really UNDERSTAND the true nature of the business problem, and then express the solution in code. This is achieved by doing, not just by pontificating and writing large specification documents, which can be worse than useless. The writer doesn’t and cannot ever understand the true nature of the problem because they don’t have their head “under the hood” – they are not looking at the engine, so how can they fix it? The only test of understanding is this: “does your solution actually solve that problem – or a part of it at least?”
Furthermore, does solving part X of a complex problem lead to a better understanding of parts Y and Z of that problem? If not, the direction of travel is wrong.
Does the team – all of them – really understand the data they are building systems to handle? Do they understand why the business wants and needs it? If not, how can they hope to support the business with better data and better tooling?
Can they build software which provides a single, accurate and timely view of that data, and which users love to use? Yes, LOVE – not “have” – to use, because we are all subject to the iPad paradigm of wanting to love our interactions with data and tooling.
Is the software easy to maintain – i.e. rarely goes wrong – and to change, without armies of people, excessive downtime, project risk and high cost?
Because if one thing is certain in the world of software and data, it is that “change” is the constant. If the team cannot make changes to code RAPIDLY and deploy them into live REGULARLY – often intra-daily – with full confidence that the system’s operational performance will not be compromised, then the one thing they have been hired to do, which is to manage change, will not be achieved. So a high level of automated test coverage for all code built is an absolute necessity for all projects. Yet we rarely see a large corporation achieving even moderate levels of automated testing, relying instead on manual testing, which once again requires ever more bodies. In fact, the number of manual testers required increases in direct proportion to the number of lines of code written.
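To illustrate the scale involved, an automated check like the hypothetical one below (the function and figures are invented purely for this sketch) runs in milliseconds on every single commit, which is what makes intra-day deployment with confidence possible.

    # A hypothetical example of the kind of check that runs automatically on
    # every commit. The function and the figures are invented for illustration.
    import pytest

    def position_value(quantity, price):
        """Value a position, rejecting nonsense inputs rather than passing them on."""
        if quantity < 0 or price < 0:
            raise ValueError("quantity and price must be non-negative")
        return quantity * price

    def test_position_value_happy_path():
        assert position_value(100, 2.5) == 250.0

    def test_position_value_rejects_negative_price():
        with pytest.raises(ValueError):
            position_value(100, -2.5)

Thousands of such checks can run on every change for the cost of a few minutes of machine time – something no army of manual testers can match.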
The relationship between automated testing resources and the size of the code-base, by contrast, is non-linear – more akin to the relationship between fixed costs and revenues in a rapidly scaling business. In short, if you want to be “agile” as a business that depends on software and data, you have to get automated testing and deployment running fast and smooth. Any other way of doing it is simply part of the “waterfall” paradigm. “Move fast and don’t break things” is the mantra, and creating a single view of the truth, or SVOTT, across the organisation is the goal.
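A rough back-of-the-envelope comparison makes the shape of the two cost curves clear. The figures below are assumptions chosen purely for illustration, not measured data.

    # Illustrative arithmetic only: the ratios are assumptions, not measurements.
    # Manual regression effort grows roughly with the size of the code-base,
    # while a well-built automated suite behaves more like a fixed cost per release.
    for loc in (100_000, 200_000, 400_000):
        manual_testers = loc / 20_000            # assume one tester per ~20k lines under test
        automated_minutes = 15 + loc / 100_000   # assume a suite that stays within minutes
        print(f"{loc:>7} lines of code -> ~{manual_testers:.0f} manual testers "
              f"vs ~{automated_minutes:.0f} minutes of automated checks per release")

Double the code-base and the manual headcount doubles with it, while the automated suite still finishes within minutes.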
Good luck!
If you’d like to speak with our team at Digiterre to discuss your data challenges please get in touch.