Avoiding Common Development Pitfalls

Mistakes We Keep Repeating in Web App Development
Web application development has shaped the internet and been part of our lives for over three decades, and its tools and technologies keep evolving drastically: from plain HTML/CSS/JS in the mid-1990s, to Single Page Application frameworks like React and Angular alongside cloud technologies in the mid-2010s, to AI-powered tools in the early 2020s. No one knows what's next!
However, we can evaluate our approach and develop our mindset to adapt to these changes, and decide what the dos and don'ts should be. Some practices remain timeless, while others change.
In this blog, we will focus on the common pitfalls that developers and stakeholders in the web application development lifecycle should avoid, considering the current state of the technologies (as of 2025).
Choosing a quick but short-term option:
Nowadays, with the advent of AI tools, numerous frameworks, and no-code platforms, it's easy for anyone to build something functional very quickly and deploy it to the cloud rapidly.
Rapid development is undoubtedly suitable for cases like:
- POCs
- Quick demos just to show to potential customers.
- “Something to have” to discuss further requirements with stakeholders.
- Cases where what you want to build is very simple and the requirements are 100% settled.
But what if we are building something that doesn't fit the cases above, something mission-critical? It will most probably be complex and demand a strong foundation and proper planning. With technologies or practices suited only for rapid development, and without a proper roadmap, our code and architecture can quickly become messy and hard to maintain. After some point, the productivity (the pace of things like adding new features) gradually decreases, even though it was fast initially.
In other words, the first few weeks of the project are exciting and fast-paced. But later, when we start adding key features, progress slows down and the following weeks become challenging. More time goes into debugging, refactoring, and reshaping than into building new features.
Let's have a walk through of a similar situation:
Day 0:
- We start building a web application for our use case. We are not 100% sure about the exact requirements, or even whether we'll ship this product at all, since that depends on how our potential customers respond.
- We choose Streamlit, a Python-based framework, to quickly build and deploy the web application with little code. Our team is good with Python but not very experienced with standard frontend technologies.
After a few days:
- It's going well!
- We are able to build and show it to potential customers. Many of them liked the product idea and also suggested more features which they wanted.
After a few weeks:
- Now we are working hard on adding new features, but struggling with technical problems even when doing simple things.
- Example: we want to change some styles of our navigation bar. Unfortunately, this is not possible until the Streamlit team adds support for it :( But there are hacks, of course (we are engineers), so we are spending time exploring and implementing them.
- If you want more technical details of this, here are some discussions.
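To make the kind of hack concrete, here is a hedged sketch of the usual workaround: since Streamlit exposes no official API for restyling built-in UI like the sidebar, people inject raw CSS that targets Streamlit's internal DOM selectors. The selector below is an implementation detail of Streamlit and can break with any release, which is exactly the fragility we are describing:

```python
# Hacky workaround (sketch): inject CSS targeting Streamlit's internal
# markup. "stSidebar" is an internal data-testid, not a stable API, so
# this can silently stop working after a Streamlit upgrade.
SIDEBAR_CSS = """
<style>
/* restyle the sidebar container via its internal test id */
section[data-testid="stSidebar"] {
    background-color: #1e293b;
}
</style>
"""

# In a real app you would render it with:
#   import streamlit as st
#   st.markdown(SIDEBAR_CSS, unsafe_allow_html=True)
```

The fact that this relies on `unsafe_allow_html=True` and an undocumented selector is a good signal that we are working against the framework rather than with it.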
After N weeks:
- We’ve finally realised this is going to be a long-term product and development effort. And Streamlit is not the way to go for us!
- We decided to scrap all the code and start over with the Next.js framework, with proper architecture planning.
After N months:
- It took some weeks to learn Next.js and rebuild the web app, but things are going well now.
This experience taught us a valuable lesson for web development: rapid prototyping tools like Streamlit and “low-code” platforms are great for quick demos and early feedback, but they won’t scale for long-term products. Building something meaningful requires proper architecture, planning ahead, and sometimes learning new things.
Over-engineering & developing without feedback:
“We the developers are the worst people to test the functionalities which we built.”
We know the happy path, “what works”. But we are usually not so good at anticipating end users' behaviour and thinking of “what could go wrong”. That’s why it's really important to get real user feedback on our MVP as soon as possible. Otherwise, the web app won’t be prepared for some scenarios, which leads to poor error handling, which in turn can be the root cause of many unexpected issues for users.
Of course, we can write automated tests and even have a QA team test everything thoroughly. That helps for sure. But still, we would argue that without knowing users' behaviour, the QA team can also miss use cases. They verify that everything works as intended for their test cases, but those test cases can simply miss the unexpected ways real users interact with the system.
Getting user feedback is also useful for another reason: defining the product requirements. Sometimes we spend a lot of time developing features users don't actually need or care about. Sometimes we spend time optimizing something we don't need to (here is a great blog related to this written by our colleague). Early feedback helps us focus on building what truly matters, instead of wasting effort on features that look good to our team but add little value in practice.
Again we would like to mention one example from our experience:
- We wrote code to manage chess tournaments, generating the pairings (who plays against whom) in the Swiss system.
- It was developed in a few days; we tested it for the various scenarios we could think of, and it worked well.
- We decided to use it for a real chess tournament with around 70 kids playing. Everything went well for the first 4 rounds.
- Then, suddenly, we hit an issue. One of the parents felt it was unfair: his 10-year-old had won all 4 rounds but was still not in the top 5. This led to heated arguments with the other parents and the organizers.
- The root cause was that two of the kids had arrived a bit late, after the 1st round started, so one of our organizers asked them to play against each other, and this game was never recorded in our software.
- Now, to handle this critical situation, what was needed was to tweak some specific pairings manually. But with our code-level approach this was not possible! We had missed a critical use case that we don't think we could have thought of before using the software in the real world.
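The missing escape hatch above could have been a tiny API change. The sketch below is hypothetical (the function name, signature, and the naive top-down pairing standing in for full Swiss logic are all ours), but it shows the idea: let the organizer fix specific pairings manually, and only auto-pair the remaining players:

```python
def pair_round(standings, manual_overrides=None):
    """Pair one round of a tournament.

    standings: players sorted from highest to lowest score.
    manual_overrides: (player_a, player_b) pairs fixed by the organizer,
    the escape hatch we were missing at the real tournament.

    The auto-pairing here is a naive top-down stand-in for real Swiss
    pairing logic; the point is the override mechanism, not the algorithm.
    """
    manual_overrides = manual_overrides or []
    pairings = list(manual_overrides)

    # Players already placed by the organizer are removed from the pool.
    fixed = {p for pair in manual_overrides for p in pair}
    pool = [p for p in standings if p not in fixed]

    # Pair remaining players top-down; an odd leftover player gets a bye.
    for i in range(0, len(pool) - 1, 2):
        pairings.append((pool[i], pool[i + 1]))
    return pairings
```

With this design, the late-arrival game could have been entered as `manual_overrides=[("kid_a", "kid_b")]` instead of forcing us to work outside the software entirely.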
Neglecting best practices:
Some best practices have remained valuable over the years, even after AI-driven tools came into the picture. Writing tests at every level (unit tests, integration tests, performance tests, etc.) and spending more time on code reviews are among them. There are many more such practices, of course, but let's focus on these two for now.
We would say code reviews and tests are even more important if you are using AI tools. These tools are becoming more and more powerful and useful at suggesting and generating the next lines of our code. Today, we may type just 10% of the code ourselves and let AI generate the rest. (Side note: code auto-completion tools existed even before LLMs. For example, Android Studio's autocomplete was already very good back in 2015.)
“It's good not to WRITE all the code with our fingers, but it's important to READ and TEST all of it.”
But sometimes we tend to think code reviews are a waste of time for our small or medium-scale projects, because “code quality” is something our end users and stakeholders won’t care about. But it's crucial to us. Functional code is only half the work; the other half is readability and related best practices. We often skip that other half, for reasons like deadlines and the urge to make fast progress, but that actually slows us down in the long run (sometimes in just a few hours).
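Reading and testing generated code can be cheap. Here is a small, entirely illustrative example: a helper of the kind autocomplete happily suggests, plus the one-line edge-case test that a quick review should add (the function name and numbers are ours, not from any real codebase):

```python
def percentage_complete(done: int, total: int) -> float:
    """Return completion as a percentage, rounded to one decimal.

    The total == 0 guard is exactly the kind of edge case that
    AI-suggested code routinely omits and a reviewer should catch.
    """
    if total == 0:
        return 0.0
    return round(100 * done / total, 1)


# Minimal tests: the happy path AND the edge case worth reviewing.
assert percentage_complete(3, 4) == 75.0
assert percentage_complete(0, 0) == 0.0
```

A couple of asserts like these take a minute to write, and they are precisely the READ-and-TEST step that generated code needs.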
Reinventing the Wheel:
By this we mean our hesitance or ignorance towards using existing features or solutions, and re-creating things that already exist.
First, let’s explain this in the context of external tools. There are many examples, but let's discuss one. Say we want to implement user authentication in our web application, a very common use case. Services like GCP's Firebase Authentication (and many equivalents, like AWS Cognito) provide the related functionality for FREE up to a large number of users. Still, we have seen dozens of small-scale web applications where people store usernames and passwords in their own databases. There are valid reasons to do this in many cases, of course, but some do it purely out of ignorance. This can waste time and even open security loopholes, since you will need to manage password hashing and storage yourself.
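To illustrate the burden of rolling your own, here is a minimal sketch (using only Python's standard library) of what correct password storage involves before you even get to sessions, resets, rate limiting, or MFA, all of which a managed service like Firebase Authentication handles for you:

```python
import hashlib
import hmac
import secrets


def hash_password(password: str) -> tuple[bytes, bytes]:
    """Hash a password with a fresh random salt using scrypt (stdlib).

    Storing plain or unsalted passwords is the loophole mentioned above;
    this is the minimum you sign up to maintain yourself.
    """
    salt = secrets.token_bytes(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest


def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-hash the candidate and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)
```

And this is only the hashing step; every other piece of the authentication lifecycle becomes your code to test, debug, and patch too.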
Second, the same applies to reusing internal pieces that already exist within our own project boundaries. Many times, instead of reusing or extending what’s already there, we end up rewriting parts of the codebase from scratch. This not only duplicates effort but also makes the code harder to maintain. Writing more code doesn’t mean accomplishing more work: every line of code is a liability. It will need to be maintained, tested, debugged, and understood in the future by someone else (or even by its author). We should think about ways to reduce the code we write.
____
Thank you for reading through! Do you agree with these points? We would love to hear your thoughts and reasons on any of them. These points might also change with time. Are you reading this from the future? (Today is Teachers' Day 2025.) Does this blog still make sense?
References: