In the past, it was a relatively simple task to gather performance requirements, identify the business processes to be tested, and then plan, build, and execute performance tests that simulated the desired workload. When something went wrong, it was usually clear which resource was the limiting factor. This all worked fine for typical client/server and monolithic applications.

Then “Just-in-Time” compilers came along, and root cause analysis became a bit more complicated. Performance testers could still find the proverbial needle in the haystack with the help of APM tools during performance tests. However, the more distributed architectures became, the more moving parts needed to be monitored and analyzed. In other words, the one haystack became multiple haystacks, and finding the needle became increasingly difficult.

As devices, both desktop and mobile, have become more capable, much of the intelligence and presentation-level processing has moved to the user’s device. Bandwidth in general has also increased dramatically for both businesses and home users, and developers have quickly found ways to make efficient use of the extra resources by adding more features to improve the user experience. The move to the cloud has also helped provide more computing resources, whether in the form of virtual servers or containers that can be deployed at peak times and shut down when not needed.


Today’s distributed computing architecture, of course, also allows API integration with other applications within the organization, as well as with any third-party applications that add value to the overall system.

Finally, Agile development and DevOps have significantly changed the way we launch applications into production, requiring far more frequent, but much smaller, increments of code change to be tested.

Unfortunately, while application architectures have greatly increased in complexity, many application owners oversimplify the performance impact, believing that the resource gains mentioned above will compensate for it. In cloud systems, some even believe that the ability to scale up and out with additional capacity is a sufficient safety net, just in case.

In short, for too many organizations, performance engineering has become a box-ticking exercise: knowing what to do without a clear understanding of why and how.

Performance comes first

An organization that is serious about providing a responsive user experience considers performance as part of the application requirements long before a line of code is written or an architecture decision is made. Such basic performance requirements are not technical in nature and should be part of the business requirements of the system. They should, for example, clearly define who and where the users of the application are, how users connect, how many users will interact with the application during normal and peak times, and the scale of the expected growth of such a user base.

Such business requirements form a clear input to the technical requirements of the system, ensuring that these requirements are reflected in the system specification. Additionally, this approach informs the budget requirements for the project when it comes to performance development personnel, the performance development process to follow, and any tooling requirements.

As an example, in a DevOps-style project, this approach ensures that performance engineering is considered early in the development lifecycle, even within development itself. It drives the performance design process to shift left, determining which code modules, such as REST APIs, can be performance tested long before they are consumed by the UI.
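As a minimal sketch of what such an early, UI-independent test can look like, the Python snippet below drives a hypothetical REST endpoint with concurrent requests and reports latency percentiles. The URL, request count, and concurrency level are illustrative assumptions, not prescriptions.

```python
# Minimal shift-left load test for a single REST endpoint.
# The endpoint URL, request count, and concurrency are illustrative
# assumptions -- adjust them to the API actually under test.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests  # third-party: pip install requests

BASE_URL = "https://test.example.com/api/orders"  # hypothetical endpoint
REQUESTS = 200
CONCURRENCY = 10

def timed_call(_: int) -> float:
    """Issue one GET and return its elapsed wall-clock time in seconds."""
    start = time.perf_counter()
    response = requests.get(BASE_URL, timeout=10)
    response.raise_for_status()
    return time.perf_counter() - start

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        latencies = sorted(pool.map(timed_call, range(REQUESTS)))

    p50 = statistics.median(latencies)
    p95 = latencies[int(0.95 * len(latencies)) - 1]
    print(f"p50={p50:.3f}s  p95={p95:.3f}s  max={latencies[-1]:.3f}s")
```

A test like this can run in a CI pipeline against each API build, failing the build when percentiles drift past agreed thresholds, well before any front end exists.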

In addition, the technical specification will provide clarity on which performance measures are valuable. First, is the user experience, in terms of actual end-user response time from click to rendered result, the key metric, or is server throughput as measured by traditional performance testing tools sufficient? Is it important to know which ISPs users are coming from, what connections they have, and how that affects their user experience? Does a performance engineer need to care about the waterfall charts of the pages displayed to the user and how they differ between browsers and browser versions, or between mobile devices and their underlying operating systems?
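The choice matters because server-side averages can mask a poor user experience. The small sketch below uses deliberately fabricated, clearly illustrative response-time samples to show how a healthy-looking mean can coexist with a tail latency that one in ten users actually feels:

```python
# Throughput and averages can look healthy while the user experience
# degrades: the same (illustrative, fabricated) sample set yields a
# fine mean but a poor 95th percentile.
import statistics

samples = [0.2] * 90 + [4.0] * 10  # 90 fast requests, 10 very slow ones

mean = statistics.mean(samples)
p95 = sorted(samples)[int(0.95 * len(samples)) - 1]
print(f"mean={mean:.2f}s")  # 0.58s -- looks acceptable on a dashboard
print(f"p95={p95:.2f}s")    # 4.00s -- one in ten users waits 4 seconds
```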

The cost of prolonged project delays and associated opportunity costs far outweighs the cost of implementing enterprise-grade performance engineering techniques

Knowing the location and connectivity of users will direct performance engineers to include network emulation in the performance testing exercise when required, and to use service virtualization so that testing can proceed even when some parts of the application are not yet in place or are unavailable during testing.
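To make service virtualization concrete, here is a minimal sketch of a virtualized (mocked) downstream dependency, assuming the real service exposes a simple GET endpoint returning JSON. The port, path, response body, and injected latency are all illustrative assumptions.

```python
# Minimal sketch of a virtualized (stubbed) downstream service.
# Injected latency lets tests emulate a slow network path; the
# values and response shape here are illustrative, not the real API.
import json
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

SIMULATED_DELAY_S = 0.150  # emulate ~150 ms of network/processing latency

class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(SIMULATED_DELAY_S)  # crude latency injection
        body = json.dumps({"status": "ok", "source": "stub"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Tests point at http://localhost:8080 instead of the real,
    # possibly unavailable, downstream system.
    HTTPServer(("localhost", 8080), StubHandler).serve_forever()
```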

Finding bottlenecks earlier, however, does not solve the problem by itself; performance engineering must also have the skills and tools to quickly identify the root cause of bottlenecks, working closely with development and operations teams to resolve issues and retest.

Finally, no amount of performance testing is a 100% guarantee for production, and thus the same metrics considered important in testing should also be available in production, including all user experience metrics as well as full deep-diagnostics capabilities. Production statistics can in turn provide invaluable input into performance test and environment design based on real-world usage. Performance engineering then knows how users navigate the application, how many users use which parts of it and with what frequency by time of day, week, or month, and what the response times are, given user connectivity and production environment capacity.
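As a hedged sketch of how such production statistics can feed back into test design, the snippet below mines access-log lines for transaction mix and baseline response times, assuming a deliberately simplified, hypothetical "timestamp path response_seconds" log format:

```python
# Sketch: mining production access logs to inform the workload model,
# assuming a hypothetical "timestamp path response_seconds" line
# format -- adapt the parser to the real log layout.
from collections import Counter
from statistics import median

LOG_LINES = [
    "2024-01-15T10:00:01 /search 0.31",
    "2024-01-15T10:00:02 /search 0.28",
    "2024-01-15T10:00:05 /checkout 1.90",
]  # in practice, read these from the production log files

hits = Counter()
times: dict[str, list[float]] = {}
for line in LOG_LINES:
    _, path, seconds = line.split()
    hits[path] += 1
    times.setdefault(path, []).append(float(seconds))

# Relative frequency per transaction drives the test's workload mix;
# the median response time gives a realistic baseline per transaction.
total = sum(hits.values())
for path, count in hits.most_common():
    print(f"{path}: {100 * count / total:.0f}% of traffic, "
          f"median {median(times[path]):.2f}s")
```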

Clearly, there is much more to performance engineering than meets the eye. Yes, paying more attention to skill development, process alignment, and tooling doesn’t come cheap, but the cost of prolonged project delays and associated opportunity costs far outweighs the cost of implementing enterprise-level performance engineering practices.

Why, then, do so many organizations still see performance engineering as a box-ticking exercise?

For more information or to inquire about our consulting services, please do not hesitate to contact IT Ecology.

About IT Ecology
Founded in 2004, IT Ecology has made it its mission to provide technical testing and monitoring capabilities to the sub-Saharan African market that are perfectly suited to unique customer requirements. Our low staff turnover ensures that important expertise and intellectual property remain within the organization while fostering a culture of learning for new team members. Coupled with a can-do attitude, our team has delighted our customers time and time again by exceeding expectations. Clients turn to IT Ecology as a solution thinker and advisor.

For more information, visit www.itecology.co.za or the company’s LinkedIn page.


