Skyrocket Your Cross Browser Testing with Minimal Effort
Posted On: January 2, 2019
3 Min Read
One thing that is evident with developers is their preference for a particular IDE, operating system, browser, etc. Take web developers: a majority of them have an affinity towards certain browsers, and owing to that preference they tend to cross browser test their source code only on ‘browsers of their choice’. After such testing, the functionalities may work fine on those specific browsers, but the situation in the real world is completely different. The users of your web-app or website might come from different parts of the world and may have different preferences for browsers (or browser versions); some customers may even use completely outdated browsers that hold a minuscule share of the browser market. How can you & your team deal with such a situation? It is not possible to test the functionalities on all ‘existing browsers’ running on various operating systems, and it is not recommended to verify the code on an arbitrary subset of browsers either.
Based on the target market, your development & marketing teams will have a breakup of the browsers being used in that market, along with details about device types & operating systems. Hence, there are two possible approaches: test on every browser available in the market, or test on the browsers that are widely used by the audience in your target market. The first approach, where you verify the functionalities on all available browsers (irrespective of their usage), would require ‘infinite resources’, whereas the second approach is more calculated, since the cross browser compatibility check is performed on the browsers used by the majority of your user-base. Cross browser testing should be an iterative process, i.e. testing on the shortlisted browsers, operating systems, devices, etc. should be performed before the changes are pushed to the production server, and repeated before every new release.
Cross-browser Compatibility Testing – No replacement for manual testing
It is a known fact that before any developer pushes code (either to the development environment or the staging environment, before migrating to the production environment), they perform unit testing on the changes they have made. For unit testing, developers have a variety of unit-testing frameworks to choose from; JUnit and Jasmine are among the most popular. Other types of tests performed at a module/package level are functional tests and visual regression tests. Cucumber is a popular choice for Behavior-Driven Development (BDD) or functional testing, and the visual screenshot comparison tool Wraith is a preferred option for visual regression testing.
Manual cross browser testing cannot cover all the scenarios, and even if it could, you should not deploy human resources on tests that are mostly repetitive in nature and do not require much intelligence. Many organizations following continuous integration and continuous delivery are now implementing automation testing to validate cross browser compatibility scenarios that are mostly linear in approach, with the test results logged in a report. However, many user scenarios are non-linear/unpredictable as far as web-apps or websites are concerned. An automation test might not be able to figure out whether an image is being displayed properly on a ‘particular category of Android device’ or not. Hence, automation testing (or testing via AI bots) would be adopted to a larger extent if it could replicate such user scenarios (or ad-hoc scenarios).
Though there are different types of testing – Accessibility Testing, Usability Testing, Automation Testing, Regression Testing, and so on – none of them can replace the others. The same applies to cross browser testing, which is ideally used for ‘polishing’ the product so that it is usable on different types of browsers, running across operating systems installed on various devices.
Primary Objectives of Cross Browser Testing
When you think about creating a cross browser testing matrix, the main intent of performing different types of tests is to unearth bugs, get them fixed, and make the product better across as many devices, browsers, and browser versions as possible. Though everyone in your team might vouch for this intent, the purpose of cross browser testing differs when looked at through the lens of a developer versus a tester. Developers test their module to unearth and fix bugs, but their verification is limited to that module, whereas testers verify the product from an end-user’s point of view. Testers are normally elated when they come across a bug, since it makes their efforts count, but it is important to ask whether the test was performed keeping the ‘target audience’ in mind or only to increase the ‘overall bug count’. It therefore becomes important to outline the ‘primary objective of cross browser testing’ and to highlight the corner scenarios, as well as the scenarios that can be ignored.
Based on the target audience of your web-app/website, you can devise a cross browser testing strategy that covers the major scenarios. The important question to ask is: ‘should your team work on making the product better by finding bugs on the basis of test cases performed on popular/latest browsers?’ When a bug is found, you have three options: ignore the bug; fix the bug & submit the code changes after re-testing on the most preferred browsers; or fix the bug & submit the code changes after re-testing on all the available browsers (including browsers that are unpopular or rarely used in your target market). There are pros & cons associated with each approach. You cannot simply ignore a bug, but you do need to prioritize fixing it. If the development team fixes bugs & tests on all possible browsers, there might be a delay in shipping the product, and if they fix bugs only on ‘certain browsers’, there is a possibility of shipping a defective product. This is a complex trade-off, and it needs to be settled before cross browser testing is performed.
How do you plan your cross browser testing activity, and how can you expedite it? Below are some pointers that can be used to plan & execute cross-browser testing.
Identification of Browser, OS, and Device combinations
The first & foremost step of development is identifying the requirements of the customer. Along with the requirements, you should also have a deep understanding of the customer and of market segmentation. Market segmentation would include understanding the traits of the consumer, the browsers preferred in that particular market, the category of devices (phones/tablets) being used by consumers, etc.
Once you have identified these requirements, it becomes important to test your web-app/website thoroughly against those browser, device, and operating system configurations. Companies that develop browsers (Chrome, Firefox, Opera, etc.) regularly push fixes to their browsers, hence your development & test teams need to take a pragmatic approach and prioritize the browser versions on which testing should be performed after major fixes are done by the development team. Once the website goes live, you can get more details about user preferences using Google Analytics or other analytics engines integrated into your website. You can also run through a cross browser testing checklist for your website before it goes live.
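To make this shortlisting step concrete, here is a minimal Python sketch that picks the highest-traffic browser/OS/device combinations until a target coverage of your user-base is reached. The traffic shares below are hypothetical placeholders; in practice they would come from your analytics tool.

```python
# Sketch: shortlist browser/OS/device combinations until a coverage
# target is met. Traffic shares are hypothetical, not real data.
from typing import List, Tuple

def shortlist(combos: List[Tuple[str, float]], target: float) -> List[str]:
    """Pick the highest-traffic combos until `target` fraction is covered."""
    covered, chosen = 0.0, []
    for name, share in sorted(combos, key=lambda c: c[1], reverse=True):
        if covered >= target:
            break
        chosen.append(name)
        covered += share
    return chosen

traffic = [
    ("Chrome / Windows / desktop",  0.42),
    ("Safari / iOS / mobile",       0.21),
    ("Samsung Internet / Android",  0.12),
    ("Firefox / Windows / desktop", 0.08),
    ("Edge / Windows / desktop",    0.06),
    ("Opera / Android / mobile",    0.03),
]

print(shortlist(traffic, 0.80))
```

With these sample numbers, four combinations are enough to cover more than 80% of traffic, which is exactly the kind of ‘calculated’ subset discussed above.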
Cross browser testing your product across browser & device variations is akin to planning & executing an attack on a battlefield. Just as on a battlefield, where you have to plan the attack and use your arms/ammunition wisely, it is imperative to keep your priorities in mind with respect to the resources used for cross browser compatible web development while testing the product for cross-browser compatibility. During the initial phase of testing, you should spend a minimal amount of time testing the product. The next level of attack is termed the ‘Raid’, where the attack becomes fiercer; this is also the next level in your testing activity. Testing becomes more intense in this phase, and the test team has to spend much more time unearthing bugs (which are relatively harder to find). After these two levels of testing, the test team makes the final march to defeat the enemy, i.e. the bugs in the product. These bugs are mostly the result of corner cases not being handled by developers, or of tests being performed on browsers used by a much smaller user-base.
This gruelling cross-browser testing process ensures that the product is relatively free of bugs and more robust. Now that you have tested your product across the maximum number of browsers, the next & important step is to ‘narrow the context’ of your tests and verify the functionalities of your product on the browsers preferred by your target audience/market. This is a crucial step in the ‘sanity testing’ of your product, after which you can confidently claim that your product will work perfectly fine for X% of the customer base.
Analytics and Insights about the customer’s preferences
In the section titled ‘Identification of Browser, OS, and Device combinations’, we looked into the importance of understanding customer insights, so that the development & test teams can spend effort on solving issues that are actually faced by your customers. There is no sense in solving a product issue for a browser that is never/rarely used by your target audience.
Google Analytics, Kissmetrics, MixPanel, etc. are some of the popular web analytics tools that can be used to get the nitty-gritty about your customers. Your team can gather valuable insights about the customers, e.g. the browser used to visit your website, the operating system being used, the location of the customer, etc. Though you will get a lot of information from the analytics, you should make use of the data that matters most to your team. Your testing team can also take the support of web developers to help prioritize the information; e.g. in many cases, the operating system on which the browser runs may not matter much, so it is recommended that their knowledge and understanding is utilized to focus on the OS + browser + device combinations that are important for your product.
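A minimal sketch of that prioritization, assuming a few hypothetical analytics rows: the OS dimension is collapsed (per the advice above that it often matters less), leaving browser + device buckets sorted by visit count.

```python
# Sketch: collapse raw analytics rows into browser+device buckets,
# dropping the OS dimension when it does not affect rendering.
# The sample rows are hypothetical; real data would be exported
# from Google Analytics, Kissmetrics, MixPanel, etc.
from collections import Counter

rows = [
    {"browser": "Chrome",  "os": "Windows 10", "device": "desktop"},
    {"browser": "Chrome",  "os": "Windows 11", "device": "desktop"},
    {"browser": "Chrome",  "os": "macOS",      "device": "desktop"},
    {"browser": "Safari",  "os": "iOS 17",     "device": "mobile"},
    {"browser": "Safari",  "os": "iOS 16",     "device": "mobile"},
    {"browser": "Firefox", "os": "Ubuntu",     "device": "desktop"},
]

# Ignore the OS field entirely; count visits per (browser, device) pair.
buckets = Counter((r["browser"], r["device"]) for r in rows)
for (browser, device), visits in buckets.most_common():
    print(f"{browser} on {device}: {visits} visit(s)")
```

The same pattern generalizes: keep whichever dimensions your web developers say actually influence rendering, and collapse the rest.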
Test on Browsers that ‘matter the most’
Though popular browsers (Google Chrome, Mozilla Firefox, Safari, Opera, Yandex, Edge, and IE10+) are available for different operating systems and devices, it is rare to come across ‘browser issues’ that depend heavily on the operating system. Unless the browser code/design goes through a major overhaul, there are only minor differences between adjacent browser versions, e.g. Firefox 45 vs. Firefox 46. While gathering browser statistics to decide on your cross browser testing needs, it therefore makes sense to combine all the desktop versions of a browser, except for Internet Explorer (IE), since old versions of IE lack support for the latest web technologies, e.g. HTML5.
The argument discussed above also holds good for mobiles & tablets. Handheld devices have been on the rise, and mobile is one medium that can no longer be ignored; in fact, many organizations are building web products with a mobile-first agenda. Along with the popular browsers, many devices ship with in-app browsers, e.g. Samsung Internet on Samsung devices, Mi Browser on Mi devices, etc. In order to gain market share, device manufacturers make their in-house ‘native browsers’ the default browser, and many times these in-app browsers become more important from a cross-browser testing perspective than other popular browsers, as you are targeting the customer base of the entire mobile company itself.
Hence, based on the market being targeted and the priority of the medium (mobile, tablet, desktop, etc.), you should have organized and prioritized testing conducted on the browsers that are popular in your target market. StatCounter GlobalStats can give you detailed information about browser market share from a ‘region’ & ‘device’ perspective. For example, below is a snapshot of the mobile browser market share in Asia.
As seen from the figure above, the market share of the Samsung browser is higher than that of other browsers on the Android platform, and hence it becomes critical that the web developers & test team spend the right amount of development & testing effort on the browsers that matter the most (for the product).
Along with the browser usage statistics, you should also have a detailed view of the ‘test priorities of the browsers’. This helps in prioritizing the cross-browser testing & development effort. For example, if only a small percentage of your website visitors use Opera, testing on Opera should be a lower priority than testing on the browsers used by the majority of your web-app/website’s users. However, it is still worth including those low priority browsers in your testing checklist, as they may bring in power leads.
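A rough sketch of such a priority table in Python; the tier thresholds and market shares below are illustrative assumptions, not recommendations:

```python
# Sketch: assign test-priority tiers from browser usage share.
# The thresholds (10% / 2%) and the share figures are hypothetical.
def priority(share: float) -> str:
    if share >= 0.10:
        return "P1 (every cycle)"
    if share >= 0.02:
        return "P2 (major releases)"
    return "P3 (sanity only)"   # still on the checklist, never dropped

shares = {"Chrome": 0.61, "Safari": 0.18, "Edge": 0.05, "Opera": 0.015}
plan = {browser: priority(share) for browser, share in shares.items()}
print(plan)
```

Note that even the lowest tier stays in the plan; the low priority browsers are deprioritized, not discarded, matching the advice above.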
Now that we have touched upon some of the basic elements to make your cross-browser testing efficient, let’s look at the points in more detail.
1. Focus on bugs which are ‘not browser dependent’
Before the source code is pushed to the development/QA/production environment, you should perform ‘regression testing’ on the ‘browser of your choice’, i.e. the browser that you use most frequently. The primary advantage of this approach is that the testing is performed on a recent browser version (since ‘auto-update’ is enabled by default in most browsers) and the focus is on unearthing & fixing bugs that are browser agnostic. There might be cases where your website/web-app does not render correctly on certain browsers/devices, but the issue might not be related to the viewport or screen-size of the device; it could be a browser-agnostic bug, and spending too much time ‘testing the functionality on a certain device/browser category’ could hamper the speed of product development.
Some pointers to figure out browser-agnostic bugs are:
- Use browser developer debug options to view the website/web-app for different viewports.
- Try testing ‘interactive use cases’ on the website/web-app.
- Check whether the web-app/website works fine in scenarios where internet speed is throttled. You could use your router’s admin console (or your ISP’s tools) to limit the internet speed, and there are online tools (like WebPageTest) that can also be used to test this scenario.
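The first pointer above (viewing the site at different viewports) can be turned into a quick automated check. A minimal sketch, assuming hypothetical breakpoints that mirror common CSS media-query widths:

```python
# Sketch: map viewport widths to the layout bucket a responsive site
# should serve. The breakpoint values are hypothetical; use the ones
# defined in your own stylesheets.
BREAKPOINTS = [(0, "mobile"), (768, "tablet"), (1024, "desktop")]

def layout_for(width_px: int) -> str:
    """Return the layout bucket for a given viewport width."""
    bucket = BREAKPOINTS[0][1]
    for min_width, name in BREAKPOINTS:
        if width_px >= min_width:
            bucket = name
    return bucket

# Widths matching a few devtools device presets (hypothetical sample).
for width in (375, 820, 1440):
    print(width, "->", layout_for(width))
```

If a rendering bug reproduces in every bucket, it is likely browser-agnostic rather than a viewport issue, which is exactly the distinction this step is trying to make.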
2. Achieve maximum testing throughput by focusing on the ‘RIGHT Browsers’
The approach mentioned in step (1) is useful for unearthing bugs that are browser agnostic, but that testing is in any case performed on ‘one of the popular browsers’ installed on the developer’s machine. There is a possibility of a regression defect, i.e. a fix that was made for a particular browser might break functionality on some other browser; e.g. fixing an issue that occurred on Chrome might have a side-effect on Internet Explorer or any other browser. Another possibility is that a fix on one browser also fixes the issue on all the other browsers.
When the test team reports a bug, they would also mention the ‘browser on which the issue occurred’, along with the scenario. Based on the bug pattern, the development team can come up with a table that lists the browsers in a ‘Risk vs. Returns’ matrix, where ‘Risk’ indicates the impact that a bug-fix made for one type of browser may have on the other browsers being tested, and ‘Returns’ indicates the payoff of a bug-fix (across different browsers). Browsers can be categorized into levels (risk-1, risk-2, risk-3), where risk severity rises with each level.
You then have the option to test the browsers in ascending order (risk-1 → risk-2 → risk-3 → … → risk-n) or descending order (risk-n → … → risk-3 → risk-2 → risk-1). In the first approach, testing is performed on the popular browsers first, whereas in the second approach, testing is performed on the ‘less popular browsers’ first. Approach 2 has better throughput (both in terms of development & testing), since bugs that are encountered (and fixed) for ‘less popular browsers’ often also fix similar issues on the ‘popular browsers’. CanIUse is a popular website that can help you identify the browsers on which your website/web-app might encounter the maximum number of issues with respect to the features used by your web elements. This finding should be used in conjunction with the data gathered in the section titled ‘Identification of Browser, OS, and Device combinations’.
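The descending (risk-n first) ordering can be sketched as a simple sort; the browsers and risk tiers below are illustrative assumptions, not a recommendation:

```python
# Sketch: order browsers for testing by descending risk tier, so that
# fixes made for fragile / less-capable browsers land first and often
# fix the popular browsers for free. Risk tiers here are illustrative.
browsers = [
    ("Chrome",  1),   # risk-1: modern engine, fewest expected issues
    ("Firefox", 1),
    ("Safari",  2),
    ("IE 11",   3),   # risk-3: weakest support for modern web features
]

# Highest risk tier first; Python's sort is stable, so browsers in the
# same tier keep their original relative order.
descending = [name for name, risk in sorted(browsers, key=lambda b: -b[1])]
print(descending)
```

Swapping the key to `b[1]` (without the negation) would give the ascending, popular-browsers-first order instead.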
As far as cross-browser testing is concerned, verifying the functionalities on different browsers & different screen sizes (and viewports) is considered the ideal option. By doing so, you maximize your testing efforts, since your website/web-app goes through a gruelling phase of cross browser testing (on combinations of different kinds of browsers, devices, and operating systems).
3. ‘Last-mile’ testing
By following the steps mentioned in (1) & (2), the majority of the bugs will have been unearthed & fixed. It is still mandatory that a sanity test is performed on the remaining set of browsers after each development & test cycle. Many mobile devices have native in-app browsers, some browsers follow a ‘freemium business model’ (i.e. some browser features are free, some are premium), whereas some browsers are not updated regularly. Such browsers are ‘low priority browsers’, since only a minuscule percentage of users might be using them; hence you should test on these browsers only when you are done testing on all the other browsers.
Since the gains from testing on these low priority browsers are marginal, sanity testing on them should be done only if your test resources have spare time.
4. Test before you go live
It is always recommended that extensive cross-browser testing is performed before making your website/web-app live in the production environment. With the trend of “Shift-Left” testing, it becomes imperative to test early and test often. You can perform cross browser testing of your locally hosted website or web-pages using local page testing hosted through LambdaTest cloud servers. This is essential for providing a quality user-experience, since the first experience makes a lot of difference.
5. Take care about Accessibility
In order to take care of the ‘Accessibility Factor’, you should answer one important question: ‘Can everyone use your website/web-app?’, i.e. can a person with a hearing impairment, colour blindness, motor impairments, or some other disability use your product? It is indispensable to have the product go through cross browser accessibility testing.
Summary – In order to get the best out of cross-browser testing, it is essential that you:
- Devise a cross browser testing strategy.
- Create a cross browser compatibility matrix.
- Prioritize browsers so that testing is performed on ‘browsers that yield the best results’.
- Keep in mind the accessibility as well as the usability.
- Keep in mind the browser-dependent bugs.
- Perform local testing of your web-pages or website before you push them on the internet.
- Use a cloud-based cross browser testing tool like LambdaTest to get the maximum result with minimum effort. Such tools provide a hassle-free VM setup through a library of legacy as well as the latest browsers running across Android, iOS, macOS, and Windows devices.
Got Questions? Drop them on LambdaTest Community. Visit now