In a recent blog post we compared the performance of the Mojo C-120 to the Meraki MR42. In that blog we highlighted results of a test we ran last spring. When we test, we test the best of the competition with the latest software and published best practices at that point in time. When that test was run, the MR42 was the best Meraki had to offer. Once Meraki made the MR53 available, we tested it and here are the results.
We typically run comprehensive benchmark tests when we release a new access point or a significant new feature, to ensure we are competitive. When a competitor comes out with a new access point, we acquire it and incorporate it into our next round of testing.
Why We Do Competitive Performance Testing
We don’t perform customer use case tests solely for marketing purposes. That’s really a bonus. The main reason we do these tests is to determine how well our APs will perform in customers’ networks. Another reason we test is to inform our deployment best practice guides. Of course, there are times when we discover issues with our own APs while doing competitive performance testing. And when we find issues, we fix them. We also test because we know that customers and competitors will evaluate our performance, and we want to know where we stand. Our goal is to ensure that we are at or near the top in performance across all types of customer use cases.
Because we don’t want any surprises, our test philosophy is to always configure our competitors’ products optimally for each test. Our methodology ensures the tests are fair and reproducible within the inherent variability of wireless testing.
How Does the Meraki MR53 Stack Up?
Last July we obtained the Meraki MR53 and put it through one of our most demanding tests: the 50 Client Mixed Application test. This test evaluates whether 50 clients concurrently running voice, video, and data traffic can each achieve a good user experience. The following table shows the data rate per client per direction and the service level required to qualify as a good user experience:
The following graph shows the number of passing clients for the C-120, MR42, and MR53 in each application category. The MR42 results are included because, unexpectedly, the MR42 outperformed the MR53, even though the MR42 is a 3x3 AP and the MR53 is a newer 4x4 AP.
In the best MR53 run, only six data clients were able to get 1 Mbps of throughput. None of the voice streams met a MOS (Mean Opinion Score) of at least 3.8 in both directions, and no video clients came close to passing. The average video delay factor was over 250, more than five times the passing limit of 50, and the media loss rate was in the double digits, far above the passing threshold of 0.004. The overall passing client score for the MR53 was 12%.
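To make the pass/fail criteria concrete, here is a minimal sketch of how a per-client classifier for this test might look. The thresholds (1 Mbps for data, MOS 3.8 for voice, delay factor below 50 and media loss rate below 0.004 for video) come from the results above; the metric names and data structures are illustrative assumptions, not our actual test harness:

```python
def client_passes(app, metrics):
    """Return True if a client meets the 'good user experience' bar.

    Thresholds are those described in this post; field names are
    hypothetical.
    """
    if app == "data":
        # Data clients must sustain at least 1 Mbps of throughput.
        return metrics["throughput_mbps"] >= 1.0
    if app == "voice":
        # Voice needs a MOS of at least 3.8 in BOTH directions.
        return min(metrics["mos_down"], metrics["mos_up"]) >= 3.8
    if app == "video":
        # Video must keep the delay factor under 50 and the
        # media loss rate under 0.004.
        return (metrics["delay_factor"] < 50
                and metrics["media_loss_rate"] < 0.004)
    raise ValueError(f"unknown application category: {app}")


def overall_score(clients):
    """Percentage of clients, across all categories, that pass."""
    passed = sum(client_passes(app, m) for app, m in clients)
    return 100.0 * passed / len(clients)
```

The overall passing client score quoted for each AP is simply the percentage of the 50 clients that clear their category's bar.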
This is a step back from the performance of the MR42. The MR42 also had no passing voice or video clients, but all of its data clients passed, giving it an overall score of 60%. The Mojo C-120 came out on top with a 96% passing client score.