As machine learning (ML) software and artificial intelligence (AI) processors become increasingly common in consumer-grade products, customers are bound to ask themselves “How can I benchmark AI on my device?” or “Which ML benchmarking ratings can I trust when shopping for an upgrade or a new device?” Unfortunately, these questions do not yet have a clear answer.

Unlike the gaming industry, which has seemingly dozens of benchmark software options to help both gamers and developers find the best CPUs and GPUs for their particular use case, the ML and AI communities have no such resources available to them. While strong gaming-specific benchmark results may point toward a good hardware choice for machine learning or artificial intelligence, this is not always the case. Because of the complex mathematical operations involved in tasks like deep learning, it is not uncommon for the same program to run at very different speeds on different types of hardware. For this reason, a large-scale, industry-specific ML/AI benchmark is needed to ensure that enthusiasts and developers alike have access to transparent information and competitive hardware choices.

Let’s take the gaming industry as an example of just how beneficial widespread hardware benchmarking can be for consumers. Applications like Geekbench, Basemark, and 3DMark allow gaming enthusiasts to get a strong understanding of their hardware’s capabilities in several key areas of performance, such as graphical frame rates or CPU speed. In turn, this information can assist consumers in making purchasing decisions, as data collected from these tests is often compiled into user-friendly comparison websites, which let users see how different hardware combinations may affect their system’s performance. While an excellent benchmark ecosystem exists to assist PC gamers with their purchasing decisions, there is a distinct lack of such options for ML/AI enthusiasts.

This industry-wide deficiency may not seem like a huge issue at this point in time, but that will likely change in the near future. With user-friendly software packages such as Microsoft’s Windows ML and Apple’s Core ML set to make machine learning more accessible to new enthusiasts and developers alike, it is imperative that those interested in pursuing machine learning or artificial intelligence have access to quality benchmark data to inform their purchasing decisions. If the ML/AI community is to grow as quickly as we hope, those interested in working with this emerging technology must be able to make well-informed choices when investing in expensive hardware.

Despite the faults of the nearly nonexistent ML/AI hardware benchmarking space, it is worth noting that a handful of open-source hardware benchmark options do exist. A small number of enthusiasts and developers have decided not to wait around for benchmarking software to be developed for them. As a result, there are a couple of open-source benchmarking solutions available on GitHub, such as DeepBench, which measures a piece of hardware’s ability to efficiently run deep learning operations.
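To make the idea concrete, here is a minimal sketch (not DeepBench itself, and assuming only Python with NumPy is installed) of the kind of measurement such hardware-focused benchmarks perform: timing a dense matrix multiplication, one of the core operations underlying deep learning, and reporting sustained throughput on the machine running it.

```python
import time
import numpy as np

def benchmark_gemm(n=2048, repeats=10):
    """Time a dense matrix multiplication (GEMM), a core deep learning operation."""
    a = np.random.rand(n, n).astype(np.float32)
    b = np.random.rand(n, n).astype(np.float32)
    np.dot(a, b)  # warm-up run so the timing excludes one-time setup costs

    start = time.perf_counter()
    for _ in range(repeats):
        np.dot(a, b)
    elapsed = time.perf_counter() - start

    # A single n x n GEMM performs roughly 2 * n^3 floating-point operations.
    gflops = (2 * n**3 * repeats) / elapsed / 1e9
    print(f"{n}x{n} GEMM: {gflops:.1f} GFLOP/s on this machine")

if __name__ == "__main__":
    benchmark_gemm()
```

Real benchmark suites extend this idea to convolutions, recurrent layers, and other workloads across CPUs and accelerators, but the principle is the same: measure the hardware, not a particular application.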

While our research indicates that there are multiple instances of benchmarking software that measure a particular ML program’s efficiency, it is important to note that these benchmarks fall into a different category. In those cases, it is the program itself, not the hardware it runs on, that is being tested. As a result, it is hard to say whether such benchmarks supply much information about the effectiveness of different hardware options. Overall, it is abundantly clear that there is currently a lack of user-friendly, accessible hardware benchmark software for consumers.

So how can machine learning and artificial intelligence enthusiasts help solve this problem? For one, enthusiasts and developers can create and contribute to open-source projects that address the industry’s current lack of user-friendly hardware benchmark software. While the greater industry may be lagging on this issue, there is nothing stopping those with the time and passion for the problem from tackling it themselves. Furthermore, any universally accepted hardware benchmark must yield clearly communicated results, as sketched below. Whether it comes from open-source or industry-sponsored software, the availability of such data can only help the ML/AI community grow. Easy access to this information would greatly benefit consumers as they look to buy new PC hardware or upgrade what they already own, and an agreed-upon universal benchmark would also help hardware manufacturers market their products to a growing number of machine learning enthusiasts.
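As a purely illustrative sketch of what “clearly communicated results” could look like (the file layout and field names below are hypothetical, not any existing project’s format), a benchmark could record each score alongside basic system information in a machine-readable file, so results from different devices can be compared side by side:

```python
import json
import platform
from datetime import datetime, timezone

def record_result(test_name, score, unit, path="benchmark_results.json"):
    """Append a benchmark result, tagged with basic system info, to a JSON file."""
    entry = {
        "test": test_name,
        "score": score,
        "unit": unit,
        "machine": platform.machine(),
        "processor": platform.processor(),
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    try:
        with open(path) as f:
            results = json.load(f)
    except FileNotFoundError:
        results = []
    results.append(entry)
    with open(path, "w") as f:
        json.dump(results, f, indent=2)

# Example usage with a placeholder value, e.g. the GEMM throughput measured above.
record_result("gemm_2048", 145.2, "GFLOP/s")
```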

Most importantly, the establishment of an industry-standard benchmark for machine learning and artificial intelligence workloads would help create more competition within this market sector. Such a standard would push hardware developers and manufacturers to be transparent, ensuring that consumers get the most “bang for their buck” when shopping for new hardware. Increased competition would, in turn, encourage hardware developers to innovate, and the ML/AI community would benefit from better hardware and more effective machine learning and artificial intelligence programs deployed on those new devices.

While it is evident that there is currently a glaring lack of options for ML/AI developers and enthusiasts to benchmark their hardware, this does not have to be the case in the future. There are promising signals from the open-source ML/AI community when it comes to this problem. It would certainly be helpful for companies like Intel, AMD, and Nvidia to continue contributing their resources to this cause. However, with the number of knowledgeable and highly skilled machine learning and artificial intelligence enthusiasts increasing every day, it is likely that we will see a better software solution for ML/AI hardware benchmarking developed in the near future.