Category: Artificial Intelligence

Want to Help Save the World? Get involved in the Call For Code Challenge

As the world’s climate continues to undergo gradual changes due to human activity, the number of natural disasters occurring each year continues to grow. Not only are natural disasters becoming more frequent, but they are also becoming increasingly severe, as highlighted by the ongoing recovery of Puerto Rico, which was devastated by a powerful hurricane a little over a year ago.

After recognizing that technology could provide a great deal of help for first responders, a group of professionals, humanitarians, and industry leaders created Call For Code. The mission of Call For Code is to create software tools that could prove transformative for the work undertaken by humanitarian and non-profit organizations alike. Each year, the organization puts out a “call” for software engineers, developers, and IT professionals to lend their skills on a volunteer basis to help tackle complex technical problems that could help change the course of humanity.

What is the focus for the 2018 Call For Code Global Challenge?

As mentioned above, the focus for 2018 is on creating tools that can help to provide assistance both during and in the aftermath of natural disasters. Specifically, this year’s Global Challenge is focused on reducing the disruptive impact of such disasters on human lives, health, and wellbeing. As the first event put on by the Call For Code Global Initiative, this project marks a unique opportunity for artificial intelligence professionals to lend their expertise in a way that can assist the greater good of humanity.

How Can AI be used to Respond to Natural Disasters?

As with many areas of artificial intelligence, the applications of the technology are limited only by the creativity and ingenuity of developers. For natural disasters, there are a number of positive impacts that artificial intelligence-based software can have. Namely, the ability to utilize machine learning when combing through large data sets can help inform first responders on what their priorities should be.

Additionally, artificial intelligence could be utilized in a proactive manner, assisting humanitarian organizations in predicting which areas will be hardest hit by a particular event. This sort of capability would not only help first responders prepare the necessary equipment and provisions for their response, but could also allow government officials to create effective evacuation plans based on predictive analytics.
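As a rough illustration of the predictive idea above (every region name, feature, and weight here is invented for the sketch, not drawn from any Call For Code dataset), a triage model might rank regions by expected impact so responders know where to stage equipment first:

```python
# Toy predictive-analytics sketch: rank regions by expected disaster impact.
# All region names, features, and weights are hypothetical illustrations.

REGIONS = {
    "coastal_a": {"elevation_m": 2, "population_density": 900, "storm_track_dist_km": 10},
    "inland_b":  {"elevation_m": 120, "population_density": 300, "storm_track_dist_km": 80},
    "coastal_c": {"elevation_m": 5, "population_density": 1500, "storm_track_dist_km": 25},
}

def impact_score(features):
    """Linear risk score: low elevation, high density, and storm proximity raise risk."""
    return (
        1.0 / (1.0 + features["elevation_m"])            # flooding exposure
        + features["population_density"] / 1000.0        # people potentially affected
        + 1.0 / (1.0 + features["storm_track_dist_km"])  # proximity to the storm track
    )

def prioritize(regions):
    """Return region names ordered from highest to lowest predicted impact."""
    return sorted(regions, key=lambda name: impact_score(regions[name]), reverse=True)

print(prioritize(REGIONS))  # → ['coastal_c', 'coastal_a', 'inland_b']
```

A real entry would of course replace the hand-picked weights with a model trained on historical disaster data, but the output is the same in spirit: a ranked list that tells responders where to look first.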

Why Should People Join this Effort?

Those with in-depth knowledge of artificial intelligence are in a unique situation. Not only are they highly skilled, but they are often deeply motivated by the transformative power that artificial intelligence will have in the future. However, developing AI software in a corporate setting can have drawbacks, namely that developers may end up working on projects they are not passionate about.

If you’re a believer in the positive impact that artificial intelligence will have on humanity, Call For Code presents a great opportunity to help make that change a reality today. While it may be years until AI technology touches the lives of every consumer, first responders and humanitarian organizations do not have the luxury of being able to wait that long. As a result, choosing to answer the Call For Code means that your time and effort could lead to the sort of transformative change you’d like to see in the world in a matter of a few months, as opposed to a few years.

What Projects Can Volunteers Work On?

Volunteers have the ability to work on any type of project that they want. Thanks to the Call For Code Global Initiative’s team-based, competitive format, you are able to pursue any sort of solution that you deem to be effective and meaningful. This format allows teams the freedom to pursue solutions that they believe to be game-changing, but might have been overlooked in the past.

In addition to the freedom to work on any sort of project that you would like, teams are able to leverage numerous datasets provided to them throughout the development process. Data set catalogues from the United Nations and the Red Cross are provided to teams regardless of their project topic, with the hope that they will utilize this data to train any models that are developed in the process. Overall, Call For Code is allowing participants to pursue any solution that they feel would benefit the work of first responders. This is an exciting proposition, and will likely allow for the creation of innovative and groundbreaking solutions along the way.

Here is a brief overview of some of the key projects that Call For Code is looking to accomplish:

Predicting Wildfire Intensity

– This project lets you tap into the power of Watson Studio to combine machine learning and NASA data sets to help predict the intensity of future wildfires

Identify Cities From Space

– IBM Watson’s Visual Recognition tools could be utilized to spot cities from space at night, which would allow first responders to have an idea of which areas may or may not have electric power available.

Create a Smarter Procurement System with Watson

– Participants have the opportunity to use Watson Knowledge Studio to model the best possible supply chain, a key yet often overlooked aspect of natural disaster response.

Create an App to Perform Intelligent Searches on Data

– Such an application could help humanitarian workers search through large data sets to find the information that is related to the situation they are trying to address.
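To make the last idea concrete (the records and scoring scheme below are invented for the sketch, and a real entry would likely lean on a managed search service such as Watson Discovery rather than a hand-rolled ranker), an intelligent search over humanitarian records might start as simply as matching and ranking by query terms:

```python
# Minimal keyword-relevance search over humanitarian records.
# The records and the scoring scheme are hypothetical illustrations; a
# production app would use a real search service rather than this toy ranker.

RECORDS = [
    "hurricane damage assessment for coastal shelters",
    "water purification supply inventory",
    "evacuation routes for flood-prone districts",
]

def search(query, records):
    """Rank records by how many query terms each one contains; drop non-matches."""
    terms = query.lower().split()
    scored = [(sum(t in r.lower() for t in terms), r) for r in records]
    return [r for score, r in sorted(scored, reverse=True) if score > 0]

print(search("flood evacuation", RECORDS))
# → ['evacuation routes for flood-prone districts']
```

Even this naive version shows the shape of the problem: given a worker's situation described in a few words, surface the handful of records that matter and hide the rest.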

Can Call For Code Help Prevent Loss of Life During Natural Disasters?

Yes, that is the hope! The whole idea behind this year’s Global Challenge is that the creation of software that is custom-built for natural disaster response can help to lessen the devastation felt by those affected by natural disasters. This means that your volunteerism could help to save lives. Not only that, but your work could help to lessen the negative impacts of natural disasters, both in terms of lives lost and damages experienced by communities throughout the world.

Why Does This Initiative Matter?

In short, this initiative matters because it presents developers and artificial intelligence experts a unique opportunity to contribute to a project that might not come across their desk at their day job. Through Call For Code’s competitive, team-based format, anyone can get together with a few of their colleagues and develop a software package that can help change the way that natural disasters are treated by non-profits and humanitarian missions alike.

As you can see, the 2018 competition put on by the Call For Code Global Initiative is set to be an impactful one. I encourage you to participate in this great project, as your work can help to change the scope of how natural disasters are approached throughout the world. If you’d like to find out more about Call For Code, we invite you to check out the organization’s website today. Join us as we answer the Call For Code and help to lessen the pain caused by natural disasters around the globe.

Benchmarking Machine Learning and Artificial Intelligence Processors

As machine learning (ML) software and artificial intelligence (AI) processors become increasingly common in consumer-grade products, customers are bound to ask themselves “How can I benchmark AI on my device?” or “Which ML benchmarking ratings can I trust when shopping for an upgrade or a new device?” Unfortunately, these questions do not yet have a clear answer.

Unlike the gaming industry, which has seemingly dozens of benchmark software options to allow both gamers and developers to find the best CPUs and GPUs for their particular use case, the ML and AI communities have no such resources available to them. While strong gaming-specific benchmarks may be an indicator of a good hardware choice for machine learning or artificial intelligence, this is not always the case. Thanks to the complex mathematical processes utilized in tasks like deep learning, it is not uncommon for the same program to run at different performance rates on different types of hardware. Due to this trend, a large-scale, industry-specific ML/AI benchmark is needed to ensure that enthusiasts and developers alike have access to transparent information and competitive hardware choices.

Let’s take the gaming industry as an example of just how beneficial widespread hardware benchmarking can be for consumers. Applications like Geekbench, Basemark, and 3DMark allow gaming enthusiasts to get a strong understanding of their hardware’s capabilities in several key areas of performance, such as graphical frame rates or CPU speed. In turn, this information can assist consumers when they are making purchasing decisions, as data collected from these tests is often compiled into user-friendly comparison websites, which allow users to see how different hardware combinations may affect their system’s performance. While an excellent benchmark ecosystem exists to assist PC gamers with their purchasing decisions, there is a distinct lack of such options for ML/AI enthusiasts.

This industry-wide deficiency may not seem like a huge issue at this point in time, but this will likely change in the near future. With the widespread release of user-friendly software packages such as Microsoft’s Windows ML and Apple’s Core ML set to increase machine learning’s accessibility for new enthusiasts and developers alike, it is imperative that those interested in pursuing machine learning or artificial intelligence have access to quality benchmark data to inform their purchasing decisions. If the ML/AI community is to grow as quickly as we wish, it is extremely important that those interested in working with this emerging technology are able to make well-informed purchasing decisions when investing in expensive hardware.

Despite the shortcomings of the nearly nonexistent ML/AI hardware benchmarking space, it is important to note that there are currently a couple of open-source hardware benchmark software options available. A select number of enthusiasts and developers have decided that they are not willing to wait around for benchmarking software to be developed. As a result, there are a couple of different open-source benchmarking solutions available on GitHub, such as DeepBench, which measures a piece of hardware’s ability to efficiently run deep learning algorithms.
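As a back-of-the-envelope illustration of what such a hardware benchmark actually measures (this is a simplified sketch in the spirit of DeepBench’s dense matrix-multiply tests, not its actual code), one can time a GEMM operation and report throughput in GFLOP/s:

```python
# Tiny GEMM throughput benchmark, in the spirit of DeepBench (not its real code).
import time
import numpy as np

def gemm_gflops(n=512, repeats=5):
    """Time an n x n float32 matrix multiply; return best-run throughput in GFLOP/s."""
    a = np.random.rand(n, n).astype(np.float32)
    b = np.random.rand(n, n).astype(np.float32)
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        a @ b
        best = min(best, time.perf_counter() - start)
    flops = 2.0 * n ** 3  # one multiply and one add per inner-loop step
    return flops / best / 1e9

print(f"{gemm_gflops():.1f} GFLOP/s")
```

The number this prints varies wildly across CPUs, GPUs, and BLAS libraries for the exact same code, which is precisely why gaming benchmarks are a poor proxy and a dedicated, widely reported ML/AI benchmark would be so useful.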

While our research indicates that there are multiple instances of benchmarking software that measure a particular ML program’s efficiency, it is important to note that these benchmarks fall into a different category: in these instances, it is the program itself, not the hardware running it, that is being tested. Due to this fact, it is hard to say whether such benchmarks really supply much information when it comes to the effectiveness of different hardware options. Overall, it is abundantly clear that there is currently a lack of user-friendly and accessible hardware benchmark software options for consumers.

So how can machine learning and artificial intelligence enthusiasts help solve this problem? For one, enthusiasts and developers can create and contribute to open-source projects that address the industry’s current lack of user-friendly hardware benchmark software. While the greater industry may be lagging on this issue, there is nothing stopping those with the time and passion for the problem from solving it themselves. Furthermore, any universally accepted hardware benchmark must yield well-communicated results. Whether stemming from open-source or industry-sponsored software, the availability of such data can only assist the ML/AI community’s growth. Easy access to such information would greatly benefit consumers as they look to buy new or upgrade their existing PC hardware. Additionally, the development of an agreed-upon universal benchmark would assist hardware manufacturers as they work to market their products to a growing number of machine learning enthusiasts.

Most importantly, the establishment of an industry-standard benchmark for machine learning and artificial intelligence hardware would help create more competition within the market sector. Such a standard would bolster the need for hardware developers and manufacturers to be transparent, ensuring that consumers are able to get the most “bang for their buck” when shopping for new hardware. This transparency would also serve to increase competition within the marketplace, which would encourage hardware developers to be innovative. As a result of this innovation, the ML/AI community could benefit from greatly improved hardware, yielding increased effectiveness of machine learning or artificial intelligence programs as they are deployed on these new devices.

While it is evident that there is currently a glaring lack of options for ML/AI developers and enthusiasts to benchmark their hardware, this does not have to be the case in the future. There are promising signals from the open-source ML/AI community when it comes to this problem. It would certainly be helpful for companies like Intel, AMD, and Nvidia to continue contributing their resources to this cause. However, with the number of knowledgeable and highly skilled machine learning and artificial intelligence enthusiasts increasing every day, it is likely that we will see a better software solution for ML/AI hardware benchmarking developed in the near future.
