NVIDIA’s New AI Algorithms and Supercomputers
On April 5th, NVIDIA announced that it was going “all-in” on artificial intelligence and virtual reality. The company launched a new computing platform for self-driving vehicles, along with virtual reality tools that let users experience more realistic VR scenes and AI hardware that enables medical researchers to identify and treat medical conditions more quickly and accurately.
Supercomputers for AI research
NVIDIA unveiled the DGX-1, a supercomputer aimed at AI research applications. The DGX-1 carries a price tag of $129,000 and packs eight Tesla P100 GPUs, delivering throughput the company says is equivalent to roughly 250 conventional CPU-based servers. These machines will enable researchers to analyze biometric data from past and present patients and apply what they learn to evaluate and treat a wide array of medical conditions, leading to higher patient survival rates and improved quality of life for survivors.
The DGX-1 will first be sent to institutions actively pursuing AI research, such as hospitals and universities. It will allow them to train algorithms on data from large pools of past patients and apply the mathematical patterns found there to new cases, enabling researchers to assess patients more accurately and determine the best course of action for a given condition.
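The idea of applying patterns learned from a pool of past patients to a new case can be sketched with a toy nearest-neighbor classifier. Everything here is illustrative: the feature values, labels, and the choice of algorithm are assumptions for demonstration, not NVIDIA's or any hospital's actual method.

```python
import math

# Hypothetical historical records: (feature vector, outcome).
# Features are illustrative stand-ins (e.g. age, a biomarker level),
# not real clinical data.
past_patients = [
    ((63, 7.2), "high_risk"),
    ((41, 3.1), "low_risk"),
    ((58, 6.8), "high_risk"),
    ((35, 2.4), "low_risk"),
]

def assess(new_patient, records, k=3):
    """Score a new patient by majority vote of the k most similar past cases."""
    nearest = sorted(records, key=lambda r: math.dist(r[0], new_patient))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

print(assess((60, 7.0), past_patients))  # prints "high_risk"
```

Real systems would of course use far richer models trained on the GPU hardware described above, but the principle is the same: patterns extracted from historical data inform the assessment of a new patient.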
Advances in self-driving cars
NVIDIA's computing platform for autonomous vehicles has also benefited from a processor upgrade. The company's Drive PX 2 system uses a combination of sensors and data from past experiences to enable the car to “understand” the environment in which it is operating.
Because the Drive PX 2 requires a significant amount of computing power to operate properly, NVIDIA pairs Tegra processors with two discrete GPUs to support the car's self-driving abilities. Together they can replace hundreds of traditional CPU nodes, enabling the vehicle to operate more efficiently than ever before: the combination delivers up to 24 trillion deep learning operations per second.
As a result of the updated processing capabilities of the Drive PX 2, vehicles can now simultaneously process information from 12 cameras while also utilizing data from lidar, radar, and even ultrasonic sensors. Combining data from all of these input devices allows the vehicle to accurately identify potential obstacles and adjust its path of travel accordingly.
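One way to picture combining camera, lidar, and radar data is a simple fusion step that merges detections which agree on where an object is, and trusts only objects corroborated by more than one sensor type. This is a minimal sketch under assumed data formats and thresholds; it is not NVIDIA's actual pipeline.

```python
# Hypothetical detections, each reported as (sensor_type, bearing_deg, range_m).
detections = [
    ("camera", 10.0, 25.0),
    ("lidar", 11.0, 24.5),
    ("radar", 9.5, 25.5),
    ("camera", 180.0, 40.0),  # seen by only one sensor type
]

def fuse(dets, bearing_tol=3.0, range_tol=2.0, min_sensors=2):
    """Group detections that agree in bearing and range; keep only groups
    corroborated by at least `min_sensors` distinct sensor types."""
    obstacles = []
    used = set()
    for i, (_, b, r) in enumerate(dets):
        if i in used:
            continue
        group = [i]
        for j in range(i + 1, len(dets)):
            _, b2, r2 = dets[j]
            if j not in used and abs(b - b2) <= bearing_tol and abs(r - r2) <= range_tol:
                group.append(j)
        if len({dets[k][0] for k in group}) >= min_sensors:
            used.update(group)
            # Report the averaged position of the corroborating detections.
            obstacles.append((round(sum(dets[k][1] for k in group) / len(group), 1),
                              round(sum(dets[k][2] for k in group) / len(group), 1)))
    return obstacles

print(fuse(detections))  # prints [(10.2, 25.0)]: three sensors agree on one obstacle
```

The lone camera detection at 180 degrees is dropped for lack of corroboration, which illustrates why redundant sensor modalities make obstacle identification more robust.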
Because the self-driving cars now possess “deep learning” abilities, they can address unexpected obstacles in the environment such as debris, other drivers, and so on. The cars can process enough data from the various sensors to maintain 360-degree awareness around the vehicle, meaning they may eventually outperform human operators in safety and overall driving ability.
Whether by improving the ability of medical professionals to diagnose and treat conditions or by increasing the safety with which we travel, NVIDIA's accomplishments are laudable and point toward a better tomorrow for all of us.