AI, Autonomous Vehicles and Crypto Drive Future Compute Requirements

New technologies like artificial intelligence, autonomous vehicles, and cryptocurrency are driving current and future compute needs. A new DCF special report, courtesy of NTT, explores how hybrid architectures help support these workloads.


This week, we conclude our special report series on the hybrid cloud by looking at how new technologies like artificial intelligence, autonomous vehicles, and cryptocurrency are driving current and future compute needs.



Hyperscaling and retail colocation are coming together for reasons beyond steady business growth and the ability to bring additional capacity online in response to workload surges, whether seasonal (as in e-commerce) or periodic (software distribution and updates, or media distribution of books, songs, and videos). New technologies such as AI/ML and resource-intensive applications such as cryptocurrency are fueling expenditures and growth outside the traditional data center and cloud models.

AI/ML

Artificial intelligence and machine learning applications are booming as enterprises apply those tools to complex tasks and massive volumes of data. Amazon’s Alexa digital voice assistant, for example, runs multiple processing stages on each utterance, including speech recognition and natural language processing, with machine learning continually improving Alexa’s recognition rate and conversational skills and enabling it to adapt and learn new concepts.


AI/ML is vital to the growing practice of analyzing still images and video, both in real time for security tasks such as motion detection and facial recognition, and over time to spot changes across catalogs of imagery. The commercial satellite imagery industry photographs the entire Earth’s surface daily, enabling companies such as BlackSky, Capella Space, and Planet Labs to offer “time machine” services that spot changes in areas of interest and track economic indicators. Using this combination of AI and Big Data, insurance companies can quickly assess damage from hurricanes and other natural disasters, finance companies can track oil storage worldwide and follow the flow of goods by monitoring ships and containers, and the oil industry can check pipelines for leaks and theft, to name a few examples.
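
To make the “time machine” idea concrete, here is a minimal sketch of pixel-level change detection between two co-registered satellite scenes. The NumPy approach and the 0.2 threshold are illustrative assumptions for demonstration, not any vendor’s actual pipeline, which would typically layer ML classifiers on top of this kind of differencing.

```python
import numpy as np

def detect_changes(before: np.ndarray, after: np.ndarray,
                   threshold: float = 0.2) -> np.ndarray:
    """Return a boolean mask of pixels that changed between two
    co-registered grayscale scenes normalized to [0, 1]."""
    diff = np.abs(after.astype(np.float64) - before.astype(np.float64))
    return diff > threshold

# Simulate two 512x512 scenes of the same area captured a day apart.
rng = np.random.default_rng(0)
before = rng.random((512, 512))
after = before.copy()
# Simulate new construction appearing in one region of the later scene.
after[100:120, 200:240] = np.clip(after[100:120, 200:240] + 0.5, 0.0, 1.0)

changed = detect_changes(before, after)
print(f"{changed.sum()} of {changed.size} pixels flagged as changed")
```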

The combination of Big Data, with petabytes of imagery collected daily, and AI analysis tools requires massive storage plus GPU- and FPGA-based servers delivering petaflops of compute capacity, easily exceeding the power and compute capabilities of a traditional data center or a cloud service. High-end colocation becomes the logical solution for enterprise AI/ML applications.

Anything-as-a-Service

Software-as-a-Service (SaaS) providers like PayPal, Salesforce, and SAP need hyperscale-capable facilities that can provide 2 to 5 megawatts at a time, with the ability to expand into more space as they grow their customer base. They also need to deploy on a local and regional basis to reduce latency and to distribute loads for redundancy and resilience.

Autonomous vehicles

The quest for a safe self-driving car continues, with companies like Uber and Tesla tapping hyperscale and supercomputer-class computing resources to digest large data sets from thousands of operational vehicles on roads and highways and feed them into machine learning models that train and improve the underlying software.

Tesla’s Dojo supercomputer, built on customized silicon developed in house, is engineered specifically to train the company’s self-driving AI software. A single D1 chip, designed specifically for machine learning, is capable of up to 363 teraflops and includes 4 Tbps of off-chip bandwidth. The full Dojo machine will include 3,000 D1 chips for 1.1 exaflops of AI compute. Dojo will add to Tesla’s existing HPC resources, which include 10,000 GPUs spread across three HPC clusters.
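
Those numbers are internally consistent; as a quick sanity check on the quoted exaflop figure:

$$3{,}000 \text{ chips} \times 363\ \tfrac{\text{TFLOPS}}{\text{chip}} = 1{,}089{,}000 \text{ TFLOPS} \approx 1.1 \text{ EFLOPS}$$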

Uber uses a combination of highly optimized compute, storage, and GPU servers to train autonomous vehicle models, run simulations, and test new software releases. The company has focused on refining its software stack to ensure that its distributed deep learning can run in a high-performance computing environment.

Cryptocurrency

Cryptocurrency miners are at the leading edge of purchasing GPUs and other specialized hardware, and they are now entering a phase where, for cost and environmental reasons, they need to move from today’s energy-intensive methods to more efficient ones that generate more currency per megawatt.

Dedicated cryptocurrency mining operations today are built around compute-dense hardware consuming massive amounts of electricity. Last year, Bitcoin was estimated to have consumed 67 terawatt-hours (TWh) of electricity, and it is on track to consume 91 TWh by the end of 2021, in the same neighborhood of power consumption as the Philippines.
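
A back-of-envelope conversion puts that annual figure in perspective as continuous electrical demand (assuming consumption is spread evenly over the year’s 8,760 hours):

$$\frac{91 \text{ TWh/yr}}{8{,}760 \text{ h/yr}} \approx 10.4 \text{ GW}$$

That is roughly the sustained output of ten gigawatt-scale power plants running around the clock.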


Finance

Consumer banking is under pressure from Apple, Amazon, Facebook, and Google, as well as non-traditional FinTech players such as Intuit, PayPal, and Square. Retail banking has had to embrace e-commerce and online services for sheer survival, building hyperscale facilities to support existing customers and to prevent large-scale migration to Cash App and Venmo over the long term.

Bank of America has spent over $25 billion since 2010 to rearchitect major systems, including building an internal cloud and speeding up its software development. Its decision to keep its infrastructure in-house looks prescient given Capital One’s 2019 data breach on Amazon Web Services, which exposed 106 million customers’ personal information.

Real-time analytics

Real-time analytics provide businesses with rapid insight into process flows. For example, a retailer might analyze point-of-sale data for fraud detection, scanning the massive stream of incoming transactions to spot anomalies in purchasing behavior with enough accuracy that the system neither disrupts normal consumer purchases nor generates extra work for a human call center.

For successful implementation, real-time analytics need low latency in transaction processing, quick analysis to spot anomalous behavior, and enough intelligence to spot patterns and trends within a larger body of information.
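
As an illustrative sketch of the kind of low-latency anomaly check described above, the following flags transactions whose amount deviates sharply from recent history using a rolling z-score; the window size and threshold are assumptions for demonstration, not a production fraud model.

```python
from collections import deque
import statistics

class RollingZScoreDetector:
    """Flag transaction amounts that deviate sharply from recent history."""

    def __init__(self, window: int = 50, threshold: float = 3.0):
        self.history = deque(maxlen=window)  # recent amounts in this stream
        self.threshold = threshold           # z-score above which we flag

    def check(self, amount: float) -> bool:
        """Return True if the amount looks anomalous; always record it."""
        anomalous = False
        if len(self.history) >= 10:  # need enough history for a stable estimate
            mean = statistics.fmean(self.history)
            stdev = statistics.stdev(self.history)
            if stdev > 0 and abs(amount - mean) / stdev > self.threshold:
                anomalous = True
        self.history.append(amount)
        return anomalous

detector = RollingZScoreDetector()
for amount in [42.0, 38.5, 45.2, 40.1, 39.9, 41.7, 43.3, 37.8, 44.0, 40.6, 980.0]:
    if detector.check(amount):
        print(f"Flagged: ${amount:,.2f}")
```

A real deployment would track per-customer baselines and combine many such signals in an ML model, but the shape of the problem is the same: a rolling window, a fast statistical test, and a tunable false-positive threshold.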

NVIDIA has introduced silicon designed specifically for data center computing, built around a standardized multi-core CPU, a high-performance network interface that transfers data at line rate to GPUs and CPUs as needed, and a group of flexible, programmable acceleration engines that improve application performance in a number of fields.

Download the full report, “Hybrid Cloud,” courtesy of NTT, for exclusive tips on how to make colocation work for you.