The powerful new AI hardware of the future


As an observer of artificial intelligence in recent years at DSAITrends, it is fascinating to watch the dichotomy between the sheer volume of research and development in AI and its muted impact on the real world.

Undoubtedly, there have been many jaw-dropping developments, from AI-synthesized faces that are indistinguishable from real ones to AI models that can explain jokes and generate original, realistic artwork from text descriptions.

But that hasn’t translated into commercial benefits for more than a handful of leading companies. For the most part, companies are still wrestling in their boardrooms over whether to implement AI, or struggling to operationalize it.

In the meantime, ethical dilemmas remain unresolved, biases are rampant, and at least one regulator has warned banks against using AI.

A popular business quote, often attributed to futurist Roy Amara, comes to mind: “We tend to overestimate the effect of a technology in the short term and underestimate the effect in the long term.”

So yes, while the immediate gains of AI seem to be lacking, the long-term impact of AI could still exceed our wildest expectations. And powerful new AI hardware may well accelerate AI developments.

More powerful AI hardware

But why the fascination with more powerful hardware? In the groundbreaking “Scaling Laws for Neural Language Models” paper published in 2020, researchers from OpenAI concluded that larger AI models will continue to perform better and be “much more sample-efficient” than previously thought.

While the researchers cautioned that more work is needed to test whether the scaling holds, the current hypothesis is that more powerful AI hardware could train much larger models with capabilities well beyond those of today’s AI models.
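
To make the scaling-law claim concrete, here is a minimal Python sketch of the power-law relationship the paper describes, where test loss falls smoothly as the parameter count grows. The constants below are roughly the fit reported in the paper for non-embedding parameters, but treat them as illustrative rather than authoritative.

```python
# Illustrative sketch of the power-law scaling described in "Scaling Laws for
# Neural Language Models" (Kaplan et al., 2020): test loss falls off roughly as
# L(N) ~ (N_c / N) ** alpha_N as the parameter count N grows. The default
# constants approximate the paper's reported fit; treat them as illustrative.

def scaling_law_loss(n_params: float, n_c: float = 8.8e13, alpha_n: float = 0.076) -> float:
    """Approximate test loss as a power law in the number of model parameters."""
    return (n_c / n_params) ** alpha_n

for n in (1e8, 1e9, 1e10, 1e11):  # 100 million to 100 billion parameters
    print(f"{n:>10.0e} params -> predicted loss ~ {scaling_law_loss(n):.3f}")
```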

GPUs from NVIDIA and AMD, along with specialized AI processors from tech giants such as Google, would lead the charge on the hardware front. For instance:

  • NVIDIA’s upcoming H100 AI processor promises to deliver next-generation performance by packing some 80 billion transistors into a chip that sits at the cutting edge of today’s chip manufacturing equipment.
  • Google’s Tensor Processing Unit (TPU) v4 chips, launched in April, deliver a staggering 275 teraflops of ML-targeted bfloat16 (“brain floating point”) performance, and many of these chips can be packed into a “pod” or rack (see the short sketch after this list for what bfloat16 looks like).
  • Although not as big as NVIDIA, chipmaker AMD has won some notable deals, such as the US Department of Energy’s trio of exascale supercomputers, which incorporate its Instinct MI250X GPUs.
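
For readers curious about the bfloat16 format mentioned above, here is a minimal Python sketch (not TPU code) that mimics bfloat16 by truncating a float32 to its top 16 bits, which keeps the full 8-bit exponent but only 7 mantissa bits.

```python
import struct

# bfloat16 ("brain floating point") keeps float32's 8-bit exponent but only
# 7 mantissa bits, trading precision for range, which suits ML training on
# accelerators such as TPUs. A minimal illustration: truncate a float32 to its
# top 16 bits and observe the rounding error.

def float32_to_bfloat16_bits(x: float) -> int:
    """Return the 16-bit bfloat16 pattern obtained by truncating a float32."""
    bits32 = struct.unpack(">I", struct.pack(">f", x))[0]
    return bits32 >> 16  # keep sign, exponent, and the top 7 mantissa bits

def bfloat16_bits_to_float(bits16: int) -> float:
    """Widen a bfloat16 bit pattern back to float32 for inspection."""
    return struct.unpack(">f", struct.pack(">I", bits16 << 16))[0]

for value in (3.14159, 1e-8, 65504.0):
    approx = bfloat16_bits_to_float(float32_to_bfloat16_bits(value))
    print(f"{value:>12} -> bfloat16 ~ {approx}")
```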

Going off the beaten track

There are also other areas of research that could impact the development of AI. One example is the Loihi 2, an experimental second-generation neuromorphic chip from Intel that was announced last year. A neuromorphic processor mimics the human brain by using programmable components to simulate neurons.

According to its technical brief (PDF), the Loihi 2 has 128 cores and can support over a million digital neurons thanks to its asynchronous design. The human brain has around 90 billion interconnected neurons, so there is still a long way to go.
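
As a rough software analogy for what one of those digital neurons does, and emphatically not Intel’s actual Loihi programming model, here is a toy leaky integrate-and-fire neuron in Python:

```python
# A toy leaky integrate-and-fire (LIF) neuron, the kind of spiking model that
# neuromorphic hardware implements natively. This is a plain-Python analogy
# only; it is not Intel's Loihi programming model or API.

def simulate_lif(inputs, leak=0.9, threshold=1.0):
    """Return the spike train produced by a single LIF neuron."""
    potential, spikes = 0.0, []
    for current in inputs:
        potential = potential * leak + current  # integrate input, leak charge
        if potential >= threshold:              # fire when the threshold is crossed
            spikes.append(1)
            potential = 0.0                     # reset after the spike
        else:
            spikes.append(0)
    return spikes

print(simulate_lif([0.3, 0.4, 0.5, 0.1, 1.0, 0.2]))  # -> [0, 0, 1, 0, 1, 0]
```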

Chips like the Loihi 2 have another advantage, however. As a report on The Register notes, high-end AI systems such as DeepMind’s AlphaGo require thousands of processing units running in parallel, each consuming around 200 watts. That is a lot of power, and we haven’t even factored in auxiliary systems or cooling gear yet.

For its part, neuromorphic hardware promises four to 16 times the energy efficiency of AI models running on conventional hardware.
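
Some back-of-the-envelope arithmetic shows why that matters. Taking the figures above at face value, and assuming a round 1,000 processing units, here is a quick Python sketch of the potential savings:

```python
# Rough, illustrative arithmetic only: the power draw of a large parallel AI
# system versus the same workload on neuromorphic hardware at the claimed
# 4x to 16x efficiency. The 1,000-unit count stands in for the "thousands of
# processing units" mentioned above; it is not a measured figure.

UNITS = 1_000           # assumed round number of processing units
WATTS_PER_UNIT = 200    # ~200 W each, per the report cited above

conventional_kw = UNITS * WATTS_PER_UNIT / 1_000
for efficiency in (4, 16):
    neuromorphic_kw = conventional_kw / efficiency
    print(f"{efficiency:>2}x efficiency: {conventional_kw:.0f} kW -> {neuromorphic_kw:.1f} kW")
```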

Step up to the next level with quantum computing

While the Loihi 2 is built from traditional transistors (2.3 billion of them), another race is underway to create a completely different kind of machine: the quantum computer.

According to a report on AIMultiple, quantum computing could be used to rapidly train machine learning models and to create optimized algorithms. Of course, it should be pointed out that quantum computers are far more complex to build because of the special materials and operating environments required to achieve the necessary quantum states.

Indeed, experts estimate that it could take another two decades to produce a general-purpose quantum computer, even though quantum computers with as many as 127 qubits already exist.

In Southeast Asia, Singapore is stepping up investment in quantum computing with new initiatives to boost talent development and provide access to the technology. This includes a foundry to develop the components and materials needed to build quantum computers.

Whatever the future holds for AI in the decades to come, it won’t be for lack of computational prowess.

Paul Mah is the editor of DSAITrends. A former system administrator, programmer and professor of computer science, he enjoys writing code and prose. You can reach him at [email protected].​

Photo credit: iStockphoto/jiefeng jiang
