Not just techies, the scientific community loves GPUs too

2022-05-21 03:29:54 By : Mr. Youda Electric

Supercomputers built in the 70s and 80s relied on the single-instruction, multiple-data (SIMD) paradigm. In the 90s, cheaper and faster MIMD platforms such as clusters and grids took over, and SIMD gradually fell out of favour. Today, however, it has made a comeback, driven by the computational power and low build cost of SIMD-style architectures that have turned the scientific community's heads.

Modern research has become highly demanding: studies now combine large quantities of data and processing with complex algorithms. This has driven the adoption of GPUs across fields, since they deliver high-performance systems with enormous computational power at a comparatively modest hardware cost.

To understand this better, let us look at why the scientific community has embraced GPUs.

GPU computing was born when the scientific community sought to harness the raw processing power of graphics processing units (GPUs) for intensive computation: a single GPU can rival the combined computational power of hundreds of CPU cores. Due to the GPU's architecture, however, that power can only be fully utilised with specialised, highly parallel algorithms.

Graphics computations are extremely parallel and arithmetic-heavy, which guided the development of early GPUs with little or no cache at all. An approach to high-performance computing (HPC) then grew rapidly around general-purpose many-core devices such as Many Integrated Core (MIC) co-processors and GPUs. GPUs in particular gained the most popularity thanks to their price and highly efficient parallel multi-core design.

This makes it possible to build cheaper, energy-efficient systems that deliver tera-scale performance on common workstations. That tera-scale figure is a theoretical peak, however, obtained by assuming the workload is perfectly distributed across every core. An important practical difference between a local GPU and a shared HPC cluster is that the former delivers results without batch job scheduling and without sharing confidential data with a remote facility.
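The theoretical peak mentioned above is simple arithmetic: cores × clock rate × operations issued per core per cycle. A minimal sketch, using entirely hypothetical hardware figures (real values vary per device):

```python
# Theoretical peak throughput from assumed, illustrative hardware figures.
# Real GPUs rarely sustain this: it presumes every core issues its maximum
# number of floating-point operations on every single cycle.
def theoretical_peak_gflops(cores: int, clock_ghz: float, flops_per_cycle: int) -> float:
    """Peak GFLOP/s assuming perfect distribution of work across all cores."""
    return cores * clock_ghz * flops_per_cycle

# Hypothetical 2048-core GPU at 1.5 GHz, 2 FLOPs/cycle (fused multiply-add)
peak = theoretical_peak_gflops(2048, 1.5, 2)
print(f"{peak / 1000:.3f} TFLOP/s")  # 6.144 TFLOP/s
```

Achieved throughput depends on memory bandwidth and how well the workload maps onto the cores, which is why the peak is "theoretical".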

Systems that leverage GPUs help scientists quickly investigate data and run experiments and simulations. GPU acceleration shortens time-to-result, and faster results can lead to breakthroughs; from a human perspective, it is also less stress-inducing and saves a lot of material and energy.

GPU operations are vectorised: a single operation is applied to several values (historically up to four, matching the four colour components of graphics work) at once. CPUs have long relied on hardware-managed caches, while early GPUs offered only software-managed local memories. As GPUs are increasingly used for general-purpose applications, they are now built with hardware-managed multi-level caches, making them ready for mainstream computing. They also have large register files, which help reduce context-switching latency.
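The vectorised style described above can be sketched on a CPU: instead of one value per operation, each "instruction" applies the same operation to a lane of four values at once. A minimal pure-Python illustration (the lane loop is conceptual; real SIMD hardware executes the whole lane in one instruction):

```python
# SIMD-style sketch: apply the same operation to a lane of values per step,
# mirroring a GPU's historical 4-wide vector operations.
def simd_add(a, b, lane_width=4):
    """Add two equal-length sequences lane-by-lane, lane_width values per step."""
    out = []
    for i in range(0, len(a), lane_width):
        # one conceptual 'instruction': the same op on every value in the lane
        lane = [x + y for x, y in zip(a[i:i + lane_width], b[i:i + lane_width])]
        out.extend(lane)
    return out

print(simd_add([1, 2, 3, 4, 5, 6, 7, 8],
               [10, 10, 10, 10, 20, 20, 20, 20]))
# [11, 12, 13, 14, 25, 26, 27, 28]
```

The speed-up of real vector hardware comes from executing each lane in a single instruction rather than looping over elements, which is exactly the workload shape graphics computations present.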

As a frontrunner in producing large-scale GPUs, NVIDIA introduced its massively parallel architecture, the Compute Unified Device Architecture (CUDA), back in 2006. This led to the evolution of the GPU programming model: CUDA uses the parallel compute engine of an NVIDIA GPU to solve massive computational problems. A CUDA program executes partly on the host (CPU) and partly on the device (GPU).

CUDA exposes three levels of parallelism that must be used effectively to achieve maximum computational capability: grids, blocks, and threads, which together form the CUDA execution hierarchy. A grid consists of thread blocks that execute independently of one another. Each block organises its threads in up to a 3D array and carries a unique block ID (blockIdx); each thread within a block carries a unique thread ID (threadIdx). A kernel function is executed by these threads, and a single block can contain at most 1,024 threads. GPU devices also offer multiple memory spaces: each thread can access its own registers and local memory, with registers providing the highest bandwidth, and each block has its own shared memory of 16 KB or 48 KB.

Some examples of GPUs deployed in various fields are listed below.

Astrophysics: Researchers routinely work with datasets of 100 terabytes or more, and GPU acceleration and AI help them separate signal from noise in these massive datasets. GPUs are also used to run large-scale simulations of the universe and to study phenomena such as neutron star collisions.

Medical and Biology: In the medical field, deep learning tools assist doctors in identifying diseases and abnormalities, for example, spotting glioblastoma tumours in brain scans. Meanwhile, developers can use supercomputing resources to build more complex models and thereby increase the accuracy of cancer diagnosis.

Medication: Discovering critical compounds for drug candidates is computationally demanding, time-consuming, and expensive. With the help of GPU-accelerated systems, however, researchers can quickly simulate protein folding and narrow down the candidates to test.

Environment: In weather modelling and energy research, scientists rely on high-fidelity simulations to inspect complex natural systems. GPU-accelerated simulation is used to predict weather events and earthquakes, inform precision agriculture projects, and scout potential energy sources such as nuclear fusion.


© Analytics India Magazine Pvt Ltd 2022