A Guide to AI Compute: Hardware, Software and India's Chip Ecosystem
India is making strides in artificial intelligence (AI), with chip designer Vervesemi Microelectronics planning to produce advanced chips with embedded machine learning capabilities by late 2026 or early 2027. Other Indian startups, including 3rdiTech, Netrasemi, BigEndian Semiconductors, and Mindgrove Technologies, are contributing to this effort under the government-backed Design Linked Incentive (DLI) Scheme, which promotes indigenous chip design.
Vervesemi's AI chips promise advantages such as self-healing, predictive diagnostics, and efficiency tailored to AI tasks. Such chips can outperform CPUs and general-purpose GPUs in specialized performance and energy efficiency, though the trade-off is flexibility: they lack the versatility of CPUs and the broader programmability of GPUs.
The Compute Stack
The world of AI is built on a three-tiered stack: hardware, software, and infrastructure. The hardware layer, which includes chips such as graphics processing units (GPUs), provides the raw capability that AI models use to process data and generate insights.
GPUs and Beyond
GPUs are specialized chips containing thousands of arithmetic logic units (ALUs) for parallel processing. Originally built for graphics, they have found applications well beyond it, most notably in AI. Other specialized AI chips include application-specific integrated circuits (ASICs), such as Google's TPUs, and field-programmable gate arrays (FPGAs), such as Intel's Agilex line.
ASICs and FPGAs are typically more power-efficient than GPUs for inference. ASICs are custom-designed for a specific task or application, while FPGAs can be reconfigured even after manufacturing.
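The parallelism these chips exploit can be illustrated in ordinary Python. A vectorized operation applies one instruction across many data elements at once, which is the same data-parallel pattern a GPU's thousands of ALUs execute in hardware. A minimal sketch, using NumPy's vectorized arithmetic as a stand-in for hardware parallelism (the function names here are illustrative, not from any library):

```python
import numpy as np

# "Scalar" approach: one multiply-add at a time, like a single ALU.
def scale_and_add_loop(x, a, b):
    out = []
    for v in x:
        out.append(a * v + b)
    return out

# Vectorized approach: one instruction over the whole array,
# the data-parallel pattern GPUs execute across thousands of ALUs.
def scale_and_add_vectorized(x, a, b):
    return a * x + b

x = np.arange(1_000_000, dtype=np.float32)
# Both paths compute the same result; the vectorized one is far faster.
assert np.allclose(scale_and_add_loop(x[:10], 2.0, 1.0),
                   scale_and_add_vectorized(x[:10], 2.0, 1.0))
```

On a CPU, NumPy's vectorized path already runs orders of magnitude faster than the Python loop; a GPU extends the same idea to thousands of hardware lanes.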
India's AI Ecosystem
The Indian government has taken significant steps to build a robust AI ecosystem. In October 2023, a proposal was put forward to set up 25,000 GPUs in the country for AI development and use, and in December 2023 Prime Minister Narendra Modi shared a vision of establishing adequate AI compute power in India. In March 2024, the Indian cabinet approved a Rs. 10,372-crore outlay for the IndiaAI Mission, which aims to establish compute capacity of at least 10,000 GPUs.
The Power of Compute
As a metric, compute is commonly measured in floating-point operations per second (FLOPS), the number of calculations a processor can perform in one second. The choice of chip depends on the specific application it is needed for.
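The FLOPS figure can be estimated empirically. An n x n matrix multiplication performs roughly 2n^3 floating-point operations, so timing one gives the throughput a processor actually achieves. A minimal sketch (the matrix size and the printed unit are illustrative choices):

```python
import time
import numpy as np

n = 512
a = np.random.rand(n, n).astype(np.float32)
b = np.random.rand(n, n).astype(np.float32)

# An n x n matrix multiply performs roughly 2 * n^3 floating-point
# operations (one multiply and one add per inner-product term).
flops = 2 * n ** 3

start = time.perf_counter()
c = a @ b
elapsed = time.perf_counter() - start

achieved = flops / elapsed  # achieved FLOPS on this processor
print(f"~{achieved / 1e9:.1f} GFLOPS")
```

The same measurement on a GPU or TPU would report a far larger number, which is exactly the gap the specialized chips below are built to widen.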
Several studies have found that specialized AI chips run deep learning workloads ten to a thousand times faster and more energy-efficiently than CPUs. Google's Tensor Processing Units (TPUs), for example, are ASICs built to accelerate matrix multiplication, the operation at the heart of neural networks.
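Matrix multiplication dominates deep learning cost because every dense layer is one. A sketch of a single layer's forward pass and its flop count, with illustrative layer sizes (the helper function is hypothetical, not a library API):

```python
import numpy as np

def dense_layer_flops(batch, n_in, n_out):
    # y = x @ W: each output element needs n_in multiplies and
    # n_in - 1 adds, conventionally rounded to 2 * n_in flops.
    return 2 * batch * n_in * n_out

# Forward pass of a small dense layer: the matmul TPUs accelerate.
batch, n_in, n_out = 32, 784, 128
x = np.random.rand(batch, n_in).astype(np.float32)
W = np.random.rand(n_in, n_out).astype(np.float32)
y = x @ W

print(dense_layer_flops(batch, n_in, n_out))  # flops for this one layer
```

Even this toy layer costs over six million floating-point operations per forward pass; a full model multiplies that across dozens of layers and millions of training steps, which is why hardware built around matrix multiplication pays off.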
The Software Layer
The software layer of the compute stack enables users to communicate instructions to the hardware and monitor results. It plays a vital role in ensuring that the hardware is utilized to its full potential.
The world's fastest supercomputer, Frontier, has a peak speed of 1,679.82 petaflops, showcasing the potential of what can be achieved with advanced compute technology. As India continues to invest in its AI ecosystem, we can expect to see significant advancements in the field.
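To put that figure in perspective, a quick unit conversion shows what a peak of 1,679.82 petaflops means in practice. The laptop throughput and workload size below are assumed, illustrative numbers, not measurements:

```python
# Frontier's peak from the text: 1,679.82 petaflops (1 petaflop = 1e15 FLOPS).
frontier_peak = 1_679.82e15

# Illustrative comparison point: a laptop sustaining 100 GFLOPS (assumed).
laptop = 100e9

work = 1e21  # a hypothetical workload of 10^21 floating-point operations

# At peak throughput, Frontier clears it in minutes; the laptop needs years.
print(f"Frontier: {work / frontier_peak:.0f} s")
print(f"Laptop:   {work / laptop / 86400:.0f} days")
```

The gap of roughly six orders of magnitude is why access to large-scale compute, not just clever software, shapes who can train frontier AI models.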