The revolutionary capabilities of Artificial Intelligence (AI) are reshaping the landscape of computing as we know it. Thanks to its aptitude for real-time learning, AI presents a world of opportunity for intelligent devices across a range of industries to enhance user experience and redefine the limits of smart technology.
There’s just one problem: harnessing these capabilities demands immense computational resources that traditional chips are ill-equipped to deliver. This has forced a departure from the established trajectory of silicon design, challenging the enduring principles of Moore's Law and Dennard Scaling that have served as cornerstones of computing for decades. Taking their place are novel silicon architectures: tailor-made, exceptionally efficient chip designs engineered to act as AI accelerators that are poised to unlock the overwhelming potential of AI.
The AI revolution and the need for novel silicon architectures
Moore's Law set the stage for a relentless doubling of transistors on integrated circuits every two years, while Dennard Scaling predicted that the power consumed by each transistor would fall as transistors shrank, keeping a chip's overall power draw in check. For decades these predictions held true, fuelling a consistent rise in computing power while simultaneously driving down computing costs.
Yet these formerly unstoppable trends are slowing down as we enter a new era of computing. Why? Because the demands of AI have brought the physical limits of creating ever smaller transistors into sharp focus. Chip density continues to increase, but at a slower pace, and per-transistor power consumption no longer falls in step, resulting in larger, hotter chips that risk overheating when confronted with the computational requirements of AI.
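The scaling story above can be sketched with a few lines of arithmetic. In CMOS, power per transistor is roughly P = C · V² · f; under ideal Dennard Scaling a node shrink reduces capacitance and voltage together, so power density stays flat, but once supply voltage can no longer be lowered (as is the case today), power density climbs with every shrink. The shrink factor below is purely illustrative:

```python
# Power per transistor in CMOS: P = C * V^2 * f.
# A "node shrink" scales linear dimensions by 1/k and packs
# k^2 more transistors into the same area.
k = 1.4  # illustrative shrink factor per generation

def power_density_ratio(c_scale, v_scale, f_scale):
    """Power density (new/old) after a shrink: the change in
    per-transistor power times the k^2 rise in transistor density."""
    per_transistor = c_scale * v_scale ** 2 * f_scale
    return per_transistor * k ** 2

# Ideal Dennard Scaling: C and V both shrink by 1/k, f rises by k.
ideal = power_density_ratio(1 / k, 1 / k, k)       # ~1.0: stays flat

# Today: supply voltage can barely be lowered, so V stays put.
post_dennard = power_density_ratio(1 / k, 1.0, k)  # ~k^2: chips run hotter
```

With voltage stuck, each generation multiplies power density by roughly k², which is exactly the overheating pressure the article describes.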
To overcome these limitations, the future of silicon-based electronics hinges on the adoption of novel silicon architectures that employ layered and parallel processing designs to prioritise both high performance and low power consumption. These customised, silicon-based chips are designed to address the growing need for AI integration, paving the way for an array of pioneering applications that can deliver better digital experiences for industry and consumers alike.
So, what exactly are novel silicon architectures?
Simply put, novel silicon architectures are silicon-based chips designed to deliver the fast computation and low power consumption that AI demands. They can handle the immense computational workloads of AI applications while maintaining remarkable energy efficiency, opening the door for intelligent devices that store data locally and run at impressively high speeds. Rather than relying solely on shrinking transistors, they work within several novel frameworks:
Multi-layered chiplet-based designs
A stacked, 3D design made up of multiple chiplets (tiny integrated circuits, each with a specific function) that, when combined, form a bespoke, high-performance, low-power architecture. These designs can be compared to a high-tech version of building blocks: different chips with the required functions are mixed and matched to create a highly focused package built for a specific task.
Benefits of multi-layered chiplet-based design:
The chiplets can be made on different process nodes (transistor sizes), each optimised for energy efficiency and cost.
The design can achieve high overall transistor density using well-established process technology, without pushing transistor size to its limit, thereby increasing yield and lowering cost.
Can be highly optimised for specific tasks.
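The yield point above can be illustrated with the standard Poisson die-yield model, Y = exp(-A · D0). All numbers below are purely illustrative, and the sketch assumes chiplets can be tested individually before packaging ("known good die"), so only defective chiplets are discarded:

```python
import math

def die_yield(area_cm2, defect_density):
    """Poisson die-yield model: Y = exp(-area * defect_density)."""
    return math.exp(-area_cm2 * defect_density)

D0 = 0.5  # defects per cm^2 (illustrative)

# Option A: one large monolithic 8 cm^2 die.
y_mono = die_yield(8.0, D0)
silicon_per_good_mono = 8.0 / y_mono  # cm^2 of silicon per good chip

# Option B: four 2 cm^2 chiplets, each tested before packaging
# ("known good die"), so a defect wastes one chiplet, not the whole die.
y_chiplet = die_yield(2.0, D0)
silicon_per_good_package = 4 * (2.0 / y_chiplet)

print(f"monolithic: {silicon_per_good_mono:.0f} cm^2 per good chip")
print(f"chiplets:   {silicon_per_good_package:.0f} cm^2 per good package")
```

With these illustrative numbers, the monolithic design burns roughly twenty times more silicon per good part, which is the cost and yield advantage the bullet describes.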
Parallel processing
An architecture in which multiple processing units run in tandem to handle multiple tasks simultaneously. Neuromorphic computing architectures are a good example of parallel processing, mimicking the human brain's efficiency by processing complex data in parallel and at high speed.
Benefits of parallel processing:
Increases computation speed without raising the processor clock speed (it can even allow the clock speed to be lowered), reducing power consumption considerably.
Significantly boosts the capability to handle large amounts of computation data, such as with AI, while maintaining low latency and energy efficiency.
Power consumption is further reduced by making effective use of so-called "dark silicon": parts of the chip that can be put into a "sleep mode" when not needed.
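The core idea, one workload split across processing units that run in tandem, can be sketched in a few lines. The four "units" below are plain Python threads standing in for hardware units, so this only illustrates the structure, not the speedup a real accelerator's dedicated units provide:

```python
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    """Stand-in for one processing unit's work: a sum of squares."""
    return sum(x * x for x in chunk)

data = list(range(1_000))
n_units = 4

# Split the workload into independent slices, one per "processing unit".
chunks = [data[i::n_units] for i in range(n_units)]

# The units run in tandem; each produces a partial result.
with ThreadPoolExecutor(max_workers=n_units) as pool:
    partials = list(pool.map(process_chunk, chunks))

# Combine the partial results; the answer matches the sequential version.
total = sum(partials)
```

Because each slice is independent, the units never wait on one another, which is what lets parallel designs keep latency low without pushing the clock speed up.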
Analog compute-in-memory (CIM)
This approach uses analog circuits to perform computation directly inside the memory array (compute-in-memory), removing the need to shuttle data between processor and memory. The result is extremely low power consumption and low latency, allowing for exceptionally efficient Edge AI performance.
Benefits of analog CIM:
Highly suited to devices running at the edge that require extremely low power consumption.
Not slowed down by the latency of fetching data from an external memory, making it ideal for processing computationally intensive AI workloads such as video analytics applications for object detection, classification, segmentation and depth estimation.
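A digital toy model can show what an analog CIM tile computes: weights are stored as cell conductances in a crossbar, inputs are applied as voltages, and Ohm's and Kirchhoff's laws sum the column currents, so the matrix-vector product happens inside the memory itself. The sketch below is purely illustrative, with analog imprecision modelled as small Gaussian read-out noise:

```python
import random

random.seed(0)

# A hypothetical 3x4 crossbar tile: weights stored as cell conductances.
# Rows are input lines (voltages), columns are output lines (currents).
G = [[random.uniform(0.0, 1.0) for _ in range(4)] for _ in range(3)]
v = [random.uniform(0.0, 1.0) for _ in range(3)]

# Ideal analog multiply-accumulate: each cell contributes I = G * V
# (Ohm's law) and Kirchhoff's current law sums each column, so the
# matrix-vector product needs no data fetched from external memory.
i_ideal = [sum(G[r][c] * v[r] for r in range(3)) for c in range(4)]

# Analog circuits trade precision for power: model read-out error
# as small Gaussian noise on each column current.
i_analog = [i + random.gauss(0.0, 0.01) for i in i_ideal]

error = max(abs(a - b) for a, b in zip(i_analog, i_ideal))
```

The small residual error is the trade-off of analog CIM: a little precision is given up in exchange for the dramatic power and latency savings described above.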
By rejecting a one-size-fits-all model, these designs, whether used independently or in combination, can better cope with the diverse demands of modern computing, adapting to users' needs to deliver outstanding efficiency in both performance and power consumption.
Applications of novel silicon architectures
There are countless potential applications for the novel silicon architectures we’ve described, particularly for devices running low-power Edge AI. One possibility is smart wearables. For example, AI accelerators could provide professional cyclists with real-time insights during races via a lightweight wearable device. By monitoring a combined set of signals, such as current performance alongside past data like sleep patterns and previous training sessions, the device could send live notifications to the cyclist, their trainer, or, in extreme circumstances, the closest hospital, allowing the cyclist to adjust their effort in the moment. These immediate insights, delivered by AI accelerators inside small, light wearable devices, could transform the way we engage with technology in our daily lives. We explore this use case in more depth in our Innovation Briefing, ‘Future of computing: the new commercial horizon.’
Another use case that specifically utilises analog CIM is edge computing within drone technology. A key requirement of drones is to run complex AI algorithms locally to provide immediate and accurate information to control stations. Analog CIM can process these workloads efficiently on-device with extremely low power consumption and low latency, enabling longer drone journeys for tasks like agricultural monitoring, infrastructure inspection (e.g., power lines), and disaster assessment.
These are just two examples of the exciting opportunities novel silicon architectures can offer for AI technology and beyond. From wearable devices to drone and security applications, they can facilitate incredible growth across a huge range of markets. As we look ahead, it’s clear that novel silicon architectures will shape the digital landscape, providing a solution to the limitations of traditional scaling and allowing silicon technology to continue innovating and flourishing in an AI-led future.