Event Sensors Bring Just the Right Data to Device Makers

They’re ultraefficient because they detect only change and motion

10 min read
Two photos of a dancer in motion. The left, conventional, photo has blurry elements while the right, event sensor-enhanced photo, is sharp all around.

Engineers can tune event sensors to sense, and send, less data but only the necessary data. The image on the left was captured by a conventional image sensor. The image on the right was enhanced using event sensor data.

Prophesee

Anatomically, the human eye is like a sophisticated tentacle that reaches out from the brain, with the retina acting as the tentacle’s tip and touching everything the person sees. Evolution worked a wonder with this complex nervous structure.

Now, contrast the eye’s anatomy with the engineering of the most widely used machine-vision systems today: a charge-coupled device (CCD) or a CMOS imaging chip, each of which consists of a grid of pixels. The eye is orders of magnitude more efficient than these flat-chipped computer-vision kits. Here’s why: For any scene it observes, a chip’s pixel grid is updated periodically—and in its entirety—as it receives light from the environment. The eye, though, is much more parsimonious, focusing its attention only on a small part of the visual scene at any one time—namely, the part of the scene that changes, like the fluttering of a leaf or a golf ball splashing into water.

My company, Prophesee, and our competitors call these changes in a scene “events.” And we call the biologically inspired, machine-vision systems built to capture these events neuromorphic event sensors. Compared to CCDs and CMOS imaging chips, event sensors respond faster, offer a higher dynamic range—meaning they can detect detail in both dark and bright parts of a scene at the same time—and capture quick movements without blur, all while producing new data only when and where an event is sensed, which makes the sensors highly energy and data efficient. We and others are using these biologically inspired supersensors to significantly upgrade a wide array of devices and machines, including high-dynamic-range cameras, augmented-reality wearables, drones, and medical robots.

So wherever you look at machines these days, they’re starting to look back—and, thanks to event sensors, they’re looking back more the way we do.

Event-sensing videos may seem unnatural to humans, but they capture just what computers need to know: motion. Prophesee

Event Sensors vs. CMOS Imaging Chips

Digital sensors inspired by the human eye date back decades. The first attempts to make them were in the 1980s at the California Institute of Technology. Pioneering electrical engineers Carver A. Mead, Misha Mahowald, and their colleagues used analog circuitry to mimic the functions of the excitable cells in the human retina, resulting in their “silicon retina.” In the 1990s, Mead cofounded Foveon to develop neurally inspired CMOS image sensors with improved color accuracy, less noise at low light, and sharper images. In 2008, camera maker Sigma purchased Foveon and continues to develop the technology for photography.

A number of research institutions continued to pursue bioinspired imaging technology through the 1990s and 2000s. In 2006, a team at the Institute of Neuroinformatics at the University of Zurich built the first practical temporal-contrast event sensor, which captured changes in light intensity over time. By 2010, researchers at the Seville Institute of Microelectronics had designed sensors that could be tuned to detect changes in either space or time. Then, in 2010, my group at the Austrian Institute of Technology, in Vienna, combined temporal-contrast detection with photocurrent integration at the pixel level, to both detect relative changes in intensity and acquire absolute light levels in each individual pixel. More recently, in 2022, a team at the Institut de la Vision, in Paris, and their spin-off, Pixium Vision, applied neuromorphic sensor technology to a biomedical application—a retinal implant to restore some vision to blind people. (Pixium has since been acquired by Science Corp., the Alameda, Calif.–based maker of brain-computer interfaces.)

Other startups that pioneered event sensors for real-world vision tasks include iniVation in Zurich (which merged with SynSense in China), CelePixel in Singapore (now part of OmniVision), and my company, Prophesee (formerly Chronocam), in Paris.

TABLE 1: Who’s Developing Neuromorphic Event Sensors

Date released | Company | Sensor | Event pixel resolution | Status
2023 | OmniVision | Celex VII | 1,032 x 928 | Prototype
2023 | Prophesee | GenX320 | 320 x 320 | Commercial
2023 | Sony | Gen3 | 1,920 x 1,084 | Prototype
2021 | Prophesee & Sony | IMX636/637/646/647 | 1,280 x 720 | Commercial
2020 | Samsung | Gen4 | 1,280 x 960 | Prototype
2018 | Samsung | Gen3 | 640 x 480 | Commercial

Among the leading CMOS image sensor companies, Samsung was the first to present its own event-sensor designs. Today other major players, such as Sony and OmniVision, are also exploring and implementing event sensors. Among the wide range of applications that companies are targeting are machine vision in cars, drone detection, blood-cell tracking, and robotic systems used in manufacturing.

How an Event Sensor Works

To grasp the power of the event sensor, consider a conventional video camera recording a tennis ball crossing a court at 150 kilometers per hour. Depending on the camera, it will capture 24 to 60 frames per second. That undersamples the fast motion, because the ball travels a long way between frames, and it can also produce motion blur, because the ball keeps moving during each exposure. At the same time, the camera essentially oversamples the static background, such as the net and other parts of the court that don’t move.
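To make the mismatch concrete, here is a back-of-envelope sketch. The frame rates come from the paragraph above; the shutter speed is an assumption added for illustration.

```python
# Rough numbers for a tennis ball crossing the court, to illustrate
# frame-based undersampling. All values are illustrative assumptions.
ball_speed_kmh = 150.0
ball_speed_ms = ball_speed_kmh / 3.6          # ~41.7 m/s

for fps in (24, 60):
    gap_between_frames = 1.0 / fps             # seconds between captures
    displacement = ball_speed_ms * gap_between_frames
    print(f"{fps} fps: ball moves {displacement * 100:.0f} cm between frames")

# Motion blur during a single exposure (assume a 1/500-second shutter).
exposure_s = 1.0 / 500.0
blur = ball_speed_ms * exposure_s
print(f"Blur during a 1/500 s exposure: {blur * 100:.1f} cm")
```

At 24 frames per second the ball travels well over a meter between frames, while the static background is recaptured, unchanged, every time.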

If you then ask a machine-vision system to analyze the dynamics in the scene, it has to rely on this sequence of static images—the video camera’s frames—which contain both too little information about the important things and too much redundant information about things that don’t matter. It’s a fundamentally mismatched approach that’s led the builders of machine-vision systems to invest in complex and power-hungry processing infrastructure to make up for the inadequate data. These machine-vision systems are too costly to use in applications that require real-time understanding of the scene, such as autonomous vehicles, and they use too much energy, bandwidth, and computing resources for applications like battery-powered smart glasses, drones, and robots.

Ideally, an image sensor would use high sampling rates for the parts of the scene that contain fast motion and changes, and slow rates for the slow-changing parts, with the sampling rate going to zero if nothing changes. This is exactly what an event sensor does. Each pixel acts independently and determines the timing of its own sampling by reacting to changes in the amount of incident light. The entire sampling process is no longer governed by a fixed clock with no relation to the scene’s dynamics, as with conventional cameras, but instead adapts to subtle variations in the scene.

An application that’s tracking the red ball, and nothing else in the scene, won’t need to record or transmit all the data in each frame.

Prophesee

Let’s dig deeper into the mechanics. When the light intensity on a given pixel crosses a predefined threshold, the system records the time with microsecond precision. This time stamp and the pixel’s coordinates in the sensor array form a message describing the “event,” which the sensor transmits as a digital data package. Each pixel can do this without the need for an external intervention such as a clock signal and independently of the other pixels. Not only is this architecture vital for accurately capturing quick movements, but it’s also critical for increasing an image’s dynamic range. Since each pixel is independent, the lowest light in a scene and the brightest light in a scene are simultaneously recorded; there’s no issue of over- or underexposed images.
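The per-pixel logic is simple enough to sketch in code. The snippet below is a conceptual model of a single event pixel, not Prophesee’s actual circuit or API; the contrast threshold and intensity values are made up for illustration. Each pixel compares the log of the incident intensity against the level stored at its last event and emits a time-stamped, signed event when the difference exceeds the threshold.

```python
import math
from dataclasses import dataclass

# Conceptual model of one event pixel (an illustration, not a real sensor API).
@dataclass
class Event:
    x: int
    y: int
    t_us: int      # timestamp in microseconds
    polarity: int  # +1 brighter, -1 darker

class EventPixel:
    def __init__(self, x, y, threshold=0.15):
        self.x, self.y = x, y
        self.threshold = threshold   # contrast threshold on log intensity
        self.ref_log = None          # log intensity at the last event

    def update(self, intensity, t_us):
        """Return an Event if the relative change exceeds the threshold, else None."""
        log_i = math.log(max(intensity, 1e-6))
        if self.ref_log is None:
            self.ref_log = log_i
            return None
        delta = log_i - self.ref_log
        if abs(delta) >= self.threshold:
            self.ref_log = log_i     # reset reference; pixel stays silent until the next change
            return Event(self.x, self.y, t_us, +1 if delta > 0 else -1)
        return None

# A static pixel produces no data; a changing one produces a sparse event stream.
pixel = EventPixel(10, 20)
for t, intensity in enumerate([100, 100, 100, 140, 200, 200, 90]):
    ev = pixel.update(intensity, t_us=t * 1000)
    if ev:
        print(ev)
```

Because the comparison is on the logarithm of intensity, the pixel responds to relative changes, which is what gives the sensor its wide dynamic range.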

Each pixel in an event sensor is independent and sends information only if the light hitting it changes more than a preset amount. Prophesee

The output generated by a video camera equipped with an event sensor is not a sequence of images but rather a continuous stream of individual pixel data, generated and transmitted based on changes happening in the scene. Since in many scenes, most pixels do not change very often, event sensors promise to save energy compared to conventional CMOS imaging, especially when you include the energy of data transmission and processing. For many tasks, our sensors consume about a tenth the power of a conventional sensor. Certain tasks, for example eye tracking for smart glasses, require even less energy for sensing and processing. In the case of the tennis ball, where the changes represent a small fraction of the overall field of vision, the data to be transmitted and processed is tiny compared to conventional sensors, and the advantages of an event sensor approach are enormous: perhaps five or even six orders of magnitude.
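One way to see where several orders of magnitude can come from is to compare against a frame camera running fast enough to match the event sensor’s microsecond-scale timing. The numbers below (resolution, event size, activity level) are illustrative assumptions, not measurements.

```python
# Back-of-envelope: data needed to track a fast ball with microsecond timing.
# All parameters are illustrative assumptions.
width, height = 1280, 720
bytes_per_pixel = 1                      # 8-bit monochrome frames

# A frame camera matching the ~microsecond temporal resolution of an event sensor.
equivalent_fps = 1_000_000
frame_bytes_per_s = width * height * bytes_per_pixel * equivalent_fps

# Event stream: assume only the ~900 pixels covered by the ball are active,
# each firing on the order of 1,000 events per second, 8 bytes per event.
active_pixels = 900
events_per_pixel_per_s = 1_000
bytes_per_event = 8
event_bytes_per_s = active_pixels * events_per_pixel_per_s * bytes_per_event

print(f"Frames at {equivalent_fps:,} fps: {frame_bytes_per_s / 1e9:,.0f} GB/s")
print(f"Event stream: {event_bytes_per_s / 1e6:.1f} MB/s")
print(f"Ratio: roughly {frame_bytes_per_s / event_bytes_per_s:,.0f}x less data")
```

The exact ratio depends entirely on how much of the scene is changing, but for a mostly static scene with one fast-moving object, the gap is enormous.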

Event Sensors in Action

To imagine where we will see event sensors in the future, think of any application that requires a fast, energy- and data-efficient camera that can work in both low and high light. For example, they would be ideal for edge devices: Internet-connected gadgets that are often small, have power constraints, are worn close to the body (such as a smart ring), or operate far from high-bandwidth, robust network connections (such as livestock monitors).

Event sensors’ low power requirements and ability to detect subtle movement also make them ideal for human-computer interfaces—for example, in systems for eye and gaze tracking, lipreading, and gesture control in smartwatches, augmented-reality glasses, game controllers, and digital kiosks at fast food restaurants.

For the home, engineers are testing wall-mounted event sensors in health monitors for the elderly, to detect when a person falls. Here, event sensors have another advantage—they don’t need to capture a full image, just the event of the fall. This means the monitor sends only an alert, and the use of a camera doesn’t raise the usual privacy concerns.

Event sensors can also augment traditional digital photography. Such applications are still in the development stage, but researchers have demonstrated that when an event sensor is used alongside a phone’s camera, the extra information about the motion within the scene as well as the high and low lighting from the event sensor can be used to remove blur from the original image, add more crispness, or boost the dynamic range.

Event sensors could be used to remove motion in the other direction, too: Currently, cameras rely on electromechanical stabilization technologies to keep the camera steady. Event-sensor data can be used to algorithmically produce a steady image in real time, even as the camera shakes. And because event sensors record data at microsecond intervals, faster than the fastest CCD or CMOS image sensors, it’s also possible to fill in the gaps between the frames of traditional video capture. This can effectively boost the frame rate from tens of frames per second to tens of thousands, enabling ultraslow-motion video on demand after the recording has finished. Two obvious applications of this technique are helping referees at sporting events resolve questions right after a play, and helping authorities reconstruct the details of traffic collisions.

An event sensor records and sends data only when light changes more than a user-defined threshold. The size of the arrows in the video at right conveys how fast different parts of the dancer and her dress are moving. Prophesee

Meanwhile, a wide range of early-stage inventors are developing applications of event sensors for situational awareness in space, including satellite and space-debris tracking. They’re also investigating the use of event sensors for biological applications, including microfluidics analysis and flow visualization, flow cytometry, and contamination detection for cell therapy.

But right now, industrial applications of event sensors are the most mature. Companies have deployed them in quality control on beverage-carton production lines, in laser welding robots, and in Internet of Things devices. And developers are working on using event sensors to count objects on fast-moving conveyor belts, provide visual-feedback control for industrial robots, and make touchless vibration measurements of equipment for predictive maintenance.

The Data Challenge for Event Sensors

There is still work to be done to improve the capabilities of the technology. One of the biggest challenges is in the kind of data event sensors produce. Machine-vision systems use algorithms designed to interpret static scenes. Event data is temporal in nature, effectively capturing the swings of a robot arm or the spinning of a gear, but those distinct data signatures aren’t easily parsed by current machine-vision systems.

Engineers can calibrate an event sensor to send a signal only when the number of photons changes more than a preset amount. This way, the sensor sends less, but more relevant, data. In this chart, only changes to the intensity [black curve] greater than a certain amount [dotted horizontal lines] set off an event message [blue or red, depending on the direction of the change]. Note that the y-axis is logarithmic and so the detected changes are relative changes. Prophesee

This is where Prophesee comes in. My company offers products and services that help other companies more easily build event-sensor technology into their applications. So we’ve been working on making it easier to incorporate temporal data into existing systems in three ways: by designing a new generation of event sensors with industry-standard interfaces and data protocols; by formatting the data for efficient use by a computer-vision algorithm or a neural network; and by providing always-on low-power mode capabilities. To this end, last year we partnered with chipmaker AMD to enable our Metavision HD event sensor to be used with AMD’s Kria KV260 Vision AI Starter Kit, a collection of hardware and software that lets developers test their event-sensor applications. The Prophesee and AMD development platform manages some of the data challenges so that developers can experiment more freely with this new kind of camera.

One approach that we and others have found promising for managing the data of event sensors is to take a cue from the biologically inspired neural networks used in today’s machine-learning architectures. For instance, spiking neural networks, or SNNs, act more like biological neurons than traditional neural networks do—specifically, SNNs transmit information only when discrete “spikes” of activity are detected, while traditional neural nets process continuous values. SNNs thus offer an event-based computational approach that is well matched to the way that event sensors capture scene dynamics.
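As a rough illustration of the fit, here is a minimal leaky integrate-and-fire neuron, a simplified textbook model rather than any particular SNN chip or framework, driven directly by event timestamps. It does work only when an input spike arrives and stays silent otherwise.

```python
import math

# Minimal leaky integrate-and-fire (LIF) neuron driven by sensor events.
# A textbook-style illustration, not a production SNN implementation.
class LIFNeuron:
    def __init__(self, tau_ms=20.0, threshold=1.0, weight=0.3):
        self.tau_ms = tau_ms          # membrane time constant
        self.threshold = threshold    # firing threshold
        self.weight = weight          # synaptic weight per input event
        self.potential = 0.0
        self.last_t_ms = 0.0

    def receive(self, t_ms):
        """Process one input event at time t_ms; return True if the neuron spikes."""
        # The membrane potential decays between events; no computation happens while idle.
        dt = t_ms - self.last_t_ms
        self.potential *= math.exp(-dt / self.tau_ms)
        self.last_t_ms = t_ms
        self.potential += self.weight
        if self.potential >= self.threshold:
            self.potential = 0.0      # reset after spiking
            return True
        return False

neuron = LIFNeuron()
event_times_ms = [1, 2, 3, 4, 50, 51, 52, 53]   # two bursts of pixel events
spikes = [t for t in event_times_ms if neuron.receive(t)]
print("output spikes at (ms):", spikes)
```

A burst of closely spaced input events pushes the neuron over its threshold; sparse, isolated events decay away without triggering any output, mirroring how an event sensor ignores an unchanging scene.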

Another kind of neural network that’s attracting attention is called a graph neural network, or GNN. These types of neural networks accept graphs as input data, which means they’re useful for any kind of data that’s represented by a mesh of nodes and their connections—for example, social networks, recommendation systems, molecular structures, and the behavior of biological and digital viruses. As it happens, the data that event sensors produce can also be represented by a graph that’s 3D, where there are two dimensions of space and one dimension of time. The GNN can effectively compress the graph from an event sensor by picking out features such as 2D images, distinct types of objects, estimates of the direction and speed of objects, and even bodily gestures. We think GNNs will be especially useful for event-based edge-computing applications with limited power, connectivity, and processing. We’re currently working to put a GNN almost directly into an event sensor and eventually to incorporate both the event sensor and the GNN process into the same millimeter-dimension chip.
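Here is a small sketch of that representation, using plain NumPy and arbitrary neighborhood radii; it illustrates how events become a spatiotemporal graph, not how Prophesee’s GNN is built.

```python
import numpy as np

# Build a spatiotemporal graph from a batch of events.
# Each event is (x, y, t_us, polarity); the radii below are arbitrary illustration values.
events = np.array([
    [10, 20, 1000, +1],
    [11, 20, 1050, +1],
    [12, 21, 1100, +1],
    [200, 150, 1080, -1],   # an unrelated event far away in space
])

spatial_radius = 3          # pixels
temporal_radius = 200       # microseconds

edges = []
for i in range(len(events)):
    for j in range(i + 1, len(events)):
        dx = events[i, 0] - events[j, 0]
        dy = events[i, 1] - events[j, 1]
        dt = abs(events[i, 2] - events[j, 2])
        if dx * dx + dy * dy <= spatial_radius ** 2 and dt <= temporal_radius:
            edges.append((i, j))

print("nodes:", len(events), "edges:", edges)
# A downstream graph neural network would operate on these nodes and edges,
# aggregating neighborhood information to pick out objects, motion, or gestures.
```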

In the future, we expect to see machine-vision systems that follow nature’s successful strategy of capturing the right data at just the right time and processing it in the most efficient way. Ultimately, that approach will allow our machines to see the wider world in a new way, which will benefit both us and them.

{"imageShortcodeIds":[]}
The Conversation (0)
Sort by

From Bottleneck to Breakthrough: AI in Chip Verification

How AI is transforming chip design with smarter verification methods

8 min read
Close-up of a blue circuit board featuring a large, central white microchip.

Using advanced machine learning algorithms, Vision AI analyzes every error to find groups with common failure causes. This means designers can attack the root cause once, fixing problems for hundreds of checks at a time instead of tediously resolving them one by one.

Siemens

This is a sponsored article brought to you by Siemens.

In the world of electronics, integrated circuit (IC) chips are the unseen powerhouse behind progress. Every leap—whether it’s smarter phones, more capable cars, or breakthroughs in healthcare and science—relies on chips that are more complex, faster, and packed with more features than ever before. But creating these chips is not just a question of sheer engineering talent or ambition. The design process itself has reached staggering levels of complexity, and with it, the challenge to keep productivity and quality moving forward.

As we push against the boundaries of physics, chipmakers face more than just technical hurdles. The workforce challenges, tight timelines, and the requirements for building reliable chips are stricter than ever. Enormous effort goes into making sure chip layouts follow detailed constraints—such as maintaining minimum feature sizes for transistors and wires, keeping proper spacing between different layers like metal, polysilicon, and active areas, and ensuring vias overlap correctly to create solid electrical connections. These design rules multiply with every new technology generation. For every innovation, there’s pressure to deliver more with less. So, the question becomes: How do we help designers meet these demands, and how can technology help us handle the complexity without compromising on quality?

Shifting the paradigm: the rise of AI in electronic design automation

A major wave of change is moving through the entire field of electronic design automation (EDA), the specialized area of software and tools that chipmakers use to design, analyze, and verify the complex integrated circuits inside today’s chips. Artificial intelligence is already touching many parts of the chip design flow—helping with placement and routing, predicting yield outcomes, tuning analog circuits, automating simulation, and even guiding early architecture planning. Rather than simply speeding up old steps, AI is opening doors to new ways of thinking and working.

Machine learning models can help predict defect hotspots or prioritize risky areas long before sending a chip to be manufactured.

Instead of brute-force computation or countless lines of custom code, AI uses advanced algorithms to spot patterns, organize massive datasets, and highlight issues that might otherwise take weeks of manual work to uncover. For example, generative AI can help designers ask questions and get answers in natural language, streamlining routine tasks. Machine learning models can help predict defect hotspots or prioritize risky areas long before sending a chip to be manufactured.

This growing partnership between human expertise and machine intelligence is paving the way for what some call a “shift left” or concurrent build revolution—finding and fixing problems much earlier in the design process, before they grow into expensive setbacks. For chipmakers, this means higher quality and faster time to market. For designers, it means a chance to focus on innovation rather than chasing bugs.

Figure 1. Shift-left and concurrent build of IC chips performs multiple tasks simultaneously that used to be done sequentially. Siemens

The physical verification bottleneck: why design rule checking is harder than ever

As chips grow more complex, the part of the design called physical verification becomes a critical bottleneck. Physical verification checks whether a chip layout meets the manufacturer’s strict rules and faithfully matches the original functional schematic. Its main goal is to ensure the design can be reliably manufactured into a working chip, free of physical defects that might cause failures later on.

Design rule checking (DRC) is the backbone of physical verification. DRC software scans every corner of a chip’s layout for violations—features that might cause defects, reduce yield, or simply make the design un-manufacturable. But today’s chips aren’t just bigger; they’re more intricate, woven from many layers of logic, memory, and analog components, sometimes stacked in three dimensions. The rules aren’t simple either. They may depend on the geometry, the context, the manufacturing process and even the interactions between distant layout features.

Priyank Jain leads product management for Calibre Interfaces at Siemens EDA. Siemens

Traditionally, DRC is performed late in the flow, when all components are assembled into the final chip layout. At this stage, it’s common to uncover millions of violations—and fixing these late-stage issues requires extensive effort, leading to costly delays.

To minimize this burden, there’s a growing focus on shifting DRC earlier in the flow—a strategy called “shift-left.” Instead of waiting until the entire design is complete, engineers try to identify and address DRC errors much sooner at block and cell levels. This concurrent design and verification approach allows the bulk of errors to be caught when fixes are faster and less disruptive.

However, running DRC earlier in the flow on a full chip when the blocks are not DRC clean produces results datasets of breathtaking scale—often tens of millions to billions of “errors,” warnings, or flags because the unfinished chip design is “dirty” compared to a chip that’s been through the full design process. Navigating these “dirty” results is a challenge all on its own. Designers must prioritize which issues to tackle, identify patterns that point to systematic problems, and decide what truly matters. In many cases, this work is slow and “manual,” depending on the ability of engineers to sort through data, filter what matters, and share findings across teams.

To cope, design teams have crafted ways to limit the flood of information. They might cap the number of errors per rule, or use informal shortcuts—passing databases or screenshots by email to team members, sharing filters in chat messages, and relying on experts to know where to look. Yet this approach is not sustainable. It risks missing major, chip-wide issues that can cascade through the final product. It slows down response and makes collaboration labor-intensive.

With ongoing workforce challenges and the surging complexity of modern chips, the need for smarter, more automated DRC analysis becomes urgent. So what could a better solution look like—and how can AI help bridge the gap?

The rise of AI-powered DRC analysis

Recent breakthroughs in AI have changed the game for DRC analysis in ways that were unthinkable even a few years ago. Rather than scanning line by line or check by check, AI-powered systems can process billions of errors, cluster them into meaningful groups, and help designers find the root causes much faster. These tools use techniques from computer vision, advanced machine learning, and big data analytics to turn what once seemed like an impossible pile of information into a roadmap for action.

AI’s ability to organize chaotic datasets—finding systematic problems hidden across multiple rules or regions—helps catch risks that basic filtering might miss. By grouping related errors and highlighting hot spots, designers can see the big picture and focus their time where it counts. AI-based clustering algorithms reliably transform weeks of manual investigation into minutes of guided analysis.
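As a generic analogue of this kind of grouping (a density-based clustering sketch built on scikit-learn with synthetic violation coordinates, not Siemens’ proprietary algorithm), violations that pile up in the same layout region can be collapsed into a handful of groups for root-cause analysis.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Toy DRC results: (x_um, y_um) locations of rule violations on a layout.
# Two synthetic hot spots plus scattered noise; real result sets are vastly larger.
rng = np.random.default_rng(0)
hotspot_a = rng.normal(loc=(100, 100), scale=2, size=(500, 2))
hotspot_b = rng.normal(loc=(900, 400), scale=2, size=(300, 2))
noise = rng.uniform(low=0, high=1000, size=(50, 2))
violations = np.vstack([hotspot_a, hotspot_b, noise])

# Group violations that are dense in the same region; eps is an illustrative radius in um.
labels = DBSCAN(eps=10, min_samples=20).fit_predict(violations)

n_groups = len(set(labels)) - (1 if -1 in labels else 0)
print(f"{len(violations)} violations collapsed into {n_groups} groups "
      f"({int(np.sum(labels == -1))} left as isolated noise)")
```

A designer then investigates a few groups instead of hundreds of individual flags; production tools add the rule identity, layer, and other context to the grouping, not just location.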

AI-powered systems can process billions of errors, cluster them into meaningful groups, and help designers find the root causes much faster.

Another benefit: collaboration. By treating results as shared, living datasets—rather than static tables—modern tools let teams assign owners, annotate findings and pass exact analysis views between block and partition engineers, even across organizational boundaries. Dynamic bookmarks and shared UI states cut down on confusion and rework. Instead of “back and forth,” teams move forward together.

Many of these innovations tease at what’s possible when AI is built into the heart of the verification flow. Not only do they help designers analyze the results; they help everyone reason about the data, summarize findings and make better design decisions all the way to tape out.

A real-world breakthrough in DRC analysis and collaboration: Siemens’ Calibre Vision AI

One of the most striking examples of AI-powered DRC analysis comes from Siemens, whose Calibre Vision AI platform is setting new standards for how full-chip verification happens. Building on years of experience in physical verification, Siemens realized that breaking bottlenecks required not only smarter algorithms but rethinking how teams work together and how data moves across the flow.

Vision AI is designed for speed and scalability. It uses a compact error database and a multi-threaded engine to load millions—or even billions—of errors in minutes, visualizing them so engineers see clusters and hot spots across the entire die. Instead of a wall of error codes or isolated rule violations, the tool presents a heat map of the layout, highlighting areas with the highest concentration of issues. By enabling or disabling layers (layout, markers, heat map) and adjusting layer opacity, users get a clear, customizable view of what’s happening—and where to look next.
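A heat-map view of this kind amounts to binning error coordinates over the die. The sketch below is a minimal NumPy illustration with made-up die dimensions and synthetic violation locations, not the Calibre Vision AI rendering.

```python
import numpy as np

# Bin violation coordinates into a coarse grid over an assumed 10 mm x 10 mm die.
die_w_um, die_h_um, bins = 10_000, 10_000, 50
rng = np.random.default_rng(1)
scattered = rng.uniform(0, 10_000, size=(100_000, 2))            # background violations
clustered = rng.normal(loc=(2_500, 7_000), scale=50, size=(5_000, 2))  # one synthetic hot spot
xy = np.vstack([scattered, clustered])

heat, _, _ = np.histogram2d(xy[:, 0], xy[:, 1],
                            bins=bins, range=[[0, die_w_um], [0, die_h_um]])

# The densest cells are where a designer would zoom in first.
hottest = np.unravel_index(np.argmax(heat), heat.shape)
print(f"hottest cell {hottest} contains {int(heat[hottest])} violations")
```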

Using advanced machine learning algorithms, Vision AI analyzes every error to find groups with common failure causes.

But the real magic is in AI-guided clustering. Using advanced machine learning algorithms, Vision AI analyzes every error to find groups with common failure causes. This means designers can attack the root cause once, fixing problems for hundreds of checks at a time instead of tediously resolving them one by one. In cases where legacy tools would force teams to slog through, for example, 3,400 checks with 600 million errors, Vision AI’s clustering can reduce that effort to investigating just 381 groups—turning mountains into molehills and speeding debug time by at least 2x.

Figure 2. The Calibre Vision AI software automates and simplifies the chip-level DRC verification process. Siemens

Vision AI is also highly collaborative. Dynamic bookmarks capture the exact state of analysis, from layer filters to zoomed layout areas, along with annotations and owner assignments. Sharing a bookmark sends a living analysis—not just a static snapshot—to coworkers, so everyone is working from the same view. Teams can export results databases, distribute actionable groups to block owners, and seamlessly import findings into other Siemens EDA tools for further debug.

Empowering every designer: reducing the expertise gap

A frequent pain point in chip verification is the need for deep expertise—knowing which errors matter, which patterns mean trouble, and how to interpret complex results. Calibre Vision AI helps level the playing field. Its AI-based algorithms consistently create the same clusters and debug paths that senior experts would identify, but do so in minutes. New users can quickly find systematic issues and perform like seasoned engineers, helping chip companies address workforce shortages and staff turnover.

Beyond clusters and bookmarks, Vision AI lets designers build custom signals by leveraging their own data. The platform secures customer models and data for exclusive use, making sure sensitive information stays within the company. And by integrating with Siemens’ EDA AI ecosystem, Calibre Vision AI supports generative AI chatbots and reasoning assistants. Designers can ask direct questions—about syntax, about a signal, about the flow—and get prompt, accurate answers, streamlining training and adoption.

Real results: speeding analysis and sharing insight

Customer feedback from leading IC companies shows the real-world value of AI for full-chip DRC analysis and debug. One company reported that Vision AI reduced their debug effort by at least half—a gain that makes the difference between tapeout and delay. Another noted the platform’s signals algorithm automatically creates the same check groups that experienced users would manually identify, saving not just time but energy.

Quantitative gains are dramatic. For example, Calibre Vision AI can load and visualize error files significantly faster than traditional debug flows. Figure 3 shows the difference for four test cases; in one, a results file that took 350 minutes to load with the traditional flow took Calibre Vision AI only 31 minutes. In another test case (not shown), it took just five minutes to analyze and cluster 3.2 billion errors from more than 380 rule checks into 17 meaningful groups. Instead of getting lost in gigabytes of error data, designers now spend time solving real problems.

Figure 3. Charting the results load time between the traditional DRC debug flow and the Calibre Vision AI flow. Siemens

Looking ahead: the future of AI in chip design

Today’s chips demand more than incremental improvements in EDA software. As the need for speed, quality and collaboration continues to grow, the story of physical verification will be shaped by smarter, more adaptive technologies. With AI-powered DRC analysis, we see a clear path: a faster and more productive way to find systematic issues, intelligent debug, stronger collaboration and the chance for every designer to make an expert impact.

By combining the creativity of engineers with the speed and insight of AI, platforms like Calibre Vision AI are driving a new productivity curve in full-chip analysis. With these tools, teams don’t just keep up with complexity—they turn it into a competitive advantage.

At Siemens, the future of chip verification is already taking shape—where intelligence works hand in hand with intuition, and new ideas find their way to silicon faster than ever before. As the industry continues to push boundaries and unlock the next generation of devices, AI will help chip design reach new heights.

For more on Calibre Vision AI and how Siemens is shaping the future of chip design, visit eda.sw.siemens.com and search for Calibre Vision AI.


EPICS in IEEE Funds Record-Breaking Number of Projects

Smart braille system and air-quality tracker stand out

5 min read
MUET EPICS team and volunteers visited the project community partner at Ida Rieu School for the blind and deaf.

The EPICS in IEEE team from Pakistan’s Mehran University of Engineering and Technology and volunteers created posters describing their generative AI-powered voice interactive braille learning platform being used at the Ida Rieu School for the blind and deaf.

EPICS in IEEE

The EPICS (Engineering Projects in Community Service) in IEEE initiative had a record year in 2025, funding 48 projects involving nearly 1,000 students from 17 countries. The IEEE Educational Activities program approved the most projects this year, distributing US $290,000 in funding and engaging more students than ever before in innovative, hands-on engineering projects.

The program offers students opportunities to engage in service learning and collaborate with engineering professionals and community organizations to develop solutions that address local community challenges. The projects undertaken by IEEE groups encompass student branches, sections, society chapters, and affinity groups including Women in Engineering and Young Professionals.

EPICS in IEEE provides funding up to $10,000, along with resources and mentorship, for projects focused on four key areas of community improvement: education and outreach, environment, access and abilities, and human services.

This year, EPICS partnered with four IEEE societies and the IEEE Standards Association on 23 of the 48 approved projects. The Antennas and Propagation Society supported three, the Industry Applications Society (IAS) funded nine, the Instrumentation and Measurement Society (IMS) sponsored five, the Robotics and Automation Society supported two, the Solid State Circuits Society (SSCS) provided funding for three, and the IEEE Standards Association sponsored one.

The stories of the partner-funded projects demonstrate the impact the projects have on the students and their communities.

Matoruco agroecological garden

The IAS student branch at the Universidad Pontificia Bolivariana in Colombia worked on a project that involved water storage, automated irrigation, and waste management. The goal was to transform the Matoruco agroecological garden at the Institución Educativa Los Garzones into a more lively, sustainable, and accessible space.

These EPICS in IEEE team members from the Universidad Pontificia Bolivariana in Colombia are configuring a radio communications network that will send data to an online dashboard showing the solar power usage, pump status, and soil moisture for the Matoruco agroecological garden at the Institución Educativa Los Garzones. EPICS in IEEE

By using an irrigation automation system, electric pump control, and soil moisture monitoring, the team aimed to show how engineering concepts combine academic knowledge and practical application. The initiative uses monocrystalline solar panels for power, a programmable logic controller to automatically manage pumps and valves, soil moisture sensors for real-time data, and a LoRa One network (a proprietary radio communication system based on spread spectrum modulation) to send data to an online dashboard showing solar power usage, pump status, and soil moisture.

Los Garzones preuniversity students were taught about the irrigation system through hands-on projects, received training on organic waste management from university students, and participated in installation activities. The university team also organizes garden cleanup events to engage younger students with the community garden.

“We seek to generate a true sense of belonging by offering students and faculty a gathering place for hands-on learning and shared responsibility,” says Rafael Gustavo Ramos Noriega, the team lead and fourth-year electronics engineering student. “By integrating technical knowledge with fun activities and training sessions, we empower the community to keep the garden alive and continue improving it.

“This project has been an unmatched platform for preparing me for a professional career,” he added. “By leading everything from budget planning to the final installation, I have experienced firsthand all the stages of a real engineering project: scope definition, resource management, team coordination, troubleshooting, and delivering tangible results. All of this reinforces my goal of dedicating myself to research and development in automation and embedded systems and contributing innovation in the agricultural and environmental sectors to help more communities and make my mark.”

The project received $7,950 from IAS.

Students give a tour of the systems they installed at the Matoruco agroecological garden.

A smart braille system

More than 1.5 million individuals in Pakistan are blind, including thousands of children who face barriers to accessing essential learning resources, according to the International Agency for the Prevention of Blindness. To address the need for accessible learning tools, a student team from the Mehran University of Engineering and Technology (MUET) and the IEEE Karachi Section created BrailleGenAI: Empowering Braille Learning With Edge AI and Voice Interaction.

The interactive system for blind children combines edge artificial intelligence, generative AI, and embedded systems, says Kainat Fizzah Muhammad, a project leader and electrical engineering student at MUET. The system uses a camera to recognize tactile braille blocks and provide real-time audio feedback via text-to-speech technology. It includes gamified modules designed to support literacy, numeracy, logical reasoning, and voice recognition.

The team partnered with the Hands Welfare Foundation, a nonprofit in Pakistan that focuses on inclusive education, disability empowerment, and community development. The team collaborated with the Ida Rieu School, part of the Ida Rieu Welfare Association, which serves the visually and hearing impaired.

“These partnerships have been instrumental in helping us plan outreach activities, gather input from experts and caregivers, and prepare for usability testing across diverse environments,” says Attiya Baqai, a professor in the MUET electronic engineering department. Support from the Hands foundation ensured the solution was shaped by the real-world needs of the visually impaired community.

SSCS provided $9,155 in funding.

The student team shows how the smart braille system they developed works.

Tackling air pollution

Macedonia’s capital, Skopje, is among Europe’s most polluted cities, particularly in winter, due to thick smog caused by temperature changes, according to the World Health Organization. The WHO reports that the city’s air contains particles that can cause health issues without early warning signs—known as silent killers.

A team at Sts. Cyril and Methodius University created a system to measure and publicize local air pollution levels through its What We Breathe project. It aims to raise awareness and improve health outcomes, particularly among the city’s children.

“Our goal is to provide people with information on current pollution levels so they can make informed decisions regarding their exposure and take protective measures,” says Andrej Ilievski, an IEEE student member majoring in computer hardware engineering and electronics. “We chose to focus on schools first because children’s lungs and immune systems are still developing, making them one of our population’s most vulnerable demographics.”

The project involved 10 university students working with high schools, faculty, and the Society of Environmental Engineers of Macedonia to design and build a sensing and display tool that communicates via the Internet.

“By leading everything from budget planning to the final installation, I have experienced firsthand all the stages of a real engineering project: scope definition, resource management, team coordination, troubleshooting, and delivering tangible results.” —Rafael Gustavo Ramos Noriega

“Our sensing unit detects particulate matter, temperature, and humidity,” says project leader Josif Kjosev, an electronics professor at the university. “It then transmits that data through a Wi-Fi connection to a public server every 5 minutes, while our display unit retrieves the data from the server.”
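That loop is simple enough to sketch. The server address, field names, and sensor-reading function below are placeholders invented for illustration, not the team’s actual implementation.

```python
import time
import requests

SERVER_URL = "https://example.org/air-quality"   # placeholder, not the project's real server

def read_sensor():
    """Placeholder for the particulate, temperature, and humidity sensor drivers."""
    return {"pm2_5": 12.3, "pm10": 20.1, "temperature_c": 4.5, "humidity_pct": 68.0}

while True:
    reading = read_sensor()
    reading["timestamp"] = int(time.time())
    try:
        requests.post(SERVER_URL, json=reading, timeout=10)  # push one sample to the public server
    except requests.RequestException:
        pass  # skip this cycle if the Wi-Fi link is down; try again next time
    time.sleep(5 * 60)   # the team reports transmitting every 5 minutes
```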

“Since deploying the system,” Ilievski says, “everyone on the team has been enthusiastic about how well the project connects with their high school audience.”

The team says it hopes students will continue to work on new versions of the devices and provide them to other interested schools in the area.

“For most of my life, my academic success has been on paper,” Ilievski says. “But thanks to our EPICS in IEEE project, I finally have a real, physical object that I helped create.

“We’re grateful for the opportunity to make this project a reality and be part of something bigger.”

The project received $8,645 from the IMS.

Society partnerships count

Thanks to partnerships with IEEE societies, EPICS can provide more opportunities to students around the world. The program also includes mentors from societies and travel grants for conferences, enhancing the student experience.

The collaborations motivate students to apply technologies in the IEEE societies’ areas of interest to real-world problems, helping them improve their communities and fostering continued engagement with the society and IEEE.

You can learn how to get involved with EPICS by visiting its website.


Keys to Building an AI University: A Framework from NVIDIA

Five strategic steps to accelerate AI integration across your campus

1 min read

As artificial intelligence reshapes every industry, universities face a critical choice: lead the transformation or risk falling behind. The institutions that integrate AI across disciplines, invest in computing infrastructure, and conduct groundbreaking research will become destinations for top students, faculty, and research funding.

This industry brief provides a practical roadmap for building a comprehensive AI strategy that drives enrollment, attracts research dollars, and delivers career-ready graduates.



Diamond Blankets Will Keep Future Chips Cool

A micrometers-thick integrated layer spreads out the heat

11 min read

Summary

Today’s stunning computing power is allowing us to move from human intelligence toward artificial intelligence. And as our machines gain more power, they’re becoming not just tools but decision-makers shaping our future.

But with great power comes great…heat!

This article is part of The Hot, Hot Future of Chips.

As nanometer-scale transistors switch at gigahertz speeds, electrons race through circuits, losing energy as heat—which you feel when your laptop or your phone toasts your fingers. As we’ve crammed more and more transistors onto chips, we’ve lost the room to release that heat efficiently. Instead of the heat spreading out quickly across the silicon, which makes it much easier to remove, it builds up to form hot spots, which can be tens of degrees warmer than the rest of the chip. That extreme heat forces systems to throttle the performance of CPUs and GPUs to avoid degrading the chips.

In other words, what began as a quest for miniaturization has turned into a battle against thermal energy. This challenge extends across all electronics. In computing, high-performance processors demand ever-increasing power densities. (New Nvidia B300 GPU servers will consume nearly 15 kilowatts of power.) In communication, both digital and analog systems push transistors to deliver more power for stronger signals and faster data rates. In the power electronics used for energy conversion and distribution, efficiency gains are being countered by thermal constraints.

The ability to grow large-grained polycrystalline diamond at low temperature led to a new way to combat heat in transistors. Mohamadali Malakoutian

Rather than allowing heat to build up, what if we could spread it out right from the start, inside the chip, diluting it like a cup of boiling water dropped into a swimming pool? Spreading out the heat would lower the temperature of the most critical devices and circuits and let the other time-tested cooling technologies work more efficiently. To do that, we’d have to introduce a highly thermally conductive material inside the IC, mere nanometers from the transistors, without messing up any of their very precise and sensitive properties. Enter an unexpected material—diamond.

In some ways, diamond is ideal. It’s one of the most thermally conductive materials on the planet—many times more efficient than copper—yet it’s also electrically insulating. However, integrating it into chips is tricky: Until recently we knew how to grow it only at circuit-slagging temperatures in excess of 1,000 °C.

But my research group at Stanford University has managed what seemed impossible. We can now grow a form of diamond suitable for spreading heat, directly atop semiconductor devices at low enough temperatures that even the most delicate interconnects inside advanced chips will survive. To be clear, this isn’t the kind of diamond you see in jewelry, which is a large single crystal. Our diamonds are a polycrystalline coating no more than a couple of micrometers thick.

The potential benefits could be huge. In some of our earliest gallium-nitride radio-frequency transistors, the addition of diamond dropped the device temperature by more than 50 °C. At the lower temperature, the transistors amplified X-band radio signals five times as well as before. We think our diamond will be even more important for advanced CMOS chips. Researchers predict that upcoming chipmaking technologies could make hot spots almost 10 °C hotter [see “Future Chips Will Be Hotter Than Ever,” in this issue]. That’s probably why our research is drawing intense interest from the chip industry, including Applied Materials, Samsung, and TSMC. If our work continues to succeed as it has, heat will become a far less onerous constraint in CMOS and other electronics too.

Where Heat Begins and Ends in Chips

At the boundary between the diamond and the semiconductor, a thin layer of silicon carbide forms. It acts as a bridge for heat to flow into the diamond. Mohamadali Malakoutian

Heat starts within transistors and the interconnects that link them, as the flow of current meets resistance. That means most of it is generated near the surface of the semiconductor substrate. From there it rises either through layers of metal and insulation or through the semiconductor itself, depending on the package architecture. The heat then encounters a thermal interface material designed to spread it out before it ultimately reaches a heat sink, a radiator, or some sort of liquid cooling, where air or fluid carries the heat away.

The dominant cooling strategies today center around advances in heat sinks, fans, and radiators. In pursuit of even better cooling, researchers have explored liquid cooling using microfluidic channels and removing heat using phase-change materials. Some computer clusters go so far as to submerge the servers in thermally conductive, dielectric—electrically insulating—liquids.

These innovations are critical steps forward, but they still have limitations. Some are so expensive they’re worthwhile only for the highest-performing chips; others are simply too bulky for the job. (Your smartphone can’t carry a conventional fan.) And none are likely to be very effective as we move toward chip architectures resembling silicon skyscrapers that stack multiple layers of chips. Such 3D systems are only as viable as our ability to remove heat from every layer within them.

The big problem is that chip materials are poor heat conductors, so the heat becomes trapped and concentrated, causing the temperature to skyrocket within the chip. At higher temperatures, transistors leak more current, wasting power; they age more quickly, too.

Heat spreaders allow the heat to move laterally, diluting it and allowing the circuits to cool. But they’re positioned far—relatively, of course—from where the heat is generated, and so they’re of little help with these hot spots. We need a heat-spreading technology that can exist within nanometers of where the heat is generated. This is where our new low-temperature diamond could be essential.

How to Make Diamonds

Before my lab turned to developing diamond as a heat-spreading material, we were working on it as a semiconductor. In its single-crystal form—like the kind on your finger—it has a wide bandgap and the ability to withstand enormous electric fields. Single-crystalline diamond also offers some of the highest thermal conductivity recorded in any material, reaching 2,200 to 2,400 watts per meter per kelvin—roughly six times as conductive as copper. Polycrystalline diamond—an easier-to-make material—can approach these values when grown thick. Even in this form, it outperforms copper.
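For a back-of-the-envelope sense of what those numbers mean, take an illustrative 1-micrometer-thick film over a 100-by-100-micrometer hot spot and use roughly 400 W/m·K for copper; the through-plane thermal resistance $R = t/(\kappa A)$ of the two materials is then

$$R_{\text{diamond}} \approx \frac{10^{-6}\,\mathrm{m}}{2200\,\mathrm{W\,m^{-1}K^{-1}} \times 10^{-8}\,\mathrm{m^2}} \approx 0.05\;\mathrm{K/W}, \qquad R_{\text{copper}} \approx \frac{10^{-6}\,\mathrm{m}}{400\,\mathrm{W\,m^{-1}K^{-1}} \times 10^{-8}\,\mathrm{m^2}} = 0.25\;\mathrm{K/W}.$$

For the same watt of heat crossing that patch, the temperature rise through the diamond is roughly a fifth of that through the copper; the dimensions here were chosen only to keep the arithmetic simple.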

As attractive as diamond transistors might be, I was keenly aware—based on my experience researching gallium nitride devices—of the long road ahead. The problem is one of scale. Several companies are working to scale high-purity diamond substrates to 50, 75, and even 100 millimeters, but the diamond substrates we could acquire commercially were only about 3 mm across.

Gallium nitride high-electron-mobility transistors were an ideal test case for diamond cooling. The devices are 3D, and the critical heat-generating part, the two-dimensional electron gas, is close to the surface. Chris Philpot

So we decided instead to try growing diamond films on large silicon wafers, in the hope of moving toward commercial-scale diamond substrates. In general, this is done by reacting methane and hydrogen at high temperatures, 900 °C or more. This results in not a single crystal but a forest of narrow columns. As they grow taller, the nanocolumns coalesce into a uniform film, but by the time they form high-quality polycrystalline diamond, the film is already very thick. This thick growth adds stress to the material and often leads to cracking and other problems.

But what if we used this polycrystalline coating as a heat spreader for other devices? If we could get diamond to grow within nanometers of transistors, get it to spread heat both vertically and laterally, and integrate it seamlessly with the silicon, metal, and dielectric in chips, it might do the job.

There were good reasons to think it would work. Diamond is electrically insulating, and it has a relatively low dielectric constant. That means it makes a poor capacitor, so signals sent through diamond-encrusted interconnects might not degrade much. Thus diamond could act as a “thermal dielectric,” one that is electrically insulating but thermally conducting.

Polycrystalline diamond could help reduce temperatures inside 3D chips. Diamond thermal vias would grow inside micrometers-deep holes so heat can flow vertically from one chip to a diamond heat spreader in another chip that’s stacked atop it. Dennis Rich

For our plan to work, we were going to have to learn to grow diamond differently. We knew there wasn’t room to grow a thick film inside a chip. We also knew the narrow, spiky crystal pillars made in the first part of the growth process don’t transmit heat laterally very well, so we’d need to grow large-grained crystals from the start to get the heat moving horizontally. A third problem was that the existing diamond films didn’t form a coating on the sides of devices, which would be important for inherently 3D devices. But the biggest impediment was the high temperature needed to grow the diamond film, which would damage, if not destroy, an IC’s circuits. We were going to have to cut the growth temperature at least in half.

Just lowering the temperature doesn’t work. (We tried: You wind up, basically, with soot, which is electrically conductive—the opposite of what’s needed.) We found that adding oxygen to the mix helped, because it continuously etched away carbon deposits that weren’t diamond. And through extensive experimentation, we were able to find a formula that produced coatings of large-grained polycrystalline diamond all around devices at 400 °C, which is a survivable temperature for CMOS circuits and other devices.

Thermal Boundary Resistance

Although we had found a way to grow the right kind of diamond coatings, we faced another critical challenge—the phonon bottleneck, also known as thermal boundary resistance (TBR). Phonons are packets of heat energy, in the way that photons are packets of electromagnetic energy. Specifically, they’re a quantized version of the vibration of a crystal lattice. These phonons can pile up at the boundary between materials, resisting the flow of heat. Reducing TBR has long been a goal in thermal interface engineering, and it is often done by introducing different materials at the boundary. But semiconductors are compatible only with certain materials, limiting our choices.
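A rough, illustrative calculation shows why the bottleneck matters. The extra temperature jump across a boundary is the local heat flux multiplied by the boundary resistance,

$$\Delta T_{\text{interface}} = q'' \times \mathrm{TBR}.$$

Taking a round-number near-junction flux of $q'' = 1\ \mathrm{GW/m^2}$ and a boundary resistance of $20\ \mathrm{m^2\,K/GW}$ (values picked for illustration, not our measurements), the interface alone adds 20 °C to the hot spot, and halving the TBR recovers 10 of those degrees.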

Thermal scaffolding would link layers of heat-spreading polycrystalline diamond in one chip to those in another chip in a 3D silicon stack. The thermal pillars would traverse each chip’s interconnects and dielectric material to move heat vertically through the stack. Srabanti Chowdhury

In the end, we got lucky. While growing diamond on GaN capped with silicon nitride, we observed something unexpected: The measured TBR was much lower than prior reports led us to expect. (The low TBR was independently measured, initially by Martin Kuball at the University of Bristol, in England, and later by Samuel Graham Jr., then at Georgia Tech, who both have been coauthors and collaborators in several of our papers.)

Through further investigation of the interface science and engineering, and in collaboration with K.J. Cho at the University of Texas at Dallas, we identified the cause of the lower TBR. Intermixing at the interface between the diamond and silicon nitride led to the formation of silicon carbide, which acted as a kind of bridge for the phonons, allowing more efficient heat transfer. Though this began as a scientific discovery, its technological impact was immediate—with a silicon carbide interface, our devices exhibited significantly improved thermal performance.

GaN HEMTs: The First Test Case

We began testing our new low-TBR diamond coatings in gallium nitride high-electron-mobility transistors (HEMTs). These devices amplify RF signals by controlling current through a two-dimensional electron gas that forms within the channel. We leveraged the pioneering research on HEMTs done by Umesh Mishra’s laboratory at the University of California, Santa Barbara, where I had been a graduate student. The Mishra lab invented a particular form of the material called N-polar gallium nitride. Their N-polar GaN HEMTs demonstrate exceptional power density at high frequencies, particularly in the W-band, the 75- to 110-gigahertz part of the microwave spectrum.

What made these HEMTs such a good test case is one defining feature of the device: The gate, which controls the flow of current through the device, is within tens of nanometers of the transistor’s channel. That means that heat is generated very close to the surface of the device, and any interference our diamond coating could cause would quickly show in the device’s operation.

We introduced the diamond layer so that it surrounded the HEMT completely, even on the sides. By maintaining a growth temperature below 400 °C, we hoped to preserve core device functionality. While we did see some decline in high-frequency performance, the thermal benefits were substantial—channel temperatures dropped by a remarkable 70 °C. This breakthrough could be a transformative solution for RF systems, allowing them to operate at higher power than ever before.

Diamond in CMOS

We wondered if our diamond layer could also work in high-power CMOS chips. My colleagues at Stanford, H.-S. Philip Wong and Subhasish Mitra, have long championed 3D-stacked chip architectures. In CMOS computing chips, 3D stacking appears to be the most viable way forward to increase integration density, improve performance, and overcome the limitations of traditional transistor scaling. It’s already used in some advanced AI chips, such as AMD’s MI300 series. And it’s established in the high-bandwidth memory chips that pump data through Nvidia GPUs and other AI processors. The multiple layers of silicon in these 3D stacks are mostly connected by microscopic balls of solder, or in some advanced cases just by their copper terminals. Getting signals and power out of these stacks requires vertical copper links that burrow through the silicon to reach the chip package’s substrate.

In one of our discussions, Mitra pointed out that a critical issue with 3D-stacked chips is the thermal bottlenecks that form within the stack. In 3D architectures, the traditional heat sinks and other techniques used for 2D chips aren’t sufficient. Extracting heat from each layer is essential.

Our research could redefine thermal management across industries.

Our experiments on thermal boundary resistance in GaN suggested a similar approach would work in silicon. And when we integrated diamond with silicon, the results were remarkable: An interlayer of silicon carbide formed, leading to diamond with an excellent thermal interface.

Our effort introduced the concept of thermal scaffolding. In that scheme, nanometers-thick layers of polycrystalline diamond would be integrated within the dielectric layers above the transistors to spread heat. These layers would then be connected by vertical heat conductors, called thermal pillars, made of copper or more diamond. These pillars would connect to another heat spreader, which in turn would link to thermal pillars on the next chip in the 3D stack, and so on until the heat reached the heat sink or other cooling device.
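A toy calculation, with made-up dimensions, shows why routing heat through diamond and pillars changes the arithmetic: compare a path that must cross 2 micrometers of ordinary back-end dielectric with one in which thermal pillars covering just a tenth of the footprint bridge the same span.

```python
# Toy comparison for one tier of a 3D stack; every value here is assumed.
def r_slab(thickness_m, k_w_per_m_k, area_m2):
    return thickness_m / (k_w_per_m_k * area_m2)  # thermal resistance, K/W

area = 1e-6          # 1-square-millimeter hot-spot footprint, assumed
span = 2e-6          # dielectric thickness between tiers, assumed
k_oxide = 1.4        # silicon-dioxide-like dielectric, W/m-K
k_diamond = 1500.0   # thin polycrystalline diamond film, assumed

# Path A: heat crosses the low-conductivity dielectric over the full footprint.
r_dielectric = r_slab(span, k_oxide, area)

# Path B: diamond thermal pillars occupying 10 percent of the footprint
# bridge the same span and feed the next tier's heat spreader.
r_pillars = r_slab(span, k_diamond, 0.1 * area)

print(f"dielectric-only path: {r_dielectric:.2f} K/W")
print(f"diamond-pillar path:  {r_pillars:.3f} K/W")
```

In this cartoon the pillars cut the vertical resistance by roughly a factor of 100, which is the sort of leverage that makes per-tier heat extraction plausible.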

The more tiers of computing silicon in a 3D chip, the bigger difference thermal scaffolding makes. An AI accelerator with more than five tiers would well exceed typical temperature limits unless the scaffolding was employed. Srabanti Chowdhury

In a collaboration with Mitra, we used simulations of heat generated by real computational workloads to operate a proof-of-concept structure. This structure consisted of dummy heaters to mimic hot spots in a two-chip stack, along with diamond heat spreaders and copper thermal pillars. Using this, we reduced the temperature to one-tenth of its value without the scaffolding.

There are hurdles still to overcome. In particular, we still have to figure out a way to make the top of our diamond coatings atomically flat. But, in collaboration with industry partners and researchers, we are systematically studying that problem and other scientific and technological issues. We and our partners think this research could offer a disruptive new path for thermal management and a crucial step toward sustaining high-performance computing into the future.

Developing Diamond Thermal Solutions

We now intend to move toward industry integration. For example, we’re working with the Defense Advanced Research Projects Agency Threads program, which aims to use device-level thermal management to develop highly efficient and reliable X-band power amplifiers with a power density 6 to 8 times that of today’s devices. The program, which was conceived and initially run by Tom Kazior, is a critical platform for validating the use of low-temperature diamond integration in GaN HEMT manufacturing. It’s enabled us to collaborate closely with industry teams while protecting both our and our partners’ processes. Defense applications demand exceptional reliability, and our diamond-integrated HEMTs are undergoing rigorous testing with industry partners. The early results are promising, guiding refinements in growth processes and integration techniques that we’ll make with our partners over the next two years.

But our vision extends beyond GaN HEMTs to other materials and particularly silicon computational chips. For the latter, we have an established collaboration with TSMC, and we’re expanding on newer opportunities with Applied Materials, Micron, Samsung, and others through the Stanford SystemX Alliance and the Semiconductor Research Corp. This is an extraordinary level of collaboration among otherwise fierce competitors. But then, heat is a universal challenge in chip manufacturing, and everyone is motivated to find the best solutions.

If successful, our research could redefine thermal management across industries. In my work on gallium nitride devices, I have seen firsthand how once-radical ideas like this one become industry standards, and I believe diamond-based heat extraction will follow the same trajectory, becoming a critical enabler for a generation of electronics that is no longer hindered by heat.

This article appears in the November 2025 print issue as “Diamond Blankets Will Chill Future Chips.”


Teaching AI to Predict What Cells Will Look Like Before Running Any Experiments

This powerful generative AI tool could accelerate drug discovery

5 min read

This is a sponsored article brought to you by MBZUAI.

If you’ve ever tried to guess how a cell will change shape after a drug or a gene edit, you know it’s part science, part art, and mostly expensive trial-and-error. Imaging thousands of conditions is slow; exploring millions is impossible.

A new paper in Nature Communications proposes a different route: simulate those cellular “after” images directly from molecular readouts, so you can preview the morphology before you pick up a pipette. The team calls their model MorphDiff, and it’s a diffusion model guided by the transcriptome, the pattern of genes turned up or down after a perturbation.

At a high level, the idea flips a familiar workflow. High-throughput imaging is a proven way to discover a compound’s mechanism or spot bioactivity, but profiling every candidate drug or CRISPR target isn’t feasible. MorphDiff learns from cases where both gene expression and cell morphology are known, then uses only the L1000 gene-expression profile as a condition to generate realistic post-perturbation images, either from scratch or by transforming a control image into its perturbed counterpart. The claim is that the generated images achieve competitive fidelity on held-out (unseen) perturbations across large drug and genetic datasets, and that their gains on mechanism-of-action (MOA) retrieval can rival those of real images.


This research, led by MBZUAI researchers, starts from a biological observation: gene expression ultimately drives proteins and pathways that shape what a cell looks like under the microscope. The mapping isn’t one-to-one, but there’s enough shared signal for learning. Conditioning on the transcriptome offers a practical bonus too: there’s simply far more publicly accessible L1000 data than paired morphology, making it easier to cover a wide swath of perturbation space. In other words, when a new compound arrives, you’re likely to find its gene signature, which MorphDiff can then leverage.

Under the hood, MorphDiff blends two pieces. First, a Morphology Variational Autoencoder (MVAE) compresses five-channel microscope images into a compact latent space and learns to reconstruct them with high perceptual fidelity. Second, a Latent Diffusion Model learns to denoise samples in that latent space, steering each denoising step with the L1000 vector via attention.

Wang et al., Nature Communications (2025), CC BY 4.0

Diffusion is a good fit here: it’s intrinsically robust to noise, and the latent space variant is efficient enough to train while preserving image detail. The team implements both gene-to-image (G2I) generation (start from noise, condition on the transcriptome) and image-to-image (I2I) transformation (push a control image toward its perturbed state using the same transcriptomic condition). The latter requires no retraining thanks to an SDEdit-style procedure, which is handy when you want to explain changes relative to a control.
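To make the image-to-image path concrete, here is a heavily simplified NumPy sketch of an SDEdit-style transform, not the authors’ code: partially noise the control image’s latent, then denoise the rest of the way while conditioning on the gene-expression vector. The `denoise_step` argument stands in for the trained, transcriptome-conditioned network and is purely a placeholder.

```python
import numpy as np

def sdedit_style_transform(control_latent, l1000_vector, denoise_step,
                           num_steps=1000, start_frac=0.6, seed=0):
    """Push a control-image latent toward its predicted perturbed state.

    control_latent : latent encoding of the unperturbed (control) image
    l1000_vector   : gene-expression profile used as the condition
    denoise_step   : placeholder for the conditional denoiser, called as
                     denoise_step(latent, t, l1000_vector)
    start_frac     : how far into the noise schedule to start; 0 keeps the
                     control image, 1 ignores it and generates from pure noise
    """
    rng = np.random.default_rng(seed)
    t_start = int(start_frac * num_steps)

    # Step 1: add noise up to an intermediate timestep so that some structure
    # of the control image survives in the latent.
    alpha = 1.0 - t_start / num_steps            # toy linear schedule, assumed
    noise = rng.standard_normal(control_latent.shape)
    latent = np.sqrt(alpha) * control_latent + np.sqrt(1.0 - alpha) * noise

    # Step 2: denoise back to t = 0, steering every step with the transcriptome.
    for t in reversed(range(t_start)):
        latent = denoise_step(latent, t, l1000_vector)
    return latent
```

Because the edit starts from a partially noised control latent rather than pure noise, the output stays anchored to the original cells, which is what makes changes relative to control easy to read off.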

It’s one thing to generate photogenic pictures; it’s another to generate biologically faithful ones. The paper leans into both: on the generative side, MorphDiff is benchmarked against GAN and diffusion baselines using standard metrics like FID, Inception Score, coverage, density, and a CLIP-based CMMD. Across JUMP (genetic) and CDRP/LINCS (drug) test splits, MorphDiff’s two modes typically land first and second, with significance tests run across multiple random seeds or independent control plates. The result is consistent: better fidelity and diversity, especially on out-of-distribution (OOD) perturbations, where the practical value lies.

The bigger picture is that generative AI has finally reached a fidelity level where in-silico microscopy can stand in for first-pass experiments.

More interesting for biologists, the authors step beyond image aesthetics to morphology features. They extract hundreds of CellProfiler features (textures, intensities, granularity, cross-channel correlations) and ask whether the generated distributions match the ground truth.

In side-by-side comparisons, MorphDiff’s feature clouds line up with real data more closely than baselines like IMPA. Statistical tests show that over 70 percent of generated feature distributions are indistinguishable from real ones, and feature-wise scatter plots show the model correctly captures differences from control on the most perturbed features. Crucially, the model also preserves correlation structure between gene expression and morphology features, with higher agreement to ground truth than prior methods, evidence that it’s modeling more than surface style.
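One plausible way to run that sort of per-feature check, though not necessarily the authors’ exact statistical procedure, is a two-sample Kolmogorov–Smirnov test on each CellProfiler feature, counting how many features cannot be told apart between real and generated cells:

```python
import numpy as np
from scipy.stats import ks_2samp

def fraction_indistinguishable(real_features, generated_features, alpha=0.05):
    """Both inputs are arrays of shape (cells, features). Returns the fraction
    of features whose real and generated distributions are not significantly
    different at level alpha."""
    n_features = real_features.shape[1]
    matches = 0
    for j in range(n_features):
        _, p_value = ks_2samp(real_features[:, j], generated_features[:, j])
        if p_value > alpha:          # fail to reject: distributions look alike
            matches += 1
    return matches / n_features

# Toy usage with random data standing in for CellProfiler feature tables.
rng = np.random.default_rng(1)
real = rng.normal(size=(500, 300))
generated = rng.normal(size=(500, 300))
print(f"{fraction_indistinguishable(real, generated):.0%} of features match")
```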

Wang et al., Nature Communications (2025), CC BY 4.0

The drug results scale up that story to thousands of treatments. Using DeepProfiler embeddings as a compact morphology fingerprint, the team demonstrates that MorphDiff’s generated profiles are discriminative: classifiers trained on real embeddings also separate generated ones by perturbation, and pairwise distances between drug effects are preserved.

Wang et al., Nature Communications (2025), CC BY 4.0

That matters for the downstream task everyone cares about: MOA retrieval. Given a query profile, can you find reference drugs with the same mechanism? MorphDiff’s generated morphologies not only beat prior image-generation baselines but also outperform retrieval using gene expression alone, and they approach the accuracy you get using real images. In top-k retrieval experiments, the average improvement is 16.9 percent over the strongest baseline and 8.0 percent over transcriptome-only retrieval, with robustness shown across several values of k and metrics like mean average precision and folds-of-enrichment. That’s a strong signal that simulated morphology carries information complementary to chemical structure and transcriptomics, enough to help find look-alike mechanisms even when the molecules themselves look nothing alike.
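A minimal sketch of how such a top-k retrieval evaluation can be scored (the embeddings, labels, and data here are placeholders, not the paper’s pipeline): rank reference drugs by cosine similarity to each query profile and ask whether any of the top k share the query’s annotated mechanism.

```python
import numpy as np

def topk_moa_hit_rate(query_emb, query_moa, ref_emb, ref_moa, k=10):
    """query_emb: (n_queries, d) morphology embeddings; ref_emb: (n_refs, d).
    query_moa / ref_moa: mechanism-of-action labels, one per profile.
    Returns the fraction of queries with at least one same-MOA reference drug
    among the k most cosine-similar reference profiles."""
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    r = ref_emb / np.linalg.norm(ref_emb, axis=1, keepdims=True)
    similarities = q @ r.T                       # (n_queries, n_refs)
    hits = 0
    for i in range(len(q)):
        top_k = np.argsort(-similarities[i])[:k]
        if any(ref_moa[j] == query_moa[i] for j in top_k):
            hits += 1
    return hits / len(q)
```

Swapping real images, generated images, or transcriptome profiles into the query embeddings is what allows the kind of head-to-head comparison the paper reports.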

MorphDiff’s generated morphologies not only beat prior image-generation baselines but also outperform retrieval using gene expression alone, and they approach the accuracy you get using real images.

The paper also lists some current limitations that hint at potential future improvements. Inference with diffusion remains relatively slow; the authors suggest plugging in newer samplers to speed generation. Time and concentration (two factors that biologists care about) aren’t explicitly encoded due to data constraints; the architecture could take them as additional conditions when matched datasets become available. And because MorphDiff depends on perturbed gene expression as input, it can’t conjure morphology for perturbations that lack transcriptome measurements; a natural extension is to chain with models that predict gene expression for unseen drugs (the paper cites GEARS as an example). Finally, generalization inevitably weakens as you stray far from the training distribution; larger, better-matched multimodal datasets will help, as will conditioning on more modalities such as structures, text descriptions, or chromatin accessibility.

What does this mean in practice? Imagine a screening team with a large L1000 library but a smaller imaging budget. MorphDiff becomes a phenotypic copilot: generate predicted morphologies for new compounds, cluster them by similarity to known mechanisms, and prioritize which to image for confirmation. Because the model also surfaces interpretable feature shifts, researchers can peek under the hood. Did ER texture and mitochondrial intensity move the way we’d expect for an EGFR inhibitor? Did two structurally unrelated molecules land in the same phenotypic neighborhood? Those are the kinds of hypotheses that accelerate mechanism hunting and repurposing.

The bigger picture is that generative AI has finally reached a fidelity level where in-silico microscopy can stand in for first-pass experiments. We’ve already seen text-to-image models explode in consumer domains; here, a transcriptome-to-morphology model shows that the same diffusion machinery can do scientifically useful work such as capturing subtle, multi-channel phenotypes and preserving the relationships that make those images more than eye candy. It won’t replace the microscope. But if it reduces the number of plates you have to run to find what matters, that’s time and money you can spend validating the hits that count.


EPICS in IEEE Funds Record-Breaking Number of Projects

Smart braille system and air-quality tracker stand out

5 min read
MUET EPICS team and volunteers visited the project community partner at Ida Rieu School for the blind and deaf.

The EPICS in IEEE team from Pakistan’s Mehran University of Engineering and Technology and volunteers created posters describing their generative AI-powered voice interactive braille learning platform being used at the Ida Rieu School for the blind and deaf.

EPICS in IEEE

The EPICS (Engineering Projects in Community Service) in IEEE initiative had a record year in 2025, funding 48 projects involving nearly 1,000 students from 17 countries. The IEEE Educational Activities program approved more projects this year than in any previous year, distributing US $290,000 in funding and engaging more students than ever before in innovative, hands-on engineering work.

The program offers students opportunities to engage in service learning and to collaborate with engineering professionals and community organizations to develop solutions that address local community challenges. The projects are undertaken by IEEE groups spanning student branches, sections, society chapters, and affinity groups, including Women in Engineering and Young Professionals.

EPICS in IEEE provides funding up to $10,000, along with resources and mentorship, for projects focused on four key areas of community improvement: education and outreach, environment, access and abilities, and human services.

This year, EPICS partnered with five IEEE societies and the IEEE Standards Association on 23 of the 48 approved projects. The Antennas and Propagation Society supported three, the Industry Applications Society (IAS) funded nine, the Instrumentation and Measurement Society (IMS) sponsored five, the Robotics and Automation Society supported two, the Solid State Circuits Society (SSCS) provided funding for three, and the IEEE Standards Association sponsored one.

The stories of the partner-funded projects demonstrate the impact the projects have on the students and their communities.

Matoruco agroecological garden

The IAS student branch at the Universidad Pontificia Bolivariana in Colombia worked on a project that involved water storage, automated irrigation, and waste management. The goal was to transform the Matoruco agroecological garden at the Institución Educativa Los Garzones into a more lively, sustainable, and accessible space.

These EPICS in IEEE team members from the Universidad Pontificia Bolivariana in Colombia are configuring a radio communications network that will send data to an online dashboard showing the solar power usage, pump status, and soil moisture for the Matoruco agroecological garden at the Institución Educativa Los Garzones. EPICS in IEEE

By using an irrigation automation system, electric pump control, and soil moisture monitoring, the team aimed to show how engineering concepts combine academic knowledge and practical application. The initiative uses monocrystalline solar panels for power, a programmable logic controller to automatically manage pumps and valves, soil moisture sensors for real-time data, and a LoRa One network (a proprietary radio communication system based on spread spectrum modulation) to send data to an online dashboard showing solar power usage, pump status, and soil moisture.
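The control loop itself is simple. The sketch below expresses it in Python rather than the team’s PLC program, with made-up moisture thresholds and a simulated sensor standing in for the real hardware:

```python
import random
import time

MOISTURE_ON = 30   # percent; below this, start watering (illustrative threshold)
MOISTURE_OFF = 45  # percent; above this, stop (illustrative threshold)

def read_soil_moisture():
    # Stand-in for the soil-moisture sensor wired to the controller.
    return random.uniform(20, 60)

pump_running = False
for _ in range(10):                       # a few loop iterations for demonstration
    moisture = read_soil_moisture()
    if moisture < MOISTURE_ON and not pump_running:
        pump_running = True               # controller opens the valve and starts the pump
    elif moisture > MOISTURE_OFF and pump_running:
        pump_running = False
    # In the deployed garden, this state is also sent over the LoRa link
    # to the online dashboard alongside the solar-power readings.
    print(f"moisture={moisture:.0f}%  pump={'ON' if pump_running else 'OFF'}")
    time.sleep(1)
```

The two-threshold (hysteresis) logic keeps the pump from rapidly switching on and off when the soil moisture hovers near a single set point.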

Los Garzones preuniversity students were taught about the irrigation system through hands-on projects, received training on organic waste management from university students, and participated in installation activities. The university team also organizes garden cleanup events to engage younger students with the community garden.

“We seek to generate a true sense of belonging by offering students and faculty a gathering place for hands-on learning and shared responsibility,” says Rafael Gustavo Ramos Noriega, the team lead and fourth-year electronics engineering student. “By integrating technical knowledge with fun activities and training sessions, we empower the community to keep the garden alive and continue improving it.

“This project has been an unmatched platform for preparing me for a professional career,” he added. “By leading everything from budget planning to the final installation, I have experienced firsthand all the stages of a real engineering project: scope definition, resource management, team coordination, troubleshooting, and delivering tangible results. All of this reinforces my goal of dedicating myself to research and development in automation and embedded systems and contributing innovation in the agricultural and environmental sectors to help more communities and make my mark.”

The project received $7,950 from IAS.

Students give a tour of the systems they installed at the Matoruco agroecological garden.

A smart braille system

More than 1.5 million individuals in Pakistan are blind, including thousands of children who face barriers to accessing essential learning resources, according to the International Agency for the Prevention of Blindness. To address the need for accessible learning tools, a student team from the Mehran University of Engineering and Technology (MUET) and the IEEE Karachi Section created BrailleGenAI: Empowering Braille Learning With Edge AI and Voice Interaction.

The interactive system for blind children combines edge artificial intelligence, generative AI, and embedded systems, says Kainat Fizzah Muhammad, a project leader and electrical engineering student at MUET. The system uses a camera to recognize tactile braille blocks and provide real-time audio feedback via text-to-speech technology. It includes gamified modules designed to support literacy, numeracy, logical reasoning, and voice recognition.
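In outline, the system is a loop of capture, recognize, and speak. The sketch below shows that shape in Python with stand-in functions; the team’s actual edge-AI model, camera interface, and speech engine are not reproduced here.

```python
def recognize_braille_block(frame):
    # Stand-in for the on-device model that maps a camera frame of a tactile
    # braille block to a character; here each "frame" is already a letter.
    return frame

def speak(text):
    # Stand-in for the text-to-speech engine that provides audio feedback.
    print(f"[speaker] {text}")

def spelling_game(frames, expected_word):
    """Toy version of a gamified literacy exercise: read each block aloud,
    then tell the learner whether the assembled word is correct."""
    letters = [recognize_braille_block(f) for f in frames]
    for letter in letters:
        speak(letter)                     # immediate per-block feedback
    word = "".join(letters)
    speak("Correct!" if word == expected_word else f"You spelled {word}. Try again.")

spelling_game(list("cat"), "cat")
```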

The team partnered with the Hands Welfare Foundation, a nonprofit in Pakistan that focuses on inclusive education, disability empowerment, and community development. The team collaborated with the Ida Rieu School, part of the Ida Rieu Welfare Association, which serves the visually and hearing impaired.

“These partnerships have been instrumental in helping us plan outreach activities, gather input from experts and caregivers, and prepare for usability testing across diverse environments,” says Attiya Baqai, a professor in the MUET electronic engineering department. Support from the Hands foundation ensured the solution was shaped by the real-world needs of the visually impaired community.

SSCS provided $9,155 in funding.

The student team shows how the smart braille system they developed works.

Tackling air pollution

North Macedonia’s capital, Skopje, is among Europe’s most polluted cities, particularly in winter, due to thick smog caused by temperature inversions, according to the World Health Organization. The WHO reports that the city’s air contains particles, known as silent killers, that can cause health issues without early warning signs.

A team at Sts. Cyril and Methodius University created a system to measure and publicize local air pollution levels through its What We Breathe project. It aims to raise awareness and improve health outcomes, particularly among the city’s children.

“Our goal is to provide people with information on current pollution levels so they can make informed decisions regarding their exposure and take protective measures,” says Andrej Ilievski, an IEEE student member majoring in computer hardware engineering and electronics. “We chose to focus on schools first because children’s lungs and immune systems are still developing, making them one of our population’s most vulnerable demographics.”

The project involved 10 university students working with high schools, faculty, and the Society of Environmental Engineers of Macedonia to design and build a sensing and display tool that communicates via the Internet.

“By leading everything from budget planning to the final installation, I have experienced firsthand all the stages of a real engineering project: scope definition, resource management, team coordination, troubleshooting, and delivering tangible results.” —Rafael Gustavo Ramos Noriega

“Our sensing unit detects particulate matter, temperature, and humidity,” says project leader Josif Kjosev, an electronics professor at the university. “It then transmits that data through a Wi-Fi connection to a public server every 5 minutes, while our display unit retrieves the data from the server.”
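As a rough illustration of that data path, here is a Python sketch using the common `requests` library, with a made-up server address and field names; the project’s actual firmware and server interface may differ.

```python
import time
import requests

SERVER_URL = "https://example.org/api/readings"   # placeholder endpoint

def read_sensors():
    # Stand-in for the particulate-matter, temperature, and humidity sensors.
    return {"pm2_5": 34.0, "pm10": 51.0, "temp_c": 4.2, "humidity": 78.0}

while True:
    reading = read_sensors()
    reading["timestamp"] = int(time.time())
    # The display units later fetch these readings back from the public server.
    requests.post(SERVER_URL, json=reading, timeout=10)
    time.sleep(5 * 60)   # one transmission every 5 minutes, as described above
```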

“Since deploying the system,” Ilievski says, “everyone on the team has been enthusiastic about how well the project connects with their high school audience.”

The team says it hopes students will continue to work on new versions of the devices and provide them to other interested schools in the area.

“For most of my life, my academic success has been on paper,” Ilievski says. “But thanks to our EPICS in IEEE project, I finally have a real, physical object that I helped create.

“We’re grateful for the opportunity to make this project a reality and be part of something bigger.”

The project received $8,645 from the IMS.

Society partnerships count

Thanks to partnerships with IEEE societies, EPICS can provide more opportunities to students around the world. The program also includes mentors from societies and travel grants for conferences, enhancing the student experience.

The collaborations motivate students to apply technologies in the IEEE societies’ areas of interest to real-world problems, helping them improve their communities and fostering continued engagement with the society and IEEE.

You can learn how to get involved with EPICS by visiting its website.


Advanced Connector Technology Meets Demanding Requirements of Portable Medical Devices

Connectivity challenges in high-impact environments

1 min read

Healthcare is rapidly evolving with a growing reliance on portable medical devices in both clinical and home-care environments. These devices—used for diagnostics, monitoring, and life-support functions like ventilators—improve accessibility and outcomes by enabling continuous monitoring and timely interventions. However, their mobility and usage in high-impact environments demand rugged, compact, and high-speed components, particularly reliable internal connectors that can withstand shock, vibration, and physical stress.

This white paper highlights how the growth of portable and in-home medical devices has pushed the need for miniaturized, high-performance connectors. It explores how connector technology must balance reduced size, high data speeds, rugged durability, and simplified assembly to support modern healthcare demands.


How to Measure Nothing Better

Atomic sensors could support big science, semiconductors, and more

8 min read

There’s no such thing as a complete vacuum. Even in the cosmic void between galaxies, there’s an estimated density of about one hydrogen or helium atom per cubic meter. But these estimates are largely theoretical—no one has yet launched a sensor into intergalactic space and beamed back the result. On top of that, we have no means of measuring vacuums that low.

At least, not yet.


This Machine Finds Defects Hiding Deep Inside Microchips

How advanced defect detection is enabling the next wave of chip innovation

7 min read
Equipment featuring CFE technology and AI Image Recognition from Applied Materials.

Applied Materials’ SEMVision H20 system combines the industry’s most sensitive eBeam system with cold field emission (CFE) technology and advanced AI image recognition to enable better and faster analysis of buried nanoscale defects in the world’s most advanced chips.

Applied Materials

This is a sponsored article brought to you by Applied Materials.

The semiconductor industry is in the midst of a transformative era as it bumps up against the physical limits of making faster and more efficient microchips. As we progress toward the “angstrom era,” where chip features are measured in mere atoms, the challenges of manufacturing have reached unprecedented levels. Today’s most advanced chips, such as those at the 2nm node and beyond, are demanding innovations not only in design but also in the tools and processes used to create them.


How Shiitake Mushrooms Can Remember Electrical States

Yes, you might want to plug a mushroom into your circuit

4 min read
Conceptual collage of memristor symbols filled with the textures of mushrooms, honey, and blood.
Source images: iStock

From the honey in your tea to the blood in your veins, materials all around you have a hidden talent. Some of these substances, when engineered in specific ways, can act as memristors—electrical components that can “remember” past states.

Memristors are often used in chips that both perform computations and store data. They are devices that store data as particular levels of resistance. Today, they are constructed as a thin layer of titanium dioxide or similar dielectric material sandwiched between two metal electrodes. Applying enough voltage to the device causes tiny regions in the dielectric layer—where oxygen atoms are missing—to form filaments that bridge the electrodes or otherwise move in a way that makes the layer more conductive. Reversing the voltage undoes the process. Thus, the process essentially gives the memristor a memory of past electrical activity.
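To see how a memory of past electrical activity can emerge from that picture, here is a toy simulation in Python using the simple linear ion-drift model often used to introduce memristors; the parameter values are illustrative and it does not describe any particular titanium dioxide device, let alone a mushroom.

```python
import numpy as np

# Linear ion-drift model: a state variable w (the extent of the conductive
# filament region) moves with the current, and resistance depends on w.
R_ON, R_OFF = 100.0, 16_000.0   # ohms in the fully on / fully off states (assumed)
D, MU = 10e-9, 1e-13            # film thickness (m) and ion mobility (m^2/(V*s)), assumed

dt = 1e-5
t = np.arange(0.0, 0.1, dt)
v = 1.2 * np.sin(2 * np.pi * 50 * t)        # sinusoidal drive voltage, assumed

w = 0.5 * D                                 # start with a half-formed filament
resistance = np.empty_like(t)
for k, vk in enumerate(v):
    r = R_ON * (w / D) + R_OFF * (1.0 - w / D)
    i = vk / r
    w = np.clip(w + MU * (R_ON / D) * i * dt, 0.0, D)  # state integrates the current
    resistance[k] = r

print(f"resistance swings between {resistance.min():.0f} and {resistance.max():.0f} ohms")
```

Because the state variable integrates the current rather than tracking the instantaneous voltage, the device’s resistance at any moment depends on what was applied to it before, which is exactly the memory effect researchers are looking for in unconventional materials.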


Configuring and Controlling Complex Test Equipment Setups for Silicon Device Test and Characterization

Reducing signal path complexity with multiple integrated instruments

1 min read

In this webinar, we will explore efficient, accurate, and scalable techniques for analog and mixed-signal device testing using reconfigurable test setups. As semiconductor devices grow more complex, engineers face the challenge of validating performance and catching edge cases under tight schedules. Test setups often include oscilloscopes, waveform generators, network analyzers, and more, potentially from different vendors with unique automation and configuration considerations. In order to keep pace with semiconductor validation requirements, multi-channel test setups designed for flexibility and performance can help engineers scale effectively.

Register now for this free webinar!