The expansion of artificial intelligence into new application spaces is creating problems at the board and chip level, with over 90% of the power consumed by AI workloads going to data movement. Shortening data paths by moving the compute element closer to where the data is stored can significantly reduce power consumption, while also enabling an unprecedented increase in compute density.
In this podcast, we talk to Robert Beachler, Vice President of Product at Untether AI, a company launched to address a major compute and efficiency bottleneck: memory access and data management. The company’s at-memory compute architecture takes on this bottleneck directly, significantly improving performance while reducing power consumption.