Building and performing at the embedded edge with SiMa.ai

Jake Flomenberg

Startup investing, especially in seed-stage companies, is a long game: you’re betting on an idea and a promise that are years from fruition, and on a team and product in their earliest stages.

When we met Krishna Rangasayee of SiMa.ai in 2019, we chose to invest in his vision to make SiMa the trusted computer vision partner for the embedded edge market. We saw the company’s MLSoC solution as an opportunity to greatly expand what’s possible at the edge and to deliver significant runtime improvements in power-constrained environments.

Computer vision has already improved performance in existing applications such as manufacturing inspection, security and surveillance, and robotics. It has also begun to enable applications that were not possible in the past, like semi-autonomous vehicles. SiMa aims to be a key enabler of both.

Today, we’re so excited to see that vision take a massive step forward, and to congratulate the SiMa.ai team as they mark two major milestones in their journey to revolutionize machine learning at the edge: a major product launch, the no-code tool Palette Edgematic, which enables a pushbutton ML experience; and their groundbreaking performance in the Closed Edge category of the MLCommons® MLPerf 3.1 benchmark, where they outperformed NVIDIA by over 85 percent in frames per second per watt.

Right now, deploying and optimizing computer vision models at the edge is genuinely difficult. With Palette Edgematic, teams can develop a computer vision model and deploy it with minimal code and few iterations, doing in minutes what previously took months.

Perhaps even more significantly, though, we were thrilled to see the SiMa team’s performance in the MLCommons MLPerf 3.1 benchmark. Outside the data center, frames per second per watt is what really matters; it’s the emerging performance standard for edge AI and ML. The fact that SiMa beats market leader NVIDIA by over 85 percent on that metric is a major win.

That said, I believe there is even more here than meets the eye. NVIDIA spends significant engineering resources optimizing its results for these kinds of benchmarks. In live computer vision applications, some teams do see performance numbers like these, but many see significantly worse. In other words, this result is the best case for what NVIDIA can do: an impressive high-water mark, but one that is hard to reach in the real world.

SiMa, by contrast, has focused on foundational improvements that optimize code execution and resource allocation. These improvements dramatically increase performance for any type of ML network, not just ResNet50, so I think of this result as reflecting a typical, or only slightly above typical, level of effort from SiMa. That means real-world applications should see, and are seeing, results much closer to these benchmarks. From my vantage point, the gap in the real world is even wider than it is in the lab.

Congratulations to the SiMa team on these two landmark occasions—we’re just getting started, and can’t wait to see what comes next from this incredible team.
