Source: http://www.sst.com/membrain-products

memBrain™ Products | SST - Silicon Storage Technology

memBrain™ Products

As artificial intelligence (AI) processing moves from the cloud to the edge of the network, battery-powered and deeply embedded devices are challenged to perform AI functions such as video and voice recognition. Deep Neural Networks (DNNs) used in AI applications require a vast number of Multiply-Accumulate (MAC) operations over stored weight values, and those weights must be kept in local storage for further processing. This huge amount of data cannot fit into the on-board memory of a stand-alone digital edge processor.

Based on SuperFlash® technology and optimized to manage Vector Matrix Multiplication (VMM) for neural network inference, our memBrain™ neuromorphic memory product improves system architecture implementation of VMM through an analog compute-in-memory approach, enhancing AI inference at the edge. Current neural network models may require 50M or more weights for processing. The memBrain neuromorphic memory product stores synaptic weights inside the floating gate, offering significant system latency improvements such as eliminating the system bus delays incurred when fetching weights from off-chip DRAM. Compared to traditional digital DSP and SRAM/DRAM-based approaches, it delivers a 10- to 20-fold power reduction and significantly lower cost with improved inference frame latency.
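To make the scale concrete, a minimal sketch of the VMM that dominates inference: each output is a sum of MAC operations, one MAC per stored weight per pass, so a 50M-weight model implies tens of millions of MACs per inference. The `vmm` helper and the toy layer below are illustrative, not part of the memBrain product.

```python
# Illustrative sketch: the vector-matrix multiplication (VMM) at the heart
# of DNN inference, written as explicit multiply-accumulate (MAC) steps.
# One MAC is performed per stored weight per inference pass.

def vmm(weights, x):
    """y = W @ x, computed MAC by MAC (each row of `weights` is one neuron)."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in weights]

# A toy 3-input, 2-output layer: 6 weights -> 6 MACs per inference.
W = [[0.5, -1.0, 0.25],
     [1.0,  0.0, -0.5]]
x = [2.0, 1.0, 4.0]
print(vmm(W, x))  # [1.0, 0.0]
```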

memBrain™ Multiply-Accumulate Operations (MACs)

DNNs require vast numbers of Multiply-Accumulate operations (MACs). memBrain™ solves this problem by storing the weights in eFlash and using analog cell operation to perform the MAC operations inside the storage array.

[Figure: MAC circuit diagram (185313-MAC_Circuit-Diagram.jpg)]
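A hedged conceptual model of how an analog array can perform MACs in place (this is a generic compute-in-memory picture, not SST's actual circuit): each cell's stored weight behaves like a conductance G, the input arrives as a voltage V, Ohm's law (I = G·V) does the multiplication, and the currents of all cells on a shared line sum naturally, providing the accumulate.

```python
# Conceptual analog compute-in-memory model (assumption, not SST's circuit):
# multiplication via Ohm's law per cell, accumulation via current summing
# on the shared line (Kirchhoff's current law).

def line_current(conductances, voltages):
    """Total current on one line: one MAC per cell, summed 'for free' in analog."""
    return sum(g * v for g, v in zip(conductances, voltages))

G = [0.5, 1.5, 2.0]   # stored weights, modeled as cell conductances
V = [1.0, 0.2, 0.5]   # inputs, modeled as applied voltages
print(line_current(G, V))  # 1.8
```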

memBrain™ Tile

- Multiplication happens through cell operation characteristics
- Summation happens along the word or bit line, depending on configuration
- Multiple "tiles" can be connected to support a large neural system
- Example tile full-frame cycle time: 10-30 µs, depending on D-to-A and A-to-D power
- Energy: 0.3 pJ per MAC with D/A + A/D at a 30 µs frame cycle time
- Area with D-to-A input and A-to-D output blocks: 0.48 mm² on 40 nm (512×512 tile available; other tiles in development)

[Figure: memBrain tile diagram (185759-memBrain-Tile.jpg)]
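A back-of-envelope sketch of how the quoted figures compose: a large weight matrix is split across 512×512 tiles, and energy scales with MAC count at the quoted 0.3 pJ per MAC. The tile size and energy figure come from the page; the helper functions and the example layer dimensions are our own assumptions.

```python
# Back-of-envelope sketch using the page's quoted figures.
import math

TILE = 512          # 512x512 tile (from the page)
PJ_PER_MAC = 0.3    # energy per MAC including D/A + A/D (from the page)

def tiles_needed(rows, cols, tile=TILE):
    """Tiles required to hold a rows x cols weight matrix."""
    return math.ceil(rows / tile) * math.ceil(cols / tile)

def energy_uj(num_macs, pj_per_mac=PJ_PER_MAC):
    """MAC energy for one pass, converted from pJ to microjoules."""
    return num_macs * pj_per_mac / 1e6

# Hypothetical 4096x12288 layer, and a 50M-weight model (one MAC per weight):
print(tiles_needed(4096, 12288))   # 192 tiles for this one layer
print(energy_uj(50_000_000))       # 15.0 uJ per pass for the MACs alone
```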
