AI Chip

  As an epoch-making technology, deep learning has been widely and successfully applied in computer vision, speech recognition, and natural language processing, and related products have sprung up like mushrooms after rain. In response to the huge data and computation demands of deep learning networks, a variety of AI chips for accelerating these networks have emerged. However, existing AI chips built on the von Neumann architecture still suffer from excessive power consumption caused by the memory wall. This power consumption makes it difficult to deploy AI chips with existing digital architectures on edge devices, especially battery-powered IoT terminal devices.

  In the mainstream von Neumann architecture, the computing unit and the memory unit are completely separate. The computing unit reads data from memory according to instructions, performs the calculation, and then stores the result back into memory. Data must be moved frequently between the computing unit and the memory unit, which incurs heavy power consumption and very low computing efficiency. The integrated storage-computing architecture instead merges the computing unit and the memory unit into one, performing calculations where the data is stored and thereby greatly reducing the time and energy spent on data access during computation.
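The data-movement argument above can be sketched with a back-of-the-envelope model. The per-operation energy figures below are assumed, order-of-magnitude placeholders for illustration only, not measurements of any real chip; the point is that for an M×N matrix-vector multiply, a von Neumann design moves every weight across the memory bus, while an in-memory design moves only the inputs and outputs.

```python
# Illustrative energy model: why data movement dominates in a
# von Neumann architecture. All energy constants are ASSUMED,
# arbitrary-unit placeholders, not real measurements.

E_MAC = 1.0    # assumed energy of one multiply-accumulate
E_MOVE = 100.0  # assumed energy of moving one operand across the memory bus


def von_neumann_energy(m, n):
    """M x N matrix-vector multiply: every weight, input, and output
    crosses the bus between the memory unit and the computing unit."""
    macs = m * n
    moves = m * n + n + m  # weight reads + input reads + output writes
    return macs * E_MAC + moves * E_MOVE


def in_memory_energy(m, n):
    """Same workload, but the weights stay resident in the storage array;
    only the inputs and outputs cross the array boundary."""
    macs = m * n
    moves = n + m
    return macs * E_MAC + moves * E_MOVE


ratio = von_neumann_energy(1000, 1000) / in_memory_energy(1000, 1000)
print(f"in-memory advantage: {ratio:.1f}x")
```

Under these assumed constants the in-memory design comes out tens of times more energy-efficient for a 1000×1000 layer; the real advantage depends on the actual per-access and per-MAC energies of the process and memory technology.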

  The AI chip based on the NOR Flash integrated storage-computing architecture completes matrix calculations in a highly parallel fashion through analog computation in the Flash array. Specifically, the weights are mapped onto the Flash array, the inputs are converted into voltages and applied to the array, and the output currents collected from the array are the calculation results. The NOR Flash-based integrated storage-computing architecture achieves two things. First, each Flash cell is both a storage unit and a computing unit, eliminating the memory movement of neural network weights, greatly reducing power consumption and improving energy efficiency. Second, each Flash cell acts as a multiplier; when performing matrix operations, tens of thousands of multiply-accumulate operations run in parallel, greatly improving throughput.
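The analog computation described above can be sketched numerically. Assuming ideal devices, each Flash cell stores a weight as a conductance G[i][j]; inputs arrive as voltages V[j]; and by Ohm's law plus Kirchhoff's current law, the current collected on output line i is I[i] = Σⱼ G[i][j]·V[j], i.e. one row of a matrix-vector product. This is a minimal idealized simulation, not the chip's actual circuit behavior, which would include device noise, nonlinearity, and ADC quantization.

```python
import numpy as np


def flash_array_mac(G, V):
    """Idealized analog multiply-accumulate over a whole Flash array.

    G : 2-D array of conductances (the stored weights)
    V : 1-D array of input voltages
    Returns the summed current on each output line, which is exactly
    the matrix-vector product G @ V in this ideal-device model.
    """
    G = np.asarray(G, dtype=float)
    V = np.asarray(V, dtype=float)
    return G @ V  # every cell multiplies in parallel; lines sum currents


# Tiny 2x2 example: two output lines, two input voltages.
G = [[0.2, 0.5],
     [0.1, 0.3]]
V = [1.0, 2.0]
print(flash_array_mac(G, V))  # currents: [1.2, 0.7]
```

Note that the whole array computes in a single analog step regardless of its size, which is the source of the parallelism claimed above: a larger array means more multiplications per step, not more steps.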

  The company’s latest NOR Flash-based ultra-low-power, high-performance AI chip for edge computing adopts the latest integrated storage-computing architecture. Free of the limitations of the traditional von Neumann structure, it can complete large-scale parallel multiply-accumulate calculations at ultra-low power. Compared with traditional von Neumann-based chips such as CPUs, DSPs, and GPUs, this chip maintains sufficient computing power while reducing power consumption a hundredfold, bringing a hundredfold improvement in computing performance per unit cost. With its extremely low power consumption, strong computing power, low price, and very small area, this AI chip opens up vast possibilities for bringing large-scale deep learning applications from the cloud to the edge. Edge devices, especially battery-powered IoT devices, no longer need to transmit data to the cloud: they can complete calculations locally, perform real-time AI inference at extremely low power, and keep their data secure. Because the integrated storage-computing architecture saves a large number of storage and computing units and does not require an advanced semiconductor process, the product cost is very low. Hengshuo's goal is to let everyone enjoy extremely high computing power at the price of a Flash chip. At present, the company's first version of the AI chip has been successfully taped out, and the chip runs a live demonstration of a deep learning face-recognition algorithm.

  This AI chip with an integrated storage-computing architecture focuses on edge computing scenarios that demand low latency, low power consumption, and high computing power, especially battery-powered IoT terminal devices such as smartphones, wearable devices, smart-home products, drones, smart cameras, and hearing aids. In the coming era of the Internet of Everything, the chip will transform traditional applications and enable entirely new products.

  Limited by single-chip computing power, current wearable devices cannot monitor human health through real-time ECG analysis. Today, ECG data must be transmitted to the cloud, analyzed there, and sent back to the device. The resulting high latency and high power consumption seriously reduce battery life and user experience, and the approach fails entirely where there is no network or only an unreliable one. In the future, the microcontroller in a wearable device need only be paired with a NOR Flash ultra-low-power AI chip to complete inference for dozens of heart conditions locally and in real time, with no network dependence and no delay. It can operate in any network environment, monitor health continuously, and greatly improve the battery life and user experience of wearable devices.

  Limited by computing power and power consumption, current voice-recognition terminal devices can only transmit voice data to the cloud for recognition and then receive the results. This approach has high latency and high power consumption, cannot perform real-time voice processing, and breaks down in complex network environments. In the future, voice-recognition devices equipped with the Hengshuo AI chip will be able to infer and recognize voice signals locally in real time without relying on the Internet. They can operate under any network conditions and keep data secure because voice information is never uploaded, improving the experience and battery life of current speech-recognition equipment.

  Most current smartphones use NPU coprocessors to accelerate deep learning algorithms. These NPU coprocessors use the traditional von Neumann architecture and are embedded in the CPU as IP cores, which greatly increases chip area and power consumption and has held back smartphone development. In the future, manufacturers need only attach Hengshuo's ultra-low-power AI chip, by mounting it or through IP integration, to enjoy strong computing power at extremely low power, greatly improving battery life and expanding deep learning application scenarios. This opens unlimited possibilities for deploying complex deep learning applications on mobile terminals, such as face recognition, image recognition, and language translation.