A short-term memory AI model shows that during the silent period of memory, the brain can use the short-term plasticity of synaptic connections between neurons to store information.

CGRA computing energy efficiency can reach 1000 times that of CPU computing architectures, 100-1000 times that of GPU computing architectures, and more than 100 times that of FPGA computing architectures. Among FPGA-based frameworks, fpgaConvNet, ALAMO, and Snowflake are mainly concerned with the feature-extractor part of CNNs, while DeepBurning and FP-DNN also support recurrent neural networks (RNNs) and long short-term memory (LSTM) networks.

In a paper in Physical Review X, MIT researchers describe a new photonic accelerator that uses optical components and optical signal processing technology to reduce chip size, which would allow the chip to scale to neural networks several orders of magnitude larger than electronic chips can support.

By taking hardware performance and power consumption as indicators in the training phase, hardware-adjustable parameters, model weights, and topology are jointly modified in the optimization process, co-optimizing application-level accuracy with the required inference latency and power consumption. Artificial intelligence with deep learning architectures is still in its infancy, but it has already brought great benefits to mankind.
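The joint optimization described above can be sketched as a scalarized multi-objective search: accuracy is traded off against measured inference latency and power when scoring each candidate configuration. This is a minimal illustration, not the talk's actual method; the candidate names, profiled numbers, and penalty weights are all hypothetical.

```python
# Minimal sketch of hardware-aware joint optimization.
# All configuration names, profiled numbers, and weights are
# hypothetical, chosen only to illustrate the scoring idea.

def joint_objective(accuracy, latency_ms, power_w,
                    latency_weight=0.01, power_weight=0.05):
    """Scalarize application-level accuracy against inference
    latency and power so a single score can drive the search."""
    return accuracy - latency_weight * latency_ms - power_weight * power_w

# Candidate (hardware parameters, weights, topology) configurations,
# each already profiled for accuracy, latency, and power.
candidates = [
    {"name": "wide-net",   "accuracy": 0.95, "latency_ms": 12.0, "power_w": 4.0},
    {"name": "narrow-net", "accuracy": 0.92, "latency_ms": 5.0,  "power_w": 1.5},
    {"name": "tiny-net",   "accuracy": 0.85, "latency_ms": 2.0,  "power_w": 0.8},
]

best = max(candidates,
           key=lambda c: joint_objective(c["accuracy"],
                                         c["latency_ms"],
                                         c["power_w"]))
print(best["name"])  # the accuracy/latency/power trade-off winner
```

In a real co-design loop the profiled latency and power would come from the target CGRA, FPGA, or photonic hardware rather than fixed numbers, and the search would also mutate the model topology between evaluations.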
Tseng, Kuo-Kun (Associate Professor), "State of the Art of Deep Learning Technology and its Next Generation Architecture" (2019). Invited Talks. 29.