Portfolio

Patents

This invention introduces a computing device that achieves 3-4 times higher power efficiency during convolution and max-pooling than a conventional design. The computing device is configured to perform at least: (1) accepting as inputs an input feature map having one or more first elements and one or more filters having one or more second elements; (2) computing at least a first prediction value for a first stride; (3) computing at least a second prediction value for at least a second stride, where the first and second prediction values define a group of prediction values associated with the stride tensor; (4) determining the greatest prediction value, which is associated with one selected stride of the stride tensor; and (5) executing a convolution operation only for the selected stride of the stride tensor. Please view full patent (International Publication Number WO 2023/249762 A1) for details.
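Below is a minimal sketch, in NumPy, of the prediction-and-select idea summarized above. The cheap sign-based predictor, the function names, and the 2x2 pooling layout are illustrative assumptions, not the patented circuit or algorithm.

```python
# A minimal sketch of prediction-based stride selection for convolution + max-pooling.
# The predictor and all names here are illustrative assumptions, not the patented method.
import numpy as np

def predict_value(window, filt):
    # Hypothetical cheap predictor: correlate only the signs of the operands
    # instead of performing the full multiply-accumulate.
    return float(np.sum(np.sign(window) * np.sign(filt)))

def conv_maxpool_with_prediction(fmap, filt, pool=2):
    """Convolve and max-pool, but run the full convolution only for the stride
    whose prediction value is the greatest inside each pooling group."""
    k = filt.shape[0]
    out_h = fmap.shape[0] - k + 1
    out_w = fmap.shape[1] - k + 1
    pooled = np.zeros((out_h // pool, out_w // pool))
    for pi in range(0, out_h - out_h % pool, pool):
        for pj in range(0, out_w - out_w % pool, pool):
            # Steps (2)-(3): gather the strides of this pooling group and predict each one.
            strides = [(pi + di, pj + dj) for di in range(pool) for dj in range(pool)]
            preds = [predict_value(fmap[i:i+k, j:j+k], filt) for i, j in strides]
            # Step (4): pick the stride with the greatest prediction value.
            best_i, best_j = strides[int(np.argmax(preds))]
            # Step (5): execute the exact convolution only for that selected stride.
            pooled[pi // pool, pj // pool] = np.sum(fmap[best_i:best_i+k, best_j:best_j+k] * filt)
    return pooled
```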

This cryptographic method involves parsing a message into blocks and dynamically generating target values associated with distinct keys. Message blocks are encrypted with XOR operations using these target values, accessed via their respective keys. The resulting ciphertext is then transmitted to a second chiplet via a processing device. Please view full patent (US-20230275742-A1) for details.
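Below is a minimal Python sketch of the block-wise XOR flow described above. The 16-byte block size, the per-block key choice, and the random target-value generation are placeholders assumed for illustration; the patent's actual key schedule and target-value derivation may differ.

```python
# A minimal sketch of block-wise XOR encryption with keyed target values.
import secrets

BLOCK = 16  # bytes per block (assumed)

def parse_blocks(message: bytes):
    # Split the message into fixed-size blocks, zero-padding the last one.
    padded = message + b"\x00" * (-len(message) % BLOCK)
    return [padded[i:i + BLOCK] for i in range(0, len(padded), BLOCK)]

def encrypt(message: bytes):
    blocks = parse_blocks(message)
    targets = {}  # key -> dynamically generated target value
    cipher = b""
    for idx, block in enumerate(blocks):
        key = idx                                  # distinct key per block (assumption)
        targets[key] = secrets.token_bytes(BLOCK)  # dynamically generated target value
        # XOR the block with the target value accessed via its key.
        cipher += bytes(b ^ t for b, t in zip(block, targets[key]))
    return cipher, targets  # the ciphertext is what gets sent to the second chiplet

ciphertext, key_table = encrypt(b"hello chiplet")
```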

This cryptographic method involves receiving a message and encrypting it using XOR operations. A tag is computed on a concatenated set of data, and the tag is encrypted using XOR operations. The resulting tag ciphertext is appended to the message ciphertext, forming the final ciphertext, which is then transmitted to a second chiplet. Please view full patent (US-20230275761-A1) for details.
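Below is a minimal Python sketch of the encrypt-and-tag flow described above. The tag function (SHA-256 here) and the one-time-pad keystreams are stand-ins assumed for illustration; the patent's actual primitives are not specified here.

```python
# A minimal sketch of XOR encryption with an appended, XOR-encrypted tag.
import hashlib
import secrets

def xor_bytes(data: bytes, pad: bytes) -> bytes:
    return bytes(d ^ p for d, p in zip(data, pad))

def encrypt_with_tag(message: bytes, associated: bytes = b""):
    # Encrypt the message with XOR against an assumed keystream.
    msg_pad = secrets.token_bytes(len(message))
    msg_ct = xor_bytes(message, msg_pad)
    # Compute a tag over the concatenated data (associated data || message ciphertext).
    tag = hashlib.sha256(associated + msg_ct).digest()
    # Encrypt the tag with XOR as well, then append it to the message ciphertext.
    tag_pad = secrets.token_bytes(len(tag))
    tag_ct = xor_bytes(tag, tag_pad)
    final_ct = msg_ct + tag_ct  # the final ciphertext transmitted to the second chiplet
    return final_ct, (msg_pad, tag_pad)
```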

The invention involves a lightweight node in a decentralized network storing a blockchain with successive blocks. Each block has a header containing a data Merkle root derived from a data Merkle tree whose leaf nodes are hashes. The data Merkle root, along with a public key and an intermediate hash, enables partial verification of the public key's validity in the decentralized network. Please view full patent (US-20220417008-A1) for details.
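Below is a minimal Python sketch of the partial verification this enables: the lightweight node hashes the public key, folds in the supplied intermediate hashes, and checks the result against the data Merkle root stored in the block header. The hash choice and proof layout are assumptions.

```python
# A minimal sketch of Merkle-proof verification of a public key against a header's root.
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_public_key(public_key: bytes, proof: list, data_merkle_root: bytes) -> bool:
    """proof is an assumed list of (sibling_hash, sibling_is_left) pairs from leaf to root."""
    node = sha256(public_key)               # leaf node is the hash of the public key
    for sibling, sibling_is_left in proof:  # intermediate hashes supplied with the proof
        if sibling_is_left:
            node = sha256(sibling + node)
        else:
            node = sha256(node + sibling)
    return node == data_merkle_root         # does it reproduce the header's Merkle root?
```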

In this invention, an unenrolled lightweight node operates within a decentralized network alongside a trusted node and multiple peers. Utilizing a lightweight blockchain consensus algorithm, the node stores a token with a signature, including a first identifier signed by the trusted node. The node broadcasts an enrollment request to peers, providing authentication with a second identifier and a corresponding signature, facilitating secure blockchain enrollment. Please view full patent (US-20220417030-A1) for details.
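Below is a minimal Python sketch of what such an enrollment broadcast might look like, assuming the third-party 'cryptography' package for Ed25519 signatures; the message fields, helper names, and broadcast mechanism are illustrative placeholders rather than the patented protocol.

```python
# A minimal sketch of an enrollment request carrying a trusted-node-signed token
# plus the node's own identifier and signature. Field names are assumptions.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def build_enrollment_request(node_key: Ed25519PrivateKey,
                             first_id: bytes, trusted_sig: bytes,
                             second_id: bytes) -> dict:
    # The token holds the first identifier signed by the trusted node; the request
    # authenticates the node with a second identifier and its own signature.
    return {
        "token": {"first_id": first_id, "trusted_signature": trusted_sig},
        "second_id": second_id,
        "signature": node_key.sign(second_id),
    }

# request = build_enrollment_request(Ed25519PrivateKey.generate(), b"id-1", b"sig", b"id-2")
# broadcast(request)  # broadcast() to peers is assumed, not shown
```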

Publications

This paper presents an in-depth study of the error resiliency of Neural Networks (NNs). Our investigation yielded two important findings. First, Binary Neural Networks (BNNs) are more error-tolerant than 32-bit NNs. Second, in BNNs the network accuracy is more sensitive to errors in Batch Normalization Parameters (BNPs) than to errors in the binary weights. A detailed discussion is presented in the paper. Based on these findings, we propose a split memory architecture for low-power BNNs, suitable for IoT devices. In the proposed split memory architecture, weights are stored in area-efficient 6T SRAM, and BNPs are stored in robust 12T SRAM. The proposed architecture, synthesized in UMC 28nm, is highly energy efficient, as Vmin (the minimum operating voltage) can be reduced to 0.36 V, 0.52 V, and 0.52 V for the MNIST, CIFAR10, and ImageNet datasets respectively, with an accuracy drop of less than 1%. View full paper.
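Below is a minimal PyTorch sketch of the kind of error-injection experiment behind the first two findings: flip random sign bits in the binary weights, or flip a bit in the 32-bit BNPs, and compare the accuracy impact. The bit-flip model and parameter names are illustrative assumptions, not the paper's exact fault model.

```python
# A minimal sketch of error injection into binary weights vs. batch-norm parameters.
import torch

def flip_binary_weights(weights: torch.Tensor, p: float) -> torch.Tensor:
    # Binary weights are +/-1, so a bit error simply flips the sign.
    mask = torch.rand_like(weights) < p
    return torch.where(mask, -weights, weights)

def perturb_bnp(bnp: torch.Tensor, p: float, bit: int = 20) -> torch.Tensor:
    # Flip one bit of the 32-bit float BNP with probability p; errors here tend
    # to hurt accuracy far more than sign flips in the binary weights.
    as_int = bnp.view(torch.int32)
    mask = (torch.rand_like(bnp) < p).to(torch.int32) << bit
    return (as_int ^ mask).view(torch.float32)
```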

Neural networks are both compute- and memory-intensive, and consume significant power during inference. Bit reduction of weights is one of the key techniques used to make them power- and area-efficient without degrading performance. In this paper, we show that inference accuracy changes insignificantly even when floating-point weights are represented using 10 bits (fewer for certain other neural networks) instead of 32 bits. We considered a set of 8 neural networks. Further, we propose a mathematical formula for finding the optimum number of bits required to represent the exponent of floating-point weights, below which the accuracy drops drastically. We also show that the required mantissa width is highly dependent on the number of layers of a neural network and provide a mathematical proof of this dependence. Our simulation results show that bit reduction gives better throughput, power efficiency, and area efficiency compared to models with full-precision weights. View full paper.
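Below is a minimal NumPy sketch of how reduced exponent and mantissa widths can be emulated on IEEE-754 float32 weights for such an accuracy study. The particular bit split passed in is an illustrative choice, not the formula proposed in the paper.

```python
# A minimal sketch of reduced-precision float emulation by truncating
# the exponent range and mantissa of IEEE-754 float32 weights.
import numpy as np

def truncate_float32(weights: np.ndarray, exp_bits: int, man_bits: int) -> np.ndarray:
    bits = weights.astype(np.float32).view(np.uint32)
    sign = bits & 0x80000000
    exponent = (bits >> 23) & 0xFF
    mantissa = bits & 0x007FFFFF
    # Clamp the unbiased exponent to the range representable with exp_bits.
    unbiased = exponent.astype(np.int32) - 127
    lim = (1 << (exp_bits - 1)) - 1
    exponent = (np.clip(unbiased, -lim, lim) + 127).astype(np.uint32)
    # Keep only the top man_bits of the 23-bit mantissa.
    mantissa = mantissa & (~((1 << (23 - man_bits)) - 1) & 0x007FFFFF)
    # Note: zeros and subnormals are not handled in this sketch.
    return (sign | (exponent << 23) | mantissa).view(np.float32)

# Example: roughly a 10-bit format (1 sign + 5 exponent + 4 mantissa bits).
# w10 = truncate_float32(weights, exp_bits=5, man_bits=4)
```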

This brief compares quantized floating-point representations in posit and fixed-posit formats for a wide variety of pre-trained deep neural networks (DNNs). We observe that the fixed-posit representation is far more suitable for DNNs, as it results in a faster and lower-power computation circuit. We show that accuracy remains within 0.3% and 0.57% of the top-1 accuracy for posit and fixed-posit quantization, respectively. We further show that the posit-based multiplier requires a higher power-delay product (PDP) and area, whereas fixed-posit reduces PDP and area by 71% and 36%, respectively, compared to (Devnath et al., 2020) for the same bit-width. View full paper.
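Below is a minimal PyTorch sketch of the kind of comparison harness implied above: it measures the top-1 accuracy drop of a pre-trained DNN after its weights are passed through a supplied quantizer (e.g. a posit or fixed-posit encoder provided elsewhere). The function names and the quantize_fn hook are assumptions for illustration.

```python
# A minimal sketch of a top-1 accuracy-drop harness for weight quantization.
import copy
import torch

@torch.no_grad()
def top1_accuracy(model, loader) -> float:
    correct = total = 0
    for images, labels in loader:
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total

def accuracy_drop(model, loader, quantize_fn) -> float:
    baseline = top1_accuracy(model, loader)
    quantized = copy.deepcopy(model)
    with torch.no_grad():
        for param in quantized.parameters():  # quantize every weight tensor
            param.copy_(quantize_fn(param))
    return baseline - top1_accuracy(quantized, loader)
```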

Traditionally, crop analysis and agricultural production predictions were based on statistical models. However, with the world's climate changing drastically, these statistical models have become unreliable, so it is prudent to turn to less ambiguous methods. In a traditional model, the user interacts primarily with mathematical computations and their results, which helps solve well-defined, structured problems. In a data-driven model, the user interacts primarily with the data, which helps solve mainly unstructured problems. This is where machine learning enters. In this work, we explore a new approach to reducing the input features, and thereby the processing power needed. We predict the agricultural output of rice production in an area by implementing a pixel-count-based classification machine learning model. Through this model, we estimate the approximate crop yield based on NDVI values analyzed for a particular season and area. View full paper.
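Below is a minimal NumPy sketch of the pixel-count idea: an NDVI raster for the chosen season and area is thresholded into classes, and the count of healthy-crop pixels drives a simple per-pixel yield estimate. The thresholds and the yield factor are illustrative assumptions, not the paper's calibration.

```python
# A minimal sketch of pixel-count-based classification of an NDVI raster
# and a simple area-based yield estimate.
import numpy as np

NDVI_CLASSES = {             # assumed thresholds on NDVI values in [-1, 1]
    "water_or_bare": (-1.0, 0.2),
    "sparse_crop":   (0.2, 0.5),
    "healthy_crop":  (0.5, 1.0),
}

def pixel_counts(ndvi: np.ndarray) -> dict:
    # Count how many pixels fall into each NDVI class.
    return {name: int(np.sum((ndvi >= lo) & (ndvi < hi + 1e-9)))
            for name, (lo, hi) in NDVI_CLASSES.items()}

def estimate_rice_yield(ndvi: np.ndarray, pixel_area_ha: float,
                        tonnes_per_ha: float = 4.0) -> float:
    # Approximate yield from the area covered by healthy-crop pixels.
    counts = pixel_counts(ndvi)
    return counts["healthy_crop"] * pixel_area_ha * tonnes_per_ha
```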

Projects