Fast Efficient And Robust Learning With Brain Inspired Hyperdimensional Computing

Fast, Efficient, and Robust Learning with Brain-Inspired Hyperdimensional Computing

With the emergence of the Internet of Things (IoT), devices will generate massive data streams demanding services that pose serious technical challenges due to limited device resources. Furthermore, IoT systems increasingly need to run complex and energy-intensive Machine Learning (ML) algorithms, but lack the resources to run many state-of-the-art ML models, instead sending their data to the cloud for computation. This results in weaker security, slower-moving data, and energy-intensive data centers. To achieve real-time learning in IoT systems, we need to redesign the algorithms themselves using strategies that more closely model the ultimate efficient learning machine: the human brain. This dissertation focuses on increasing the computing efficiency of machine learning on IoT devices through the application of Hyperdimensional Computing (HDC). HDC mimics several desirable properties of the human brain, including robustness to noise, robustness to hardware failures, and single-pass learning, where training happens in one shot without storing the training data points or using complex gradient-based algorithms. These features make HDC a promising solution for today's embedded devices, with their limited storage, battery, and resources and their potential for noise and variability. Research in the HDC field has targeted improving these key features of HDC and expanding to include even more. There are five main paths in HDC research: (1) algorithmic changes for faster and more energy-efficient learning; (2) novel architectures to accelerate HDC, usually targeting lower-power IoT devices; (3) extending HDC applications beyond classification; (4) exploiting the robustness of HDC for more efficient and faster inference; and (5) HDC theory and its connections to neuroscience and mathematics. This dissertation contributes to four of these research paths. Our contributions include: (1) We introduce the first adaptive bitwidth model for HDC.
In this work, we propose a new quantization method; during inference, we iterate through the bits along all dimensions, accumulating the Hamming distance. At each iteration, we check whether the current Hamming distance passes a similarity threshold; if it does, we terminate execution early to save energy and time. (2) We redesign the entire HDC pipeline with a locality-based encoding, quantized retraining, and online dimension reduction during inference, all accelerated by a novel FPGA design. Our locality-based encoding removes random memory accesses from HDC encoding and adds sparsity for greater efficiency. We also introduce a general method to quantize to any desired model bitwidth. Finally, we propose a method to find insignificant dimensions in the HDC model and remove them for more energy-efficient inference. (3) We extend HDC to support multi-label classification by creating a binary classification model for each label. Upon inference, our models determine independently whether each label applies. This differs from prior work, which took the power set of the labels to reduce the problem to single-label classification, a method with which HDC scales poorly. (4) Finally, we experimentally evaluate the robustness of HDC for the first time and create a new analog processing-in-memory (PIM) architecture with reduced-precision analog-to-digital converters (ADCs), exploiting that robustness. We test HDC robustness in a federated learning environment where edge devices wirelessly send encoded hypervectors to a central server. We evaluate the impact of wireless transmission errors on this data and show that HDC is 48× more robust than other classifiers. We then use this robustness to create a more efficient analog PIM circuit by reducing the bitwidth of the ADCs.
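The threshold-based early termination of contribution (1) can be sketched as follows. This is a minimal illustration, not the dissertation's implementation: the dimensionality, bitwidth, similarity threshold, and the random (untrained) class hypervectors are all hypothetical, and the per-bit-plane accumulation is a simplified stand-in for the proposed quantization scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

D, B, NUM_CLASSES = 1024, 4, 3  # dims, model bitwidth, classes (hypothetical)

# Hypothetical B-bit quantized class hypervectors, stored as bit-planes:
# planes[b] holds bit b (MSB first) of every class hypervector.
class_hvs = rng.integers(0, 2**B, size=(NUM_CLASSES, D))
planes = np.array([(class_hvs >> (B - 1 - b)) & 1 for b in range(B)],
                  dtype=np.uint8)

query = rng.integers(0, 2**B, size=D)
q_planes = np.array([(query >> (B - 1 - b)) & 1 for b in range(B)],
                    dtype=np.uint8)

def early_exit_classify(q_planes, planes, threshold=0.45):
    """Accumulate per-class Hamming distance one bit-plane at a time,
    stopping early once the best class is similar enough (distance below
    threshold * bits compared so far). Threshold value is illustrative."""
    n_planes, n_classes, dims = planes.shape
    dist = np.zeros(n_classes, dtype=np.int64)
    for b in range(n_planes):
        # Hamming distance contribution of bit-plane b for every class
        dist += np.sum(planes[b] != q_planes[b], axis=1)
        best = int(np.argmin(dist))
        bits_seen = (b + 1) * dims
        if dist[best] < threshold * bits_seen:
            return best, b + 1  # early exit after b+1 bit-planes
    return int(np.argmin(dist)), n_planes  # no early exit: used all planes

label, planes_used = early_exit_classify(q_planes, planes)
```

With random data the distances cluster near 50% of the bits compared, so the loop usually runs to completion; with a trained model whose correct class is clearly closest, the threshold fires after the first few bit-planes, which is where the energy and latency savings come from.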
Green Machine Learning Protocols for Future Communication Networks

Machine learning has shown tremendous benefits in solving complex network problems and providing situation and parameter prediction. However, heavy resources are required to process and analyze the data, whether offline or at the edge, and timely responses also demand heavy transmission resources. What is needed are lightweight machine learning protocols that can process and analyze data at run time and provide a timely, efficient response. These algorithms have grown in computation and memory requirements due to the availability of large data sets, and the models also require high levels of resources such as computing, memory, communication, and storage. The focus so far has been on producing highly accurate models for these communication networks without considering the energy consumption of the machine learning algorithms. For future scalable and sustainable network applications, efforts are required toward designing new machine learning protocols, and modifying existing ones, that consume less energy, i.e., green machine learning protocols. In other words, novel, lightweight green machine learning algorithms/protocols are required to reduce energy consumption, which can also reduce the carbon footprint. To realize green machine learning protocols, this book presents different aspects of green machine learning for future communication networks. It mainly highlights green machine learning protocols for cellular communication; federated-learning-based models and protocols for Beyond Fifth Generation networks; approaches for cloud-based communications; and the Internet of Things. The book also highlights the design considerations and challenges of green machine learning protocols for different future applications.
Machine Learning and Knowledge Discovery in Databases: Research Track

The multi-volume set LNAI 14169 to 14175 constitutes the refereed proceedings of the European Conference on Machine Learning and Knowledge Discovery in Databases, ECML PKDD 2023, which took place in Turin, Italy, in September 2023. The 196 papers of the Research Track were selected from 829 submissions, and the 58 papers of the Applied Data Science Track were selected from 239 submissions. The volumes are organized in topical sections as follows: Part I: Active Learning; Adversarial Machine Learning; Anomaly Detection; Applications; Bayesian Methods; Causality; Clustering. Part II: Computer Vision; Deep Learning; Fairness; Federated Learning; Few-shot Learning; Generative Models; Graph Contrastive Learning. Part III: Graph Neural Networks; Graphs; Interpretability; Knowledge Graphs; Large-scale Learning. Part IV: Natural Language Processing; Neuro/Symbolic Learning; Optimization; Recommender Systems; Reinforcement Learning; Representation Learning. Part V: Robustness; Time Series; Transfer and Multitask Learning. Part VI: Applied Machine Learning; Computational Social Sciences; Finance; Hardware and Systems; Healthcare & Bioinformatics; Human-Computer Interaction; Recommendation and Information Retrieval. Part VII: Sustainability, Climate, and Environment; Transportation & Urban Planning; Demo.