Thesis
LARGE SCALE LEARNING ON TINY DEVICES: OPTIMIZATION TECHNIQUES FOR ADAPTIVE AUTONOMOUS DRIVING SYSTEMS
Washington State University
Master of Science (MS), Washington State University
05/2025
DOI: https://doi.org/10.7273/000007490
Abstract
With recent advances in computing and sensing technologies, autonomous driving has gained increasing attention and emerged as a promising platform for the next generation of intelligent transportation systems. A key requirement for autonomous driving systems is the ability to use artificial intelligence and machine learning techniques to make reliable, real-time decisions on edge devices. However, deploying robust machine learning models on resource-constrained hardware in real time remains challenging. In this work, we explore two different techniques for efficient AI inference on such devices and analyze their effectiveness through two distinct projects.
First, we conducted a comprehensive study on quantization and pruning techniques to enable efficient and accurate decision-making for autonomous driving applications, such as traffic sign detection. We implemented channel and fine-grained pruning on a custom-trained YOLOv8 model using the German Traffic Sign Recognition Benchmark (GTSRB) dataset, which contains more than 50,000 traffic sign images across 43 classes. We further optimized the detection layers and applied INT8 quantization using NVIDIA TensorRT along with k-means quantization. Together, these methods improved computational efficiency by 4.4x and reduced the model size by 90%, at the cost of only a 3% drop in accuracy. Experimental evaluations were performed on two devices: the Jetson Nano and the Jetson Orin Nano.
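To illustrate the k-means quantization mentioned above, the following is a minimal NumPy sketch of k-means weight sharing: weights are clustered into a small codebook and each weight is replaced by its nearest centroid, so a layer can be stored as the codebook plus compact per-weight indices. This is an illustrative sketch only, not the thesis's actual implementation; the function name, cluster count, and initialization are assumptions.

```python
import numpy as np

def kmeans_quantize(weights, n_clusters=16, n_iters=20, seed=0):
    """Cluster weight values into n_clusters centroids and replace each
    weight with its nearest centroid (weight-sharing quantization)."""
    flat = weights.ravel()
    rng = np.random.default_rng(seed)
    # Initialize centroids uniformly over the observed weight range.
    centroids = rng.uniform(flat.min(), flat.max(), n_clusters)
    for _ in range(n_iters):
        # Assign each weight to its nearest centroid.
        idx = np.abs(flat[:, None] - centroids[None, :]).argmin(axis=1)
        # Move each centroid to the mean of its assigned weights.
        for k in range(n_clusters):
            if np.any(idx == k):
                centroids[k] = flat[idx == k].mean()
    quantized = centroids[idx].reshape(weights.shape)
    return quantized, centroids, idx

# Example: quantize a random 64x64 weight matrix to a 16-entry codebook.
w = np.random.default_rng(1).normal(size=(64, 64)).astype(np.float32)
wq, codebook, codes = kmeans_quantize(w, n_clusters=16)
```

With 4-bit indices (16 clusters) the per-weight storage drops from 32 bits to 4 bits plus a shared codebook, which is the kind of size reduction that, combined with pruning and INT8 inference, yields the savings reported above.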
Second, we investigated the integration of large language models (LLMs) into autonomous driving systems, addressing the associated challenges and potential for optimization. Running LLMs directly on resource-constrained devices is impractical without quantization. To address this, we applied AWQ and Q4_0 quantization techniques to the Mistral 7B model. We used the Highway-env environment from the Farama Foundation, built on the Gymnasium API (the maintained successor to OpenAI Gym), which simulates the ego vehicle and surrounding traffic. The extracted simulation data was converted into structured JSON format and provided to the Mistral model via an agent prompt for classification tasks. This experiment was conducted on a Lambda system and the Jetson Orin Nano. While the model ran efficiently within the hardware limitations, the results were less accurate due to difficulties in handling complex, large-scale JSON data.
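The conversion of simulation data into a JSON agent prompt might look like the sketch below, which turns a per-vehicle observation matrix (Highway-env's Kinematics-style observations expose rows of vehicle features) into a structured prompt for action classification. The feature layout, action labels, and prompt wording here are assumptions for illustration, not the thesis's exact pipeline.

```python
import json

# Hypothetical feature layout for one observation row; the actual
# Highway-env configuration used in the thesis may differ.
FEATURES = ["presence", "x", "y", "vx", "vy"]

def observation_to_prompt(obs_rows):
    """Convert a per-vehicle observation matrix (first row = ego vehicle)
    into an agent prompt containing the traffic state as structured JSON."""
    vehicles = [dict(zip(FEATURES, map(float, row))) for row in obs_rows]
    state = {"ego": vehicles[0], "others": vehicles[1:]}
    return (
        "You are a driving agent. Given the traffic state below, "
        "answer with exactly one action: LANE_LEFT, IDLE, LANE_RIGHT, "
        "FASTER, or SLOWER.\n" + json.dumps(state, indent=2)
    )

# Example: ego vehicle plus one slower vehicle in the adjacent lane.
obs = [[1, 0.0, 0.0, 25.0, 0.0], [1, 30.0, 4.0, 22.0, 0.0]]
prompt = observation_to_prompt(obs)
```

Constraining the model to a fixed action vocabulary keeps the output parseable, but as noted above, accuracy degrades as the JSON state grows large and deeply nested.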
Details
- Title
- LARGE SCALE LEARNING ON TINY DEVICES
- Creators
- Ishparsh Uprety
- Contributors
- Xinghui Zhao (Chair), Jia Li (Committee Member), Xuechen Zhang (Committee Member)
- Awarding Institution
- Washington State University
- Academic Unit
- School of Electrical Engineering and Computer Science
- Theses and Dissertations
- Master of Science (MS), Washington State University
- Publisher
- Washington State University
- Number of pages
- 65
- Identifiers
- 99901220323101842
- Language
- English
- Resource Type
- Thesis