CONVOLUTIONAL NEURAL NETWORK MODELS FOR TRAFFIC LIGHT DETECTION AND RECOGNITION
Abstract and keywords
Abstract (English):
The article addresses the construction and study of convolutional neural network models for the detection and recognition of traffic light signals. A variety of methods and algorithms are applied to these problems, including color filters, adaptive template matching, contour analysis, machine learning methods, and computer vision algorithms, and many technology companies build systems for recognizing traffic lights and other road infrastructure elements. This paper presents a comparative analysis of two convolutional neural network architectures for traffic light recognition, YOLOv8n and Faster R-CNN, chosen for their proven effectiveness in object detection and recognition tasks. Solving the stated problems required the following stages: collecting and preparing the initial dataset, selecting tools for building the models, choosing the architectures and training the convolutional neural networks, and conducting experimental studies with the resulting models. The LISA dataset was selected as the source data; it contains a large number of diverse training examples in the form of 1280 × 960 images. The data was divided into training, validation (to control overfitting), and test (to assess the adequacy of the models) samples in the proportions 80%, 10%, and 10%, with night and daytime images distributed evenly across all three. In addition, the source data was augmented with geometric and pixel-level transformations: rotation by a small angle, cropping, scaling, and changes in image brightness and saturation. The final dataset comprised 43,107 images: 34,485 for training, 4,311 for validation, and 4,311 for testing. Python was chosen as the programming language. During training, the loss function value reached 0.11 for the Faster R-CNN network and 0.25 for YOLOv8n.
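The 80/10/10 split described above can be sketched as follows. This is a minimal illustration using a plain random shuffle; the function name `split_dataset` and the fixed seed are assumptions, and the paper additionally balanced night and daytime images across the samples, which this sketch does not attempt.

```python
import random

def split_dataset(items, train_frac=0.80, val_frac=0.10, seed=42):
    """Shuffle and split a dataset into train/val/test (80/10/10)."""
    items = list(items)
    random.Random(seed).shuffle(items)  # reproducible shuffle
    n = len(items)
    n_train = int(n * train_frac)       # 80% for training
    n_val = round(n * val_frac)         # 10% for validation
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]      # remainder for testing
    return train, val, test

# With the paper's 43,107 images this yields 34,485 / 4,311 / 4,311.
train, val, test = split_dataset(range(43107))
print(len(train), len(val), len(test))  # 34485 4311 4311
```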
Both models showed acceptable results in recognizing both near and distant objects, but YOLOv8n proved the more practical choice: although slightly inferior in accuracy, it recognizes traffic lights approximately 5.5 times faster than Faster R-CNN. This makes it suitable for real-time use, for example in autonomous vehicle control systems.
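A speed ratio like the roughly 5.5× figure above is typically obtained by averaging per-image inference latency over a batch of test images. The harness below is a hypothetical sketch, not the authors' benchmarking code: `avg_latency_ms` and the placeholder detectors (which merely sleep) are assumptions standing in for real YOLOv8n and Faster R-CNN inference calls.

```python
import time

def avg_latency_ms(detect, images, warmup=3):
    """Average per-image inference latency in milliseconds."""
    for img in images[:warmup]:
        detect(img)                  # warm-up runs are not timed
    start = time.perf_counter()
    for img in images:
        detect(img)
    elapsed = time.perf_counter() - start
    return elapsed * 1000.0 / len(images)

# Placeholder "detectors" standing in for the two models:
fast_detect = lambda img: time.sleep(0.001)   # ~1 ms per image
slow_detect = lambda img: time.sleep(0.0055)  # ~5.5 ms per image

images = list(range(20))
ratio = avg_latency_ms(slow_detect, images) / avg_latency_ms(fast_detect, images)
print(f"speedup: {ratio:.1f}x")
```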

Keywords:
TRAFFIC LIGHT SIGNALS, CONVOLUTIONAL NEURAL NETWORKS, OBJECT RECOGNITION, FASTER R-CNN MODEL, YOLOV8N MODEL