Explore the Ultralytics YOLOv8 model architecture for computer vision, covering detection, segmentation, classification, and more.
What is YOLOv8?
Ultralytics, the company that produced YOLOv5, also developed YOLOv8, a computer vision model architecture. With Roboflow Inference, an open source Python tool for running vision models, you can use Ultralytics YOLOv8 models on a variety of devices, including NVIDIA Jetson, NVIDIA GPUs, and macOS computers.
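To give a concrete sense of that workflow, here is a minimal sketch of running a pre-trained YOLOv8 checkpoint with Roboflow Inference. It assumes the `inference` package is installed with `pip install inference`; the model ID `yolov8n-640` follows the pattern used in the Inference quickstart, and the image path is a placeholder.

```python
# A hedged sketch: run a pre-trained YOLOv8 nano model with Roboflow Inference.
# Assumes `pip install inference`; model IDs and response fields may vary
# slightly between package versions.
from inference import get_model

model = get_model(model_id="yolov8n-640")   # pre-trained YOLOv8 nano, 640px input
results = model.infer("path/to/image.jpg")  # run detection on a local image
print(results)                              # inspect the returned predictions
```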
The most recent cutting-edge YOLO model, YOLOv8, is suitable for instance segmentation, image classification, and object detection applications. It was developed by Ultralytics, who also created the groundbreaking and industry-defining YOLOv5 model. YOLOv8 includes many architectural and developer-experience improvements over YOLOv5.
As of this writing, Ultralytics is actively building new features and addressing community feedback for YOLOv8. Ultralytics models receive long-term support after release, as the company works with the community to make them as good as possible.
How YOLO Grew Into YOLOv8
In the field of computer vision, the YOLO (You Only Look Once) family of models has gained widespread recognition. YOLO's popularity comes from its high accuracy at a small model size. Because YOLO models can be trained on a single GPU, they are accessible to a wide range of developers and inexpensive for machine learning practitioners to deploy on edge hardware or in the cloud.
Since its initial introduction by Joseph Redmon in 2015, YOLO has been fostered by the computer vision community. For its early versions (1-4), YOLO was maintained in C code in Darknet, Redmon's custom deep learning framework.
Glenn Jocher of Ultralytics, the creator of YOLOv8, shadowed the YOLOv3 repository with an implementation in PyTorch, a deep learning framework from Facebook. As training in the shadow repository improved, Ultralytics eventually released its own model: YOLOv5.
YOLOv5’s flexible, Pythonic structure quickly made it the world’s SOTA repository. This structure allowed the community to rapidly develop new modelling improvements and share them across repositories that use similar PyTorch methods.
In addition to providing solid model foundations, the YOLOv5 maintainers have been dedicated to fostering a healthy software ecosystem around the model. They actively fix issues and push the repository’s capabilities forward as the community requests.
Scaled-YOLOv4, YOLOR, and YOLOv7 are some of the models that have forked off from the YOLOv5 PyTorch repository in the past two years. Other models, including YOLOX and YOLOv6, have emerged from their own PyTorch-based implementations. Each YOLO model along the way has introduced new SOTA techniques that continue to push accuracy and efficiency forward.
Ultralytics spent the last six months researching YOLOv8, the most recent SOTA version of YOLO. YOLOv8 launched on January 10, 2023.
Why Should You Use YOLOv8?
For your next computer vision project, you should think about utilising YOLOv8 for the following primary reasons:
- The accuracy rate of YOLOv8 is high, as indicated by Roboflow 100 and Microsoft COCO.
- From an intuitive CLI to a well-organized Python package, YOLOv8 offers a wealth of developer-convenience features.
- There is a sizable community around YOLO and a growing community around the YOLOv8 model, which means there are many people in computer vision circles who can help when you need direction.
On COCO, YOLOv8 achieves high accuracy. For instance, the medium model, YOLOv8m, scores a mAP of 50.2% when measured on COCO. When evaluated on Roboflow 100, a dataset that specifically measures model performance on a variety of task-specific domains, YOLOv8 outperformed YOLOv5 by a significant margin. The performance analysis later in the text goes into further detail on this.
Additionally, YOLOv8 has important developer-convenience features. YOLOv8 ships with a CLI that makes training a model simpler than with earlier models, where tasks were split across numerous Python files that you had to run yourself. It also includes a Python package that offers a smoother coding experience than previous models.
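As an illustration of both interfaces, here is a minimal sketch using the Python package, assuming `pip install ultralytics`; `coco128.yaml` refers to the small example dataset the package can fetch automatically, and the epoch count is arbitrary.

```python
# A minimal sketch of the YOLOv8 Python API (assumes `pip install ultralytics`).
from ultralytics import YOLO

# Load a pre-trained nano detection checkpoint (downloads on first use).
model = YOLO("yolov8n.pt")

# Fine-tune on the small COCO128 example dataset; epoch count is arbitrary here.
model.train(data="coco128.yaml", epochs=3, imgsz=640)

# Run inference on an example image and print the detected boxes.
results = model("https://ultralytics.com/images/bus.jpg")
print(results[0].boxes)
```

The equivalent training job can also be launched from the terminal with the CLI, e.g. `yolo detect train data=coco128.yaml model=yolov8n.pt epochs=3`.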
The YOLO community is another point in its favour when choosing a model. Many computer vision practitioners are familiar with YOLO and how it works, and there is a wealth of online material about using it in practice. Even though YOLOv8 is brand new as of the time this article was written, there are already plenty of helpful web resources.
Let’s examine the architecture in detail and see how YOLOv8 differs from earlier YOLO models.
YOLOv8 Model Architecture: A Deep Dive
Since a YOLOv8 paper has not been published yet, we do not have direct insight into the ablation studies or the research methodology used during its development. To begin documenting the new features in YOLOv8, we examined the repository and the information available about the model.
- If you want to look into the code yourself, check out the YOLOv8 repository and view this code differential to see how some of the research was conducted.
- After a brief overview of the significant modelling updates, we will examine the model’s evaluation, which speaks for itself.
- RangeKing, a GitHub user, created the following image, which provides a thorough visualization of the network’s architecture.

YOLOv8 Accuracy Improvements
The main driving force behind YOLOv8 research was empirical evaluation on the COCO benchmark. As each component of the network and training procedure was adjusted, new experiments were run to confirm the change's impact on COCO accuracy.
YOLOv8 COCO Accuracy
COCO (Common Objects in Context) is the industry-standard benchmark for evaluating object detection models. When comparing models on COCO, we consider the mAP value along with FPS as a measure of inference speed, and models should be compared at similar inference speeds.
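As a hedged sketch of how such a mAP figure can be reproduced, the `ultralytics` package exposes a validation routine; the snippet below assumes the package is installed and that the COCO validation set is available locally (the `coco.yaml` config will otherwise attempt to download it).

```python
# A sketch of measuring COCO mAP with the ultralytics validation routine.
# Assumes `pip install ultralytics` and access to the COCO val2017 data.
from ultralytics import YOLO

model = YOLO("yolov8m.pt")             # medium detection checkpoint
metrics = model.val(data="coco.yaml")  # evaluate on COCO val2017
print(metrics.box.map)                 # mAP averaged over IoU 0.5:0.95
```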
The following figure illustrates YOLOv8’s accuracy on COCO based on data gathered by the Ultralytics team and released in their YOLOv8 README:

YOLOv8 Datasets
Roboflow Universe offers pre-trained models that you can use right away, as well as datasets for YOLOv8 model training; a sketch of downloading one of these datasets follows the examples below.
Selected Datasets and Models for YOLOv8

Plane Detection
Count the number of aircraft captured by aerial imaging systems.
Warehouse Item Detection
Identify objects that you could encounter in a warehouse setting, such as pallets, forklifts, and small load carriers.
License Plate Detection
Recognise license plates in a picture.
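To train on one of these Universe datasets, a common pattern is to export it in YOLOv8 format with the `roboflow` Python package. The sketch below is hedged: the API key, workspace, project name, and version number are placeholders to replace with values from the dataset's Universe page.

```python
# A hedged sketch of downloading a Roboflow Universe dataset in YOLOv8 format.
# Assumes `pip install roboflow`; the API key, workspace, project, and version
# below are placeholders, not real identifiers.
from roboflow import Roboflow

rf = Roboflow(api_key="YOUR_API_KEY")
project = rf.workspace("your-workspace").project("your-project")
dataset = project.version(1).download("yolov8")  # writes images, labels, data.yaml

print(dataset.location)  # local folder you can point YOLOv8 training at
```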
YOLOv8 Model Sizes
For every task type, there are five YOLOv8 model sizes: nano, small, medium, large, and extra-large. Below is the performance of YOLOv8 when benchmarked on the COCO dataset for object detection.
Model | Size (px) | mAP (val 50-95) |
---|---|---|
YOLOv8n | 640 | 37.3 |
YOLOv8s | 640 | 44.9 |
YOLOv8m | 640 | 50.2 |
YOLOv8l | 640 | 52.9 |
YOLOv8x | 640 | 53.9 |
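As a quick, hedged illustration of the trade-off between these sizes, the snippet below loads each detection checkpoint and prints its layer and parameter counts (assumes `pip install ultralytics`; each weight file downloads on first use).

```python
# A sketch comparing the five YOLOv8 detection checkpoints by size.
# Assumes `pip install ultralytics`; checkpoints download on first use.
from ultralytics import YOLO

for name in ("yolov8n.pt", "yolov8s.pt", "yolov8m.pt", "yolov8l.pt", "yolov8x.pt"):
    model = YOLO(name)
    model.info()  # prints layers, parameter count, and GFLOPs for this checkpoint
```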