This repository demonstrates object detection using two versions of the YOLO model: YOLOv5 and YOLOv8 from Ultralytics. The scripts provided allow you to detect objects in images and videos using pre-trained models, and display or save the annotated outputs using OpenCV.
## Table of Contents

- Installation
- Usage
  - Detecting Objects in Images (YOLOv5)
  - Processing Videos (YOLOv5 & YOLOv8)
- Dependencies
- Model Options
  - YOLOv5 Models
  - YOLOv8 Models
- Credits
## Installation

- Clone the repository:

  ```bash
  git clone https://github.com/shahinur-alam/YOLO.git
  ```

- Install the required dependencies: make sure you have Python 3.x installed, then install the dependencies using `pip`:

  ```bash
  pip install -r requirements.txt
  ```

- Install the YOLOv5 and YOLOv8 dependencies:
  - For YOLOv5, the `torch.hub` feature is used to load the pre-trained model, so ensure PyTorch is installed:

    ```bash
    pip install torch torchvision torchaudio
    ```

  - For YOLOv8, the Ultralytics YOLO library is required:

    ```bash
    pip install ultralytics
    ```

- Install OpenCV and NumPy, which are required for image and video processing:

  ```bash
  pip install opencv-python numpy
  ```
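After installing, you can verify that the core packages are importable with a quick check (this snippet is just a sanity check, not part of the repository's scripts):

```python
# Verify that the core dependencies are installed and importable
import cv2
import torch
import ultralytics

print("torch:", torch.__version__)
print("opencv:", cv2.__version__)
print("ultralytics:", ultralytics.__version__)
```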
## Usage

### Detecting Objects in Images (YOLOv5)

The code includes a function to detect objects in images using the YOLOv5 model (a minimal sketch of such a function follows the steps below).
- Prepare the image: save the image you want to analyze in the root directory, or specify its path in the script.
- Uncomment and use the `detect_objects` function: in the code, the `detect_objects` function is commented out. Uncomment the relevant code and specify the path to the image you want to analyze.

  ```python
  # Example usage
  image_path = "path/to/your/image.jpg"
  detect_objects(image_path)
  ```

- Run the script: once you have uncommented the function and provided the image path, run:

  ```bash
  python yolov5_image_detection.py
  ```
- Output:
  - The detected objects are highlighted with bounding boxes.
  - The output image is saved as `output_image.jpg` in the root directory.
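For reference, here is a minimal sketch of what such a detection function might look like; the repository's actual implementation may differ in details such as confidence thresholds or drawing style:

```python
import cv2
import torch

# Load the pre-trained YOLOv5 medium model via torch.hub
model = torch.hub.load('ultralytics/yolov5', 'yolov5m', pretrained=True)

def detect_objects(image_path):
    # Run inference; YOLOv5 accepts an image file path directly
    results = model(image_path)
    # render() draws bounding boxes and labels, returning a list of RGB arrays
    annotated = results.render()[0]
    # OpenCV writes BGR, so convert before saving
    cv2.imwrite("output_image.jpg", cv2.cvtColor(annotated, cv2.COLOR_RGB2BGR))

if __name__ == "__main__":
    detect_objects("path/to/your/image.jpg")
```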
### Processing Videos (YOLOv5 & YOLOv8)

Both YOLOv5 and YOLOv8 models can be used to process video files. Below are the steps for each model, each followed by a minimal code sketch.

#### YOLOv5 Video Processing
- Prepare the video file: place the video you want to process in the project directory, or reference its path in the script.
- Set the video source: modify the `video_source` variable in the script to point to your video file, or use `0` for a live webcam feed.

  ```python
  video_source = "path/to/your/video.mp4"
  ```

- Run the script: run the following command to start processing the video:

  ```bash
  python yolov5_video_detection.py
  ```
- Output:
  - The script displays each frame of the video with detected objects highlighted by bounding boxes.
  - The processed video is saved as `output/output_video_v5.mp4`.
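For reference, a minimal sketch of the YOLOv5 video loop, assuming the same `torch.hub` model as in the image example (the actual script may differ in details such as codec or window handling):

```python
import os
import cv2
import torch

# Load the pre-trained YOLOv5 medium model via torch.hub
model = torch.hub.load('ultralytics/yolov5', 'yolov5m', pretrained=True)

video_source = "path/to/your/video.mp4"  # or 0 for a live webcam feed
cap = cv2.VideoCapture(video_source)

# Set up the output writer with the input's frame rate and dimensions
os.makedirs("output", exist_ok=True)
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
        int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
writer = cv2.VideoWriter("output/output_video_v5.mp4",
                         cv2.VideoWriter_fourcc(*"mp4v"), fps, size)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # YOLOv5 expects RGB input; OpenCV frames are BGR
    results = model(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    annotated = cv2.cvtColor(results.render()[0], cv2.COLOR_RGB2BGR)
    writer.write(annotated)
    cv2.imshow("YOLOv5", annotated)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
writer.release()
cv2.destroyAllWindows()
```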
#### YOLOv8 Video Processing

- Specify the video path: edit the `video_path` variable to point to your video file or webcam stream.

  ```python
  video_path = "path/to/your/video.mp4"
  ```

- Run the script: run the following command to process the video with YOLOv8:

  ```bash
  python yolov8_video_detection.py
  ```
- Output:
  - The script displays the video frames with bounding boxes around detected objects.
  - Press `q` to stop the video playback.
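A minimal sketch of the YOLOv8 loop using the Ultralytics API (the actual script may additionally save the processed video):

```python
import cv2
from ultralytics import YOLO

# Load a pre-trained YOLOv8 model (the weights download on first use)
model = YOLO("yolov8x.pt")

video_path = "path/to/your/video.mp4"
cap = cv2.VideoCapture(video_path)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Ultralytics accepts BGR numpy frames directly
    results = model(frame)
    # plot() draws boxes and labels, returning an annotated BGR array
    annotated = results[0].plot()
    cv2.imshow("YOLOv8", annotated)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```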
## Dependencies

- Python 3.x
- PyTorch (for YOLOv5)
- Ultralytics YOLO library (for YOLOv8)
- OpenCV (`opencv-python`)
- NumPy

To install all dependencies:

```bash
pip install torch torchvision torchaudio ultralytics opencv-python numpy
```
## Model Options

Both YOLOv5 and YOLOv8 come in multiple model sizes, each trading speed against accuracy.

### YOLOv5 Models
In the YOLOv5 script, the following line loads the `yolov5m.pt` model:

```python
model = torch.hub.load('ultralytics/yolov5', 'yolov5m', pretrained=True)
```

You can replace `'yolov5m'` with another model size:

- `'yolov5n'`: Nano (fastest, least accurate)
- `'yolov5s'`: Small
- `'yolov5m'`: Medium
- `'yolov5l'`: Large
- `'yolov5x'`: Extra-large (slowest, most accurate)
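For example, switching to the nano variant only changes the model name; no other code changes are needed:

```python
# Fastest, least accurate YOLOv5 variant
model = torch.hub.load('ultralytics/yolov5', 'yolov5n', pretrained=True)
```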
### YOLOv8 Models

For YOLOv8, the model is loaded with the following line:

```python
model = YOLO('yolov8x.pt')  # 'n', 's', 'm', 'l', or 'x' for different sizes
```

You can replace `'yolov8x.pt'` with the desired model size:

- `yolov8n.pt`: Nano (fastest, least accurate)
- `yolov8s.pt`: Small
- `yolov8m.pt`: Medium
- `yolov8l.pt`: Large
- `yolov8x.pt`: Extra-large (slowest, most accurate)
## Credits

This repository uses the YOLOv5 and YOLOv8 models, both developed by [Ultralytics](https://github.com/ultralytics).

For more information on YOLO, see the official [Ultralytics documentation](https://docs.ultralytics.com/).