# Surveillance System with Edge to Cloud Video Processing
## Overview
This project demonstrates a video processing surveillance system designed to enhance safety in residential areas and along roads by identifying and alerting residents and authorities to the presence of dangerous animals. The system uses a distributed Edge-to-Cloud architecture: edge devices handle real-time processing (such as motion detection), while cloud servers run intensive tasks such as object recognition, ensuring quick and accurate responses while minimizing data transmission to the cloud. A feedback mechanism monitors, understands, and adjusts to changing conditions, ensuring the system adheres to defined Service Level Objectives (SLOs).
- [Step 1: Build Docker Images for Services](#step-1-build-docker-images-for-services)
- [Step 2: Create or Update `docker-compose.yml`](#step-2-create-or-update-docker-composeyml)
- [Step 3: Deploy with Docker Compose](#step-3-deploy-with-docker-compose)
- [Step 4: Verify the Deployment](#step-4-verify-the-deployment)
## System Components
### Camera
- **Function:** The Camera component captures video frames from either a live camera feed or pre-recorded video files. It simulates realistic surveillance scenarios, including intervals of motion and no motion, with configurable event frequencies.
- **Process:**
  - Resizes video frames to 640px width using OpenCV for optimized processing.
  - Serializes frames and metadata (e.g., timestamp) using Python's `pickle` module.
  - Sends serialized frames over a network connection to the Motion Detection service.
- **Key Features:**
  - Supports pre-defined video sequences based on detected animal types (e.g., bear, tiger, wolf).
  - Configurable appearance frequency for specific events (e.g., dangerous animals) within an hour.
  - Monitors and records the frames-per-second (FPS) rate for performance analysis.
  - Integrates with a **TracerProvider** to enable distributed tracing of the frame transmission process. Each frame transmission is traced to measure latency and ensure reliability.
- **Connection:** Establishes a persistent TCP connection to the Motion Detection service and ensures real-time frame transmission even during intermittent connectivity issues.
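The resize-serialize-send path described above can be sketched as follows. This is a minimal illustration, not the repository's actual code: `pack_frame` and `unpack_frame` are hypothetical helpers, and the real wire format (header size, metadata fields) may differ. A NumPy array stands in for a frame already resized with OpenCV.

```python
import pickle
import struct

import numpy as np


def pack_frame(frame, timestamp):
    """Serialize a frame plus metadata and length-prefix it for TCP transport."""
    payload = pickle.dumps({"frame": frame, "timestamp": timestamp})
    # A 4-byte big-endian length header lets the receiver read exact message sizes
    # from the stream before deserializing.
    return struct.pack(">I", len(payload)) + payload


def unpack_frame(buf):
    """Inverse of pack_frame: strip the length header and deserialize."""
    (length,) = struct.unpack(">I", buf[:4])
    return pickle.loads(buf[4:4 + length])


# Dummy 640x480 RGB frame standing in for a resized camera capture.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
msg = pack_frame(frame, timestamp=1700000000.0)
decoded = unpack_frame(msg)
```

In the real service the resulting `msg` bytes would be written to the persistent TCP socket; the length prefix is one common way to delimit pickled messages on a stream.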
### Motion Detection
- **Function:** The Motion Detection component processes video frames received from the Camera component to identify significant motion events. If motion is detected, the relevant frames are sent to the Object Recognition service for further analysis.
- **Process:**
  - **Frame Reception:** Receives video frames over a network connection, deserializes them, and calculates the transmission time between the Camera and Motion Detection service.
  - **Motion Detection Algorithm:**
    - Converts frames to grayscale and applies Gaussian blur for noise reduction.
    - Compares consecutive frames to detect significant differences.
    - Identifies and marks regions of motion using contour detection.
  - **Action on Detection:** When motion is detected, sends the frame to the Object Recognition component for object identification.
  - **Real-time Metrics Monitoring:**
    - Tracks CPU usage and frame processing time.
    - Measures the frames-per-second (FPS) rate and updates a histogram for performance analysis.
    - Monitors the presence of motion events and transmission times using integrated gauges.
- **Key Features:**
  - **TracerProvider Integration:** Enables distributed tracing of frame reception, processing, and transmission, providing insights into latency and bottlenecks.
  - **Metrics Collection:**
    - Edge-to-cloud transmission time (`c2e_transmission_time`).
    - Frame processing time per frame (`md_processing_time`).
    - Real-time motion detection status (`md_detected_motion`).
    - FPS rate monitoring with histogram and gauge metrics.
  - **Robustness:** Handles network interruptions gracefully and retries sending frames when connections fail.
- **Connection:** Listens for incoming connections from the Camera service and communicates with the Object Recognition service over TCP.
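The core frame-differencing idea can be sketched without the full pipeline. Note the assumptions: the real service uses OpenCV (`cv2.cvtColor`, `cv2.GaussianBlur`, `cv2.absdiff`, `cv2.findContours`); the NumPy-only version below approximates grayscale conversion by channel averaging and skips blurring and contour extraction, and `detect_motion` with its thresholds is purely illustrative.

```python
import numpy as np


def detect_motion(prev_frame, frame, threshold=25.0, min_changed=500):
    """Frame-differencing motion check (NumPy stand-in for the OpenCV pipeline).

    The real service converts frames to grayscale, smooths them with a Gaussian
    blur, and marks motion regions via contour detection; here we approximate
    grayscale by averaging the color channels and simply count changed pixels.
    """
    gray_prev = prev_frame.mean(axis=2)
    gray_curr = frame.mean(axis=2)
    diff = np.abs(gray_curr - gray_prev)
    changed = int((diff > threshold).sum())
    return changed >= min_changed


prev = np.zeros((480, 640, 3), dtype=np.float32)

still = np.zeros((480, 640, 3), dtype=np.float32)
moving = still.copy()
moving[100:200, 100:200] = 255.0  # a bright 100x100 region appears in the scene

no_motion = detect_motion(prev, still)    # identical frames: no changed pixels
has_motion = detect_motion(prev, moving)  # 10,000 changed pixels exceed min_changed
```

Thresholding on the number of changed pixels is what makes the detector ignore sensor noise while still reacting to a region-sized change, which mirrors the grayscale-blur-diff-contour sequence listed above.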
### Object Recognizer
- **Function:** The Object Recognizer component processes frames received from the Motion Detector to identify objects using a pre-trained YOLO model. It tracks performance metrics such as processing time, queue length, and end-to-end response time for frames.
- **Process:**
  - **Frame Reception:**
    - Receives serialized frames sent by the Motion Detector.
    - Measures the edge-to-cloud transmission time (`md_e2c_transmission_time`).
    - Tracks the size of the incoming frame queue (`or_len_q`).
  - **Object Detection:**
    - Applies the YOLO algorithm to detect objects in the frame.
    - Outputs bounding boxes, class labels, and detection confidences.
    - Saves annotated frames for review or further processing.
  - **Performance Monitoring:**
    - Calculates and tracks the frame processing time (`or_processing_time`).
    - Measures the total response time for a frame from capture to result generation (`response_time`).
- **Key Features:**
  - **TracerProvider Integration:** Ensures distributed tracing across components for end-to-end visibility into delays and bottlenecks.
  - **Metrics Collection:**
    - Frame queue length monitoring (`or_len_q`).
    - Processing time per frame (`or_processing_time`).
    - Edge-to-cloud transmission time for frames (`md_e2c_transmission_time`).
    - Response time from frame capture to detection completion (`response_time`).
  - **YOLO-based Object Detection:** Uses the YOLO v3 model for object detection, with configurable thresholds for confidence and non-maximum suppression.
  - **Concurrency:** Supports multiple clients by handling frame processing in separate threads to ensure scalability and efficiency.
- **Connection:** Listens for connections from the Motion Detector and processes incoming frames asynchronously.
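The queue-plus-worker-thread pattern behind the concurrency and `or_len_q`/`or_processing_time` metrics can be sketched as below. This is an illustrative skeleton only: the actual YOLO v3 pass (loading weights and running the network) is stubbed out by a hypothetical `recognize` function, and the metric bookkeeping is simplified to a results list.

```python
import queue
import threading
import time

frame_q = queue.Queue()  # incoming frames; qsize() backs the or_len_q metric


def recognize(frame):
    """Stand-in for the YOLO v3 detection pass (stubbed for illustration)."""
    return [("bear", 0.91)]  # (class label, confidence) pairs


def worker(results):
    """Drain the queue, timing each detection like or_processing_time."""
    while True:
        item = frame_q.get()
        if item is None:  # sentinel: shut the worker down cleanly
            break
        start = time.perf_counter()
        detections = recognize(item)
        elapsed = time.perf_counter() - start
        results.append((detections, elapsed, frame_q.qsize()))


results = []
t = threading.Thread(target=worker, args=(results,))
t.start()
for i in range(3):
    frame_q.put(f"frame-{i}")  # frames arriving from the Motion Detector
frame_q.put(None)
t.join()
```

Running one worker thread per connected client is one straightforward way to keep slow detections on one stream from blocking frame reception on another.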
### Other Components (Monitoring and Distributed Tracing)
To monitor your application effectively, you can integrate the following components alongside OpenTelemetry to gather comprehensive metrics and performance data:
1. **OpenTelemetry Collector**
   - **Function:** The OpenTelemetry Collector is a vendor-agnostic agent that collects, processes, and exports telemetry data (traces, metrics, logs).
   - **Responsibilities:**
     - **Metrics Collection:** Collects and processes data from the application services (e.g., Camera, Motion Detection, Object Recognition).
     - **Exporters:** Sends the processed telemetry data to Prometheus or any other backend of your choice for long-term storage and visualization.
2. **cAdvisor**
   - **Function:** cAdvisor (Container Advisor) provides insights into the resource usage and performance characteristics of running containers. It helps monitor containerized applications for CPU, memory, and network usage.
   - **Metrics Sent to Prometheus:**
     - **CPU Usage:** Percentage of CPU used by each container.
     - **Memory Usage:** Memory consumption per container.
     - **Network I/O:** Amount of network traffic generated by containers.
     - **Disk I/O:** Disk read/write activity.
     - **Container Lifespan Metrics:** Metrics related to the lifecycle of containers.
3. **Node Exporter**
   - **Function:** Node Exporter is a Prometheus exporter for hardware and OS metrics exposed by *nix kernels. It provides detailed data about system performance.
   - **Metrics Sent to Prometheus:**
     - **CPU Load:** System CPU load averages (1, 5, 15 minutes).
     - **Memory Usage:** Memory and swap usage at the system level.
     - **Disk Utilization:** Disk usage, including free and used space, and disk I/O.
     - **Network Stats:** Per-interface packet and byte counts, errors, and drops.
     - **System Uptime:** System uptime and load average.
4. **Prometheus**
   - **Function:** Prometheus is a monitoring and alerting toolkit designed for reliability and scalability. It scrapes and stores metrics from various exporters and services.
   - **Metrics Collected:**
     - **Custom Application Metrics:** Metrics sent from OpenTelemetry, cAdvisor, and Node Exporter (e.g., FPS, processing time, transmission times).
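For reference, a Prometheus scrape configuration wiring these three sources together might look like the following. The job names and ports are assumptions for this sketch (the OTel Collector's Prometheus exporter is commonly exposed on 8889, cAdvisor on 8080, Node Exporter on 9100); match them to your actual deployment.

```yaml
# prometheus.yml — scrape the metric sources described above
scrape_configs:
  - job_name: otel-collector
    static_configs:
      - targets: ["otel-collector:8889"]
  - job_name: cadvisor
    static_configs:
      - targets: ["cadvisor:8080"]
  - job_name: node-exporter
    static_configs:
      - targets: ["node-exporter:9100"]
```

The target hostnames assume all services run on one Docker Compose network, where the service name doubles as the DNS name.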
### Step 1: Build Docker Images for Services
1. **Clone the repository and change into its root directory**:
   ```bash
   cd surveillance-system-edge-to-cloud-video-processing
   ```
2. **Build the Docker images locally**:
   You don't need to navigate to each service's directory individually. From the root of the cloned repository, run a `docker build` command for each service.
   These commands automatically find the `Dockerfile` in each service's directory (`./services/camera`, `./services/motion_detector`, and `./services/object_recognizer`). The `latest` tag marks the most recent image.
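   Concretely, the build commands might look like the following (the image names `camera`, `motion_detector`, and `object_recognizer` are assumptions in this sketch; use whatever names your `docker-compose.yml` references):

   ```bash
   # Build one image per service from the repository root;
   # -t tags the image and the final path is the service's build context.
   docker build -t camera:latest ./services/camera
   docker build -t motion_detector:latest ./services/motion_detector
   docker build -t object_recognizer:latest ./services/object_recognizer
   ```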
### Step 2: Create or Update `docker-compose.yml`
Make sure the `docker-compose.yml` file is set up correctly. Here's an example for reference:
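A minimal sketch, assuming the three application services plus the monitoring stack described above (service names, image tags, ports, and mounted config paths are illustrative; adjust them to your actual build and config files):

```yaml
version: "3.8"
services:
  camera:
    image: camera:latest
    depends_on:
      - motion_detector
  motion_detector:
    image: motion_detector:latest
    depends_on:
      - object_recognizer
  object_recognizer:
    image: object_recognizer:latest
  otel-collector:
    image: otel/opentelemetry-collector:latest
    volumes:
      - ./otel-collector-config.yaml:/etc/otelcol/config.yaml
  prometheus:
    image: prom/prometheus:latest
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:ro
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
  node-exporter:
    image: prom/node-exporter:latest
```

With every container on the default Compose network, the Camera can reach the Motion Detector by service name, and Prometheus can scrape `otel-collector`, `cadvisor`, and `node-exporter` the same way.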