
README for Surveillance System with Edge-to-Cloud Video Processing

Overview

This project demonstrates a video processing surveillance system designed to enhance safety in residential areas and along roads by identifying and alerting residents and authorities to the presence of dangerous animals. The system integrates edge devices and cloud servers to ensure quick and accurate responses while minimizing data transmission to the cloud. A feedback mechanism is implemented to monitor, understand, and adjust to changing conditions, ensuring the system adheres to defined Service Level Objectives (SLOs).

System Components

Camera

  • Function: The Camera component captures video frames from either a camera feed or pre-recorded video files. It simulates realistic surveillance scenarios, including intervals of motion and no motion, with configurable event frequencies.
  • Process:
    • Resizes video frames to 640px width using OpenCV for optimized processing.
    • Serializes frames and metadata (e.g., timestamp) using Python's pickle module.
    • Sends serialized frames over a network connection to the Motion Detection service.
  • Key Features:
    • Supports pre-defined video sequences based on detected animal types (e.g., bear, tiger, wolf).
    • Configurable appearance frequency for specific events (e.g., dangerous animals) within an hour.
    • Monitors and records the frames-per-second (FPS) rate for performance analysis.
    • Integrates with a TracerProvider to enable distributed tracing for detailed monitoring of the frame transmission process. Each frame transmission is traced to measure latency and ensure reliability.
  • Connection: Establishes a persistent TCP connection to the Motion Detection service and ensures real-time frame transmission even during intermittent connectivity issues.
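The serialize-and-send step above can be sketched as follows. This is a minimal illustration, not the project's code: the helper names and the 4-byte length prefix are our assumptions, and a NumPy array stands in for a frame already resized to 640px width with `cv2.resize`.

```python
import pickle
import struct
import time

import numpy as np

def pack_frame(frame):
    """Serialize a frame plus metadata (timestamp) and length-prefix it
    for TCP transmission, as a sketch of what the Camera sends to the
    Motion Detection service."""
    payload = pickle.dumps({"frame": frame, "timestamp": time.time()})
    # 4-byte big-endian length header so the receiver knows where the message ends
    return struct.pack(">I", len(payload)) + payload

def unpack_frame(buf):
    """Inverse of pack_frame: strip the length header and deserialize."""
    (length,) = struct.unpack(">I", buf[:4])
    return pickle.loads(buf[4:4 + length])

# Stand-in for a 640px-wide frame produced by cv2.resize
frame = np.zeros((360, 640, 3), dtype=np.uint8)
msg = pack_frame(frame)
meta = unpack_frame(msg)
```

On the receiving side, the embedded timestamp is what lets Motion Detection compute the camera-to-edge transmission time.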

Motion Detection (Edge)

  • Function: The Motion Detection component processes video frames received from the Camera component to identify significant motion events. If motion is detected, relevant frames are sent to the Object Recognition service for further analysis.

  • Process:

    • Frame Reception: Receives video frames over a network connection, deserializes them, and calculates the transmission time between the Camera and Motion Detection service.
    • Motion Detection Algorithm:
      • Converts frames to grayscale and applies Gaussian blur for noise reduction.
      • Compares consecutive frames to detect significant differences.
      • Identifies and marks regions of motion using contour detection.
    • Action on Detection: When motion is detected, it sends the frame to the Object Recognition component for object identification.
    • Real-time Metrics Monitoring:
      • Tracks CPU usage and frame processing time.
      • Measures frames-per-second (FPS) rate and updates a histogram for performance analysis.
      • Monitors the presence of motion events and transmission times using integrated gauges.
  • Key Features:

    • TracerProvider Integration: Enables distributed tracing for frame reception, processing, and transmission, providing insights into latency and bottlenecks.
    • Metrics Collection:
      • Edge-to-Cloud transmission time (c2e_transmission_time).
      • Frame processing time per frame (md_processing_time).
      • Real-time motion detection status (md_detected_motion).
      • FPS rate monitoring with histogram and gauge metrics.
    • Robustness: Handles network interruptions gracefully and retries sending frames when connections fail.
  • Connection: Listens for incoming connections from the Camera service and communicates with the Object Recognition service over TCP.
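The core of the detection algorithm above is frame differencing. The sketch below shows just that step with NumPy; the real service additionally applies `cv2.GaussianBlur` for noise reduction and `cv2.findContours` to mark motion regions, and the threshold values here are illustrative assumptions, not the project's settings.

```python
import numpy as np

def detect_motion(prev_gray, curr_gray, pixel_delta=25, min_changed=500):
    """Minimal frame-differencing sketch: compare consecutive grayscale
    frames and report motion when enough pixels change significantly."""
    diff = np.abs(prev_gray.astype(np.int16) - curr_gray.astype(np.int16))
    changed = int((diff > pixel_delta).sum())   # pixels that differ significantly
    return changed >= min_changed               # True => forward frame to Object Recognition

# Two synthetic grayscale frames: a bright region "appears" in the second
a = np.zeros((360, 640), dtype=np.uint8)
b = a.copy()
b[100:150, 200:260] = 255
```

With these frames, `detect_motion(a, b)` reports motion (3000 changed pixels), while `detect_motion(a, a)` does not.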

Object Recognizer (Cloud)

  • Function: The Object Recognizer component processes frames received from the Motion Detector to identify objects using a pre-trained YOLO model. It tracks performance metrics such as processing time, queue length, and end-to-end response time for frames.

  • Process:

    • Frame Reception:
      • Receives serialized frames sent by the Motion Detector.
      • Measures the edge-to-cloud transmission time (md_e2c_transmission_time).
      • Tracks the size of the incoming frame queue (or_len_q).
    • Object Detection:
      • Applies the YOLO algorithm to detect objects in the frame.
      • Outputs bounding boxes, class labels, and detection confidence.
      • Saves annotated frames for review or further processing.
    • Performance Monitoring:
      • Calculates and tracks frame processing time (or_processing_time).
      • Measures the total response time for a frame from its capture to result generation (response_time).
  • Key Features:

    • TracerProvider Integration: Ensures distributed tracing across components for end-to-end visibility into delays and bottlenecks.
    • Metrics Collection:
      • Frame queue length monitoring (or_len_q).
      • Processing time per frame (or_processing_time).
      • Edge-to-cloud transmission time for frames (md_e2c_transmission_time).
      • Response time from frame capture to detection completion (response_time).
    • YOLO-based Object Detection: Utilizes the YOLOv3 model for object detection, with configurable thresholds for confidence and non-maximum suppression.
    • Concurrency: Supports multiple clients by handling frame processing in separate threads to ensure scalability and efficiency.
  • Connection: Listens for connections from the Motion Detector and processes incoming frames asynchronously.
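The confidence and non-maximum suppression thresholds mentioned above govern the post-processing of raw YOLO detections. A self-contained sketch of that step is below; the box coordinates, scores, and threshold values are illustrative, and the YOLOv3 inference itself (e.g. via `cv2.dnn`) is not reproduced here.

```python
import numpy as np

def nms(boxes, scores, conf_thresh=0.5, iou_thresh=0.4):
    """Confidence filtering + non-maximum suppression over detections.
    Boxes are (x1, y1, x2, y2); returns indices of the kept boxes."""
    keep = []
    # Consider boxes in descending score order, dropping low-confidence ones
    order = [i for i in np.argsort(scores)[::-1] if scores[i] >= conf_thresh]
    while order:
        i = order.pop(0)
        keep.append(i)
        remaining = []
        for j in order:
            # Intersection-over-union between box i and box j
            xx1 = max(boxes[i][0], boxes[j][0]); yy1 = max(boxes[i][1], boxes[j][1])
            xx2 = min(boxes[i][2], boxes[j][2]); yy2 = min(boxes[i][3], boxes[j][3])
            inter = max(0, xx2 - xx1) * max(0, yy2 - yy1)
            area_i = (boxes[i][2] - boxes[i][0]) * (boxes[i][3] - boxes[i][1])
            area_j = (boxes[j][2] - boxes[j][0]) * (boxes[j][3] - boxes[j][1])
            iou = inter / (area_i + area_j - inter)
            if iou < iou_thresh:
                remaining.append(j)   # keep: does not overlap the winner much
        order = remaining
    return keep

# Two overlapping detections of the same object, plus one distinct object
boxes = [(10, 10, 60, 60), (12, 12, 62, 62), (200, 200, 260, 260)]
scores = [0.9, 0.75, 0.8]
```

Here `nms(boxes, scores)` keeps the highest-scoring box of the overlapping pair and the distinct third box.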

Other Components (monitoring and distributed tracing)

To monitor your application effectively, you can integrate the following components alongside OpenTelemetry to gather comprehensive metrics and performance data:

  1. OpenTelemetry Collector

    • Function: The OpenTelemetry Collector is a vendor-agnostic agent that collects, processes, and exports telemetry data (traces, metrics, logs).
    • Metrics Sent to Prometheus/Backends:
      • Metrics Collection: Collects and processes data from various services (e.g., Camera, Motion Detection, Object Recognition).
      • Exporters: Sends processed telemetry data to Prometheus or any other backend of your choice for long-term storage and visualization.
  2. cAdvisor

    • Function: cAdvisor (Container Advisor) provides insights into resource usage and performance characteristics of running containers. It helps monitor containerized applications for CPU, memory, and network usage.
    • Metrics Sent to Prometheus:
      • CPU Usage: Percentage of CPU usage by containers.
      • Memory Usage: Memory consumption per container.
      • Network I/O: Amount of network traffic generated by containers.
      • Disk I/O: Disk read/write activity.
      • Container Lifespan Metrics: Metrics related to the lifecycle of containers.
  3. Node Exporter

    • Function: Node Exporter is a Prometheus exporter for hardware and OS metrics exposed by *nix kernels. It provides detailed data about system performance.
    • Metrics Sent to Prometheus:
      • CPU Load: System CPU load averages (1, 5, 15 minutes).
      • Memory Usage: Memory and swap usage at the system level.
      • Disk Utilization: Disk usage, including free and used space, disk I/O.
      • Network Stats: Network interfaces' packet and byte counts, errors, and drops.
      • System Uptime: System uptime and load average.
  4. Prometheus

    • Function: Prometheus is a monitoring and alerting toolkit designed for reliability and scalability. It scrapes and stores metrics from various exporters and services.
    • Metrics Collected:
      • Custom Application Metrics: Metrics sent from OpenTelemetry, cAdvisor, and Node Exporter (e.g., FPS, processing time, transmission times, etc.).
      • Service Metrics: Metrics from distributed services (Camera, Motion Detection, Object Recognition).
      • System Metrics: Metrics from system-level exporters (e.g., CPU, memory usage, disk I/O from Node Exporter, cAdvisor).

These components together allow you to monitor the application comprehensively, including its performance, system health, container resource usage, and overall operational efficiency.
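A Prometheus scrape configuration tying these components together might look like the sketch below. Job names, hostnames, ports, and intervals are illustrative assumptions, not the project's actual settings.

```yaml
# prometheus.yml — illustrative scrape configuration
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: otel-collector      # metrics exported by the OpenTelemetry Collector
    static_configs:
      - targets: ["otel-collector:8889"]
  - job_name: cadvisor            # container CPU/memory/network/disk metrics
    static_configs:
      - targets: ["cadvisor:8080"]
  - job_name: node-exporter      # host-level CPU, memory, disk, network metrics
    static_configs:
      - targets: ["node-exporter:9100"]
```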

Scenarios Affecting Performance

Normal Conditions

  • Description: System detects and processes video frames efficiently, meeting all SLOs without any issues.

High Traffic Load

  • Description: During peak hours, or when the number of video feeds increases significantly, the system experiences a higher traffic load.
  • Impact: Potential decrease in frame rate.

High Processing Time

  • Description: Some nodes may experience longer processing times due to a lack of computational resources.

Large Distance

  • Description: The Motion Detection service is deployed far from the cameras.
  • Impact: Increased delay in frame transmission.

Metrics Collected

Here are the metrics sent to OpenTelemetry:

  1. FPS (Frames per Second)
  2. Camera-to-Edge Transmission Time (c2e_transmission_time)
  3. Frame Processing Time (md_processing_time)
  4. Edge-to-Cloud Transmission Time (md_e2c_transmission_time)
  5. Response Time (response_time)
  6. Frame Queue Length (or_len_q)
  7. Processing Time (or_processing_time)
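The timing metrics above are all derived from timestamps carried in frame metadata or measured around processing steps. A minimal sketch of how they can be computed (the helper names are ours, not the project's identifiers):

```python
def transmission_time(sent_ts, received_ts):
    """c2e_transmission_time / md_e2c_transmission_time: receive time minus
    the timestamp the sender embedded in the frame metadata (seconds)."""
    return received_ts - sent_ts

def response_time(capture_ts, detection_done_ts):
    """response_time: from frame capture at the Camera to detection
    completion at the Object Recognizer (seconds)."""
    return detection_done_ts - capture_ts

def fps(frame_count, window_seconds):
    """Frames-per-second over a measurement window."""
    return frame_count / window_seconds
```

Each of these values is then recorded to the corresponding OpenTelemetry gauge or histogram.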

Actions

  1. Scaling CPU on Cloud: Adjusting computational resources on the cloud to handle increased load.
  2. Moving an instance of Motion Detector: Repositioning the Motion Detection service to optimize performance.

Service Level Objectives (SLOs)

  1. Response Time: The time taken to detect and respond to an event.
  2. Frame Dropping Rate: The frequency of dropped frames during transmission or processing.

Diagram

Figure 1: Edge-to-cloud video processing use case — a real-time system for detecting dangerous animals and alerting residents and authorities.

References

  • OpenCV library for motion detection.
  • YOLOv3 model for object recognition.