Introduction to several architectures for large-scale IoT edge container cluster management - 0-edge containers and architectures

This article was last updated on: February 7, 2024

📚️Reference:
IoT Edge Computing series

What is an edge container?

The concept of edge containers

Edge containers are decentralized computing resources that are as close as possible to the end user or device to reduce latency, save bandwidth, and enhance the overall digital experience.

The number of devices that have access to the internet is increasing every day. These include but are not limited to:

  • Smart TVs
  • Smart home devices
  • Smartphones
  • Smart cars
  • A wide variety of other IoT smart devices

Most users run time-sensitive applications, and lag reduces the quality of the user experience. Distant centralized cloud services have high latency and are often to blame for poor application performance. Edge computing was developed to bring data processing closer to users and solve network-related performance issues.

Specifically, edge containers allow organizations to decentralize services by moving key components of applications to the edge of the network. By moving intelligence to the edge, organizations can achieve lower network costs and faster response times.

However, when organizations adopt edge containers/compute, they encounter issues such as managing heterogeneous devices (different processors, operating systems, etc.), resource-constrained devices, and intermittent connectivity.

📚️Reference:

Edge Computing will be 4x larger than cloud and will generate 75% of data worldwide by 2025. With hardware and software spread across hundreds or thousands of locations, the only feasible way to manage these distributed systems is the simple paradigms around observability, loosely coupled systems, declarative APIs, and robust automation that have made cloud native technologies so successful in the cloud. Kubernetes is already becoming a key part of the edge ecosystem, driving integrations and operations.

Source: Kubernetes on Edge Day

Why do you need containers?

Containers are easy-to-deploy software packages, and containerized applications are easy to distribute, making them a natural choice for edge computing solutions. Edge containers can be deployed in parallel to geographically different points of presence (PoPs) to achieve higher levels of availability than traditional cloud containers.

The main difference between cloud containers and edge containers is location. While cloud containers operate in remote regional data centers, edge containers sit at the edge of the network, closer to end users.

Since the main difference is location, edge containers use the same tools as cloud containers, so developers can apply their existing Docker expertise to edge computing. Containers can be managed through a web UI, Terraform/Ansible, or a container orchestration system such as Kubernetes.
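Because the tooling is identical, a workload destined for an edge host can be described with an ordinary Compose file. Below is a minimal, hypothetical sketch (the service name and image are placeholders; the resource limits reflect typical edge constraints rather than any specific product):

```yaml
services:
  sensor-gateway:
    image: example/sensor-gateway:1.0   # placeholder image
    restart: unless-stopped             # survive edge device reboots
    mem_limit: 256m                     # edge devices are RAM-constrained
    cpus: "0.5"                         # cap CPU on small boards
    ports:
      - "8080:8080"
```

The same file works against a local Docker daemon or a remote edge host, which is exactly why existing Docker skills transfer.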

Advantages of edge containers

  • Low latency: Edge containers offer extremely low latency because they are only the “last mile” from the end user.
  • Scalability: Edge networks have more PoPs than centralized clouds, so edge containers can be deployed to many locations at once, letting organizations better meet regional demand.
  • Maturity: Container technologies such as Docker are considered mature and battle-tested. Developers can keep using the Docker tools they already know, so no retraining is needed.
  • Reduced bandwidth: Centralized applications can incur high network fees because all traffic converges on the cloud provider’s data center. Edge containers sit close to users and can preprocess and cache data, cutting upstream traffic.
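The bandwidth advantage comes from doing work before data leaves the edge. A minimal illustrative sketch (the function and field names are made up for this example): raw per-second sensor readings are aggregated into per-window summaries, so only a fraction of the data is sent upstream.

```python
from statistics import mean

def summarize_readings(readings, window=60):
    """Aggregate raw readings into per-window summaries so only
    a fraction of the data leaves the edge device."""
    summaries = []
    for i in range(0, len(readings), window):
        chunk = readings[i:i + window]
        summaries.append({
            "min": min(chunk),
            "max": max(chunk),
            "avg": round(mean(chunk), 2),
            "count": len(chunk),
        })
    return summaries

# 10 minutes of 1 Hz readings -> 10 upstream messages instead of 600
raw = [20.0 + (i % 60) / 100 for i in range(600)]
summaries = summarize_readings(raw)
print(len(raw), "->", len(summaries))  # 600 -> 10
```

Real edge stacks do the same thing at larger scale (video frame filtering, protocol aggregation), but the principle is identical: preprocess locally, upload summaries.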

Disadvantages of edge containers

  • Management complexity: Distributing many containers, operating systems, and infrastructure devices across many regions requires careful planning and O&M/monitoring.
  • Larger attack surface: Edge devices are often hard to patch promptly because they are tied to specific hardware and widely dispersed, which makes them frequent targets of successful attacks.
  • Inter-PoP network charges: In addition to the usual ingress and egress fees, traffic between PoPs may be billed separately and must be budgeted for (e.g., network fees for 5G IoT cards).
  • Constrained resources: Edge devices have far less CPU, memory, and storage than a cloud data center and cannot offer comparable capacity. A typical edge device falls between roughly 1 core / 0.5 GB RAM / 8 GB storage and 2 cores / 8 GB RAM / 32 GB storage.
  • Poor network conditions: For example, a device on a metered 5G network is billed by traffic volume, and the 5G link itself is bandwidth-limited and unstable (the device may go offline for stretches of time).

Application scenarios of edge computing

Limited by the author’s experience, only a few representative examples are listed:

  • Commercial satellites
  • Aviation equipment: such as fighter jets
  • Transportation Industry:
    • Toll booths
    • Smart traffic management
    • Vehicle-road coordination
    • Smart parking
  • Energy industry
    • Coal mining equipment
  • Industrial manufacturing
    • Production line
  • CDN
  • Smart cars
  • Smart Campus
  • Finance: Banking terminals
  • Smart logistics
  • Electric power
    • Power-line inspection
  • Security monitoring

Common architecture for IoT edge container cluster management

This section targets one of the disadvantages of edge containers listed above: management complexity. Since hardware and software are scattered across hundreds or thousands of locations, the only viable way to manage these distributed systems is through the simple paradigms of observability, loosely coupled systems, declarative APIs, and robust automation that made cloud-native technologies successful in the cloud.

The common architecture is: cloud-edge-end three-tier architecture.

  1. Cloud: the cloud center; unified management and core computing;
  2. Edge: the edge side; edge computing and edge networking, connected to the cloud;
  3. End: the end-side devices.

At a minimum, the scenario needs to achieve the following goals:

  • Cloud-edge collaboration: Manage all edge container clusters from the cloud. Management covers at least two aspects: issuing commands and checking health status;
  • Edge autonomy: When the cloud-edge network is interrupted, unstable, or abnormal and the edge cannot reach the cloud, the edge must continue to run normally;
  • Lightweight edge: Consumes few resources, supports the ARM architecture, and runs normally under constrained resources.
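Edge autonomy in particular follows a common pattern: the edge agent caches the last desired state received from the cloud and falls back to that cache when the connection drops. A minimal sketch of the idea (class and function names are invented for illustration, not from any real framework):

```python
import json
import os
import tempfile

class EdgeAgent:
    """Sketch of edge autonomy: cache the last desired state from
    the cloud so the edge keeps running when the link goes down."""

    def __init__(self, cache_path):
        self.cache_path = cache_path

    def sync(self, fetch_from_cloud):
        """fetch_from_cloud() returns the desired state, or raises
        ConnectionError on network failure; fall back to the cache."""
        try:
            state = fetch_from_cloud()
            with open(self.cache_path, "w") as f:
                json.dump(state, f)           # persist for offline use
            return state, "online"
        except ConnectionError:
            with open(self.cache_path) as f:  # edge autonomy: reuse cache
                return json.load(f), "offline"

def cloud_ok():
    return {"replicas": 2}

def cloud_down():
    raise ConnectionError("cloud unreachable")

path = os.path.join(tempfile.mkdtemp(), "desired.json")
agent = EdgeAgent(path)
agent.sync(cloud_ok)                 # connected: state cached locally
state, mode = agent.sync(cloud_down) # disconnected: cache keeps it running
print(state, mode)
```

KubeEdge’s EdgeCore and K3s both implement far more elaborate versions of this local-state-caching idea.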

Several architectural scenarios for IoT edge container cluster management

In summary, there are several open source-based practices:

  1. Rancher + K3s: Rancher serves the cloud tier and K3s the edge tier; the end tier is the devices themselves. Multiple K3s edge clusters are managed centrally through Rancher.
  2. HashiCorp solution: Nomad (+ optional Consul) + Docker/other containers. Nomad is also a popular orchestration technology, though less widely adopted than Kubernetes. The Nomad UI/API/CLI provides unified management from the cloud, while nomad agent (+ optional consul agent) + Docker or other container runtimes serve the edge/end tiers.
  3. Portainer + Docker: Portainer is a container management solution similar to Rancher, but it can manage several container orchestrators, such as Docker, Docker Swarm, Kubernetes, and Nomad. In this scheme it manages Docker and Docker Swarm directly: Portainer runs in the cloud as the unified management portal, with Docker + Portainer Agent at the edge/end.
  4. KubeEdge: KubeEdge is an open-source system for extending native containerized application orchestration to hosts at the edge. Built on top of Kubernetes, it provides basic infrastructure support for networking, application deployment, and metadata synchronization between the cloud and the edge. In this scheme, a Kubernetes cluster plus KubeEdge CloudCore serves the cloud tier, EdgeCore the edge tier, and edged the end tier.
  5. In addition to the more mature and case-effective scenarios above, there are also the following scenarios:
    1. OpenYurt
    2. SuperEdge
    3. Akri
    4. WasmEdge: Based on Wasm, WasmEdge provides a lightweight, fast, secure, and portable alternative to Linux containers in both cloud-native and edge-native environments. A promising direction for the future.

The follow-up articles describe the first four solutions in detail, which the author considers mature and ready to implement today.
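To give a flavor of the HashiCorp option above, here is a minimal, hypothetical Nomad job sketch (the datacenter name and image are placeholders) that a cloud-side Nomad server could schedule onto edge agents; note the small resource reservation, sized for a constrained edge device:

```hcl
job "edge-app" {
  datacenters = ["edge-dc1"]   # placeholder edge datacenter
  type        = "service"

  group "app" {
    task "web" {
      driver = "docker"

      config {
        image = "example/web:1.0"   # placeholder image
      }

      resources {
        cpu    = 200   # MHz, modest reservation for edge hardware
        memory = 128   # MB
      }
    }
  }
}
```

The equivalent workloads in the Rancher + K3s and KubeEdge schemes would be ordinary Kubernetes manifests, which is precisely their appeal: the cloud-native tooling carries over unchanged.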

Keep reading

  1. Several architectures for large-scale IoT edge container cluster management - 1-Rancher+K3s
  2. Several architectures for large-scale IoT edge container cluster management - 2-HashiCorp solution Nomad
  3. Several architectures for large-scale IoT edge container cluster management - 3-Portainer
  4. Several architectures for large-scale IoT edge container cluster management - 4-Kubeedge
  5. Several architectures for large-scale IoT edge container cluster management - 5 - Summary

Original: https://e-whisper.com/posts/10785/
Author: east4ming
Posted on: February 19, 2023