Linux advanced topics

Deep dive into Linux container technology, virtualization, cluster management, high-performance computing, and cloud-native technologies


1. Container Technology

Container technology is an important advanced feature of Linux systems, providing lightweight virtualization solutions for isolating and managing applications. Docker is currently the most popular container platform.

1.1 Docker Basics

# Install Docker
sudo apt install docker.io  # Debian/Ubuntu
sudo yum install docker  # CentOS/RHEL

# Start and enable Docker
sudo systemctl start docker
sudo systemctl enable docker

# View Docker status
sudo systemctl status docker

# View Docker version
docker --version

# Run first container
docker run hello-world

# View running containers
docker ps

# View all containers
docker ps -a

# Pull image
docker pull ubuntu

# Run container in interactive mode
docker run -it ubuntu /bin/bash

# Exit container
exit

# Start stopped container
docker start container_id

# Stop a running container
docker stop container_id

# Delete container
docker rm container_id

# View local images
docker images

# Delete image
docker rmi image_id

# Build a custom image (requires a Dockerfile in the current directory)
docker build -t myapp .
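The `docker build` command above expects a Dockerfile in the current directory. A minimal example, assuming a hypothetical Python app with an `app.py` entry point:

```dockerfile
# Minimal Dockerfile: base image, working directory, app files, start command
FROM python:3.9-slim
WORKDIR /app
COPY . .
CMD ["python", "app.py"]
```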

1.2 Docker Compose

# Install Docker Compose
sudo apt install docker-compose  # Debian/Ubuntu
sudo yum install docker-compose  # CentOS/RHEL

# View Docker Compose version
docker-compose --version

# Create docker-compose.yml file
nano docker-compose.yml
version: '3'
services:
  web:
    build: .
    ports:
      - "80:80"
    volumes:
      - ./app:/app
    depends_on:
      - db
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: password
      MYSQL_DATABASE: myapp
    volumes:
      - db_data:/var/lib/mysql
volumes:
  db_data:

# Start services
docker-compose up

# Start services in background
docker-compose up -d

# Stop services
docker-compose down

# View service status
docker-compose ps

# View service logs
docker-compose logs

# Restart services
docker-compose restart

# Build services
docker-compose build

1.3 Docker Swarm

# Initialize Docker Swarm
docker swarm init

# View Swarm status
docker info

# Add worker node (run `docker swarm join-token worker` on a manager to
# re-display the join command and token)
docker swarm join --token token manager_ip:2377

# View nodes
docker node ls

# Create service
docker service create --name web --replicas 3 -p 80:80 nginx

# View services
docker service ls

# View service details
docker service inspect web

# Scale service
docker service scale web=5

# Update service
docker service update --image nginx:alpine web

# Delete service
docker service rm web

# Leave Swarm
docker swarm leave --force

2. Virtualization Technology

Virtualization technology is an important advanced feature of Linux systems, allowing multiple virtual machines to run on a single physical server. KVM is the built-in virtualization solution in the Linux kernel.

2.1 KVM Basics

# Check if CPU supports virtualization
grep -E '(vmx|svm)' /proc/cpuinfo
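The grep above dumps every matching CPU flag line, which is noisy. A small sketch that turns it into a count — a result of 0 means KVM has no hardware acceleration (VT-x/AMD-V) to use:

```shell
#!/bin/sh
# Count CPU threads that advertise hardware virtualization support.
# grep -c exits non-zero when the count is 0, so mask that with || true.
count=$(grep -cE '(vmx|svm)' /proc/cpuinfo || true)
echo "virtualization-capable threads: $count"
```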

# Install KVM and related tools
sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils  # Debian/Ubuntu
sudo yum install qemu-kvm libvirt libvirt-python libguestfs-tools virt-install  # CentOS/RHEL

# Start and enable libvirt
sudo systemctl start libvirtd
sudo systemctl enable libvirtd

# View libvirt status
sudo systemctl status libvirtd

# View virtual networks
sudo virsh net-list

# Create virtual machine
sudo virt-install \
  --name ubuntu-vm \
  --ram 2048 \
  --disk path=/var/lib/libvirt/images/ubuntu-vm.qcow2,size=20 \
  --vcpus 2 \
  --os-type linux \
  --os-variant ubuntu20.04 \
  --network network=default \
  --graphics vnc,listen=0.0.0.0 \
  --console pty,target_type=serial \
  --location 'http://archive.ubuntu.com/ubuntu/dists/focal/main/installer-amd64/' \
  --extra-args 'console=ttyS0,115200n8 serial'

# View virtual machines
sudo virsh list

# View all virtual machines
sudo virsh list --all

# Start virtual machine
sudo virsh start ubuntu-vm

# Shutdown virtual machine
sudo virsh shutdown ubuntu-vm

# Force shutdown virtual machine
sudo virsh destroy ubuntu-vm

# Reboot virtual machine
sudo virsh reboot ubuntu-vm

# View virtual machine information
sudo virsh dominfo ubuntu-vm

# Delete virtual machine
sudo virsh undefine ubuntu-vm

# Export virtual machine configuration
sudo virsh dumpxml ubuntu-vm > ubuntu-vm.xml

# Import virtual machine configuration
sudo virsh define ubuntu-vm.xml

2.2 Virt-manager

# Install virt-manager
sudo apt install virt-manager  # Debian/Ubuntu
sudo yum install virt-manager  # CentOS/RHEL

# Start virt-manager
virt-manager

# Connect to remote libvirt
virt-manager --connect qemu+ssh://user@remote/system

# Create virtual machine using virt-manager
# 1. Open virt-manager
# 2. Click "New Virtual Machine"
# 3. Select installation method
# 4. Configure virtual machine parameters
# 5. Start installation

3. Kubernetes

Kubernetes is an open-source container orchestration platform for automating container deployment, scaling, and management. It is a core component of cloud-native technology.

3.1 Kubernetes Basics

# Install kubectl (requires the Kubernetes package repository to be configured
# first; see the official Kubernetes installation documentation)
sudo apt install kubectl  # Debian/Ubuntu
sudo yum install kubectl  # CentOS/RHEL

# View kubectl version
kubectl version

# Install Minikube (local development environment)
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube

# Start Minikube
minikube start

# View cluster status
kubectl cluster-info

# View nodes
kubectl get nodes

# Deploy an application
kubectl create deployment nginx --image=nginx

# View deployments
kubectl get deployments

# Expose service
kubectl expose deployment nginx --port=80 --type=NodePort

# View services
kubectl get services

# View Pods
kubectl get pods

# Enter Pod
kubectl exec -it pod_name -- /bin/bash

# View Pod logs
kubectl logs pod_name

# Scale deployment
kubectl scale deployment nginx --replicas=3

# Update deployment
kubectl set image deployment nginx nginx=nginx:alpine

# Delete deployment
kubectl delete deployment nginx

# Delete service
kubectl delete service nginx

# Stop Minikube
minikube stop

# Delete Minikube cluster
minikube delete

3.2 Kubernetes Configuration

# Create Deployment configuration file
nano nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

# Apply configuration
kubectl apply -f nginx-deployment.yaml

# Create Service configuration file
nano nginx-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
  type: NodePort

# Apply configuration
kubectl apply -f nginx-service.yaml

# Create ConfigMap configuration file
nano configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  config.json: |
    {
      "database": "mysql",
      "host": "db.example.com",
      "port": 3306
    }

# Apply configuration
kubectl apply -f configmap.yaml

# Create Secret configuration file
nano secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
data:
  username: YWRtaW4=
  password: cGFzc3dvcmQ=

# Apply configuration
kubectl apply -f secret.yaml
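The `data` values in a Secret are base64-encoded, not encrypted; the strings in the manifest above can be produced and inspected with coreutils:

```shell
# Encode Secret values (printf %s avoids a trailing newline in the encoding)
printf %s 'admin' | base64       # YWRtaW4=
printf %s 'password' | base64    # cGFzc3dvcmQ=

# Decode them back
printf %s 'YWRtaW4=' | base64 -d       # admin
printf %s 'cGFzc3dvcmQ=' | base64 -d   # password
```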

# View configurations
kubectl get configmaps
kubectl get secrets

# Delete configurations
kubectl delete -f nginx-deployment.yaml
kubectl delete -f nginx-service.yaml
kubectl delete -f configmap.yaml
kubectl delete -f secret.yaml

4. High-Performance Computing

High-Performance Computing (HPC) is an important application area of Linux systems, used for processing large-scale computing tasks such as scientific simulations and data analysis.

4.1 Parallel Computing

# Install OpenMP
sudo apt install libomp-dev  # Debian/Ubuntu
sudo yum install libomp-devel  # CentOS/RHEL

# Create OpenMP test program
nano hello_omp.c
#include <stdio.h>
#include <omp.h>

int main() {
    #pragma omp parallel
    {
        int id = omp_get_thread_num();
        int num_threads = omp_get_num_threads();
        printf("Hello from thread %d of %d\n", id, num_threads);
    }
    return 0;
}

# Compile OpenMP program
gcc -fopenmp hello_omp.c -o hello_omp

# Run OpenMP program
./hello_omp

# Install MPI
sudo apt install openmpi-bin libopenmpi-dev  # Debian/Ubuntu
sudo yum install openmpi openmpi-devel  # CentOS/RHEL

# Create MPI test program
nano hello_mpi.c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello from process %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}

# Compile MPI program
mpicc hello_mpi.c -o hello_mpi

# Run MPI program
mpirun -np 4 ./hello_mpi

4.2 Performance Optimization

# Install performance analysis tools (perf lives in the linux-tools packages
# on Debian/Ubuntu; gprof is part of binutils, which is usually preinstalled)
sudo apt install linux-tools-common linux-tools-$(uname -r) valgrind  # Debian/Ubuntu
sudo yum install perf valgrind  # CentOS/RHEL

# Analyze performance with perf
perf stat ./program
perf record ./program
perf report

# Analyze memory usage with valgrind
valgrind --leak-check=full ./program

# Analyze performance with gprof (compile with -pg first)
gcc -pg program.c -o program
./program
gprof program gmon.out > analysis.txt

# Compiler optimization levels
gcc -O0 -g program.c -o program_debug  # No optimization, with debug info
gcc -O1 program.c -o program_O1  # Basic optimization
gcc -O2 program.c -o program_O2  # More optimization
gcc -O3 program.c -o program_O3  # Aggressive optimization
gcc -Ofast program.c -o program_Ofast  # -O3 plus fast math (may break strict standards compliance)

# Enable vector instructions during compilation
gcc -march=native -mtune=native program.c -o program

# View CPU information
lscpu
grep -m 1 "model name" /proc/cpuinfo

# View memory information
free -h
cat /proc/meminfo | head -20
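As a small worked example, the raw `/proc/meminfo` fields can be combined into a single availability figure (a sketch using awk; `MemAvailable` requires Linux 3.14 or later):

```shell
#!/bin/sh
# Print available memory as a percentage of total, from /proc/meminfo
awk '/^MemTotal/ {t=$2} /^MemAvailable/ {a=$2}
     END {printf "available: %.1f%%\n", 100*a/t}' /proc/meminfo
```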

# View disk I/O performance
iostat -d -x 1

# View network performance
ethtool eth0
netstat -i

# Configure system limits
sudo nano /etc/security/limits.conf
# Add the following configurations
# * soft nofile 65536
# * hard nofile 65536
# * soft nproc 65536
# * hard nproc 65536
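Whether limits like those above are in effect for the current shell can be checked with `ulimit`; a small sketch (the 65536 target mirrors the example values):

```shell
#!/bin/sh
# Compare the current open-file limit against a target value
target=65536
current=$(ulimit -n)
if [ "$current" = "unlimited" ] || [ "$current" -ge "$target" ]; then
    echo "nofile limit OK ($current)"
else
    echo "nofile limit too low: $current < $target"
fi
```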

5. Kernel Programming

Kernel programming is an advanced topic in Linux systems, involving the development of Linux kernel modules and the tuning of kernel parameters.

5.1 Kernel Module Development

# Install kernel development tools
sudo apt install build-essential linux-headers-$(uname -r)  # Debian/Ubuntu
sudo yum install gcc kernel-devel kernel-headers  # CentOS/RHEL

# Create simple kernel module
nano hello_module.c
#include <linux/init.h>
#include <linux/module.h>
#include <linux/kernel.h>

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Your Name");
MODULE_DESCRIPTION("A simple Linux kernel module");
MODULE_VERSION("0.1");

static int __init hello_init(void) {
    printk(KERN_INFO "Hello, World!\n");
    return 0;
}

static void __exit hello_exit(void) {
    printk(KERN_INFO "Goodbye, World!\n");
}

module_init(hello_init);
module_exit(hello_exit);

# Create Makefile (recipe lines must be indented with a tab, not spaces)
nano Makefile
obj-m += hello_module.o

all:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules

clean:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean

# Compile kernel module
make

# Load kernel module
sudo insmod hello_module.ko

# View kernel logs
dmesg | tail

# View loaded modules
lsmod | grep hello_module

# Unload kernel module
sudo rmmod hello_module

# View kernel logs
dmesg | tail

5.2 Kernel Parameter Tuning

# View kernel version
uname -r

# View kernel parameters
sysctl -a

# View specific kernel parameter
sysctl net.ipv4.tcp_syncookies

# Modify kernel parameter (temporary)
sudo sysctl -w net.ipv4.tcp_syncookies=1
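Under the hood, `sysctl` reads and writes files under `/proc/sys`, with the dots in a parameter name mapped to slashes. A sketch of the equivalence, shown for `kernel.hostname` since it exists on any Linux system:

```shell
#!/bin/sh
# sysctl kernel.hostname is equivalent to reading /proc/sys/kernel/hostname
key="kernel.hostname"
path="/proc/sys/$(echo "$key" | tr . /)"
printf '%s = %s\n' "$key" "$(cat "$path")"
```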

# Modify kernel parameter (permanent)
sudo nano /etc/sysctl.conf
# Add the following configurations
# Network optimization
net.core.somaxconn = 65535
net.ipv4.tcp_max_syn_backlog = 65535
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_keepalive_time = 300
net.ipv4.tcp_keepalive_probes = 5
net.ipv4.tcp_keepalive_intvl = 15

# Memory optimization
vm.swappiness = 10
vm.overcommit_memory = 1
vm.overcommit_ratio = 90

# File system optimization
fs.file-max = 65535

# Apply kernel parameters
sudo sysctl -p

# View system limits
ulimit -a

# Modify system limit (temporary)
ulimit -n 65535

# Modify system limit (permanent)
sudo nano /etc/security/limits.conf
# Add the following configurations
# * soft nofile 65535
# * hard nofile 65535
# * soft nproc 65535
# * hard nproc 65535

# View kernel logs
dmesg

# Monitor kernel events
sudo perf top

6. Automation

Automation is an advanced topic in Linux system management, involving the use of various tools and technologies to automate system management tasks and improve operational efficiency.

6.1 Ansible

# Install Ansible
sudo apt install ansible  # Debian/Ubuntu
sudo yum install ansible  # CentOS/RHEL

# View Ansible version
ansible --version

# Create Ansible inventory file
nano hosts
[webservers]
server1 ansible_host=192.168.1.100
server2 ansible_host=192.168.1.101

[dbservers]
server3 ansible_host=192.168.1.102

# Test connection
ansible all -i hosts -m ping

# Execute command
ansible webservers -i hosts -a "df -h"

# Create Ansible playbook
nano setup.yml
---
- hosts: webservers
  become: yes
  tasks:
    - name: Update system
      apt:
        update_cache: yes
        upgrade: dist
      when: ansible_os_family == "Debian"

    - name: Install Nginx
      apt:
        name: nginx
        state: present
      when: ansible_os_family == "Debian"

    - name: Start Nginx
      service:
        name: nginx
        state: started
        enabled: yes

    - name: Copy configuration file
      copy:
        src: nginx.conf
        dest: /etc/nginx/nginx.conf
      notify: Restart Nginx

  handlers:
    - name: Restart Nginx
      service:
        name: nginx
        state: restarted

# Execute playbook
ansible-playbook -i hosts setup.yml

# View Ansible facts
ansible server1 -i hosts -m setup

6.2 Continuous Integration/Continuous Deployment

# Install Jenkins
sudo apt install openjdk-11-jdk  # Debian/Ubuntu
sudo yum install java-11-openjdk  # CentOS/RHEL

wget -q -O - https://pkg.jenkins.io/debian-stable/jenkins.io.key | sudo apt-key add -
sudo sh -c 'echo deb https://pkg.jenkins.io/debian-stable binary/ > /etc/apt/sources.list.d/jenkins.list'  # Debian/Ubuntu

# Install Jenkins
sudo apt update && sudo apt install jenkins  # Debian/Ubuntu
sudo yum install jenkins  # CentOS/RHEL

# Start and enable Jenkins
sudo systemctl start jenkins
sudo systemctl enable jenkins

# View Jenkins status
sudo systemctl status jenkins

# Access Jenkins
# http://server_ip:8080

# Install GitLab CI/CD
# Refer to GitLab official documentation

# Create .gitlab-ci.yml file
nano .gitlab-ci.yml
stages:
  - build
  - test
  - deploy

build_job:
  stage: build
  script:
    - echo "building..."
    - make

test_job:
  stage: test
  script:
    - echo "Testing..."
    - make test

deploy_job:
  stage: deploy
  script:
    - echo "deploymenting..."
    - make deploy
  only:
    - master

# Install GitHub Actions
# Refer to GitHub official documentation

# Create GitHub Actions workflow
mkdir -p .github/workflows
nano .github/workflows/build.yml
name: CI

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    - name: Set up Python
      uses: actions/setup-python@v2
      with:
        python-version: '3.8'
    - name: Install dependencies
      run: |
        python -m pip install --upgrade pip
        pip install -r requirements.txt
    - name: build
      run: |
        python setup.py build
    - name: Test
      run: |
        python -m pytest

7. Cloud-Native Technology

Cloud-native technology is an important trend in modern application development and deployment, involving containers, microservices, DevOps, and other technologies aimed at improving application scalability, reliability, and maintainability.

7.1 Microservices Architecture

# Install Docker and Kubernetes
# Refer to previous Docker and Kubernetes installation steps

# Create microservices example
# 1. User service
nano user-service/Dockerfile
FROM node:14
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "app.js"]

# 2. Order service
nano order-service/Dockerfile
FROM node:14
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3001
CMD ["node", "app.js"]

# 3. Product service
nano product-service/Dockerfile
FROM node:14
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3002
CMD ["node", "app.js"]

# build images
docker build -t user-service user-service/
docker build -t order-service order-service/
docker build -t product-service product-service/

# Deploy to Kubernetes
nano user-service.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: user-service
  template:
    metadata:
      labels:
        app: user-service
    spec:
      containers:
      - name: user-service
        image: user-service
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: user-service
spec:
  selector:
    app: user-service
  ports:
  - port: 3000
    targetPort: 3000
  type: ClusterIP

# Apply configuration
kubectl apply -f user-service.yaml
# Deploy the other services in the same way

7.2 Service Mesh

# Install Istio
curl -L https://istio.io/downloadIstio | sh -
cd istio-*
export PATH=$PWD/bin:$PATH

# Install Istio to cluster
istioctl install --set profile=demo -y

# Enable automatic injection
kubectl label namespace default istio-injection=enabled

# Deploy sample application
kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml

# Verify deployment
kubectl get pods

# Expose service
kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml

# View gateway
kubectl get gateway

# Access application
export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
curl -s http://$GATEWAY_URL/productpage

# Install Linkerd
curl -sL https://run.linkerd.io/install | sh
export PATH=$PATH:$HOME/.linkerd2/bin

# Install Linkerd to cluster
linkerd install | kubectl apply -f -

# Verify installation
linkerd check

# Deploy sample application
kubectl apply -f https://run.linkerd.io/emojivoto.yml

# Inject Linkerd
kubectl get deploy -n emojivoto -o yaml | linkerd inject - | kubectl apply -f -

# View dashboard
linkerd dashboard

8. Advanced Storage

Advanced storage is an important component of Linux systems, encompassing storage technologies and file systems that meet different storage needs.

8.1 LVM

# Install LVM
sudo apt install lvm2  # Debian/Ubuntu
sudo yum install lvm2  # CentOS/RHEL

# View physical volumes
sudo pvscan

# View volume groups
sudo vgscan

# View logical volumes
sudo lvscan

# Create physical volumes
sudo pvcreate /dev/sdb /dev/sdc

# Create volume group
sudo vgcreate vg0 /dev/sdb /dev/sdc

# Create logical volume
sudo lvcreate -L 10G -n lv0 vg0

# Format logical volume
sudo mkfs.ext4 /dev/vg0/lv0

# Mount logical volume
sudo mkdir /mnt/data
sudo mount /dev/vg0/lv0 /mnt/data

# Permanent mount
sudo nano /etc/fstab
# Add the following configuration
# /dev/vg0/lv0 /mnt/data ext4 defaults 0 0

# Extend logical volume
sudo lvextend -L +5G /dev/vg0/lv0
sudo resize2fs /dev/vg0/lv0

# Reduce logical volume
sudo umount /mnt/data
sudo e2fsck -f /dev/vg0/lv0
sudo resize2fs /dev/vg0/lv0 10G
sudo lvreduce -L 10G /dev/vg0/lv0
sudo mount /dev/vg0/lv0 /mnt/data

# Delete logical volume
sudo umount /mnt/data
sudo lvremove /dev/vg0/lv0

# Delete volume group
sudo vgremove vg0

# Delete physical volumes
sudo pvremove /dev/sdb /dev/sdc

8.2 ZFS

# Install ZFS
sudo apt install zfsutils-linux  # Debian/Ubuntu
sudo yum install zfs zfs-kmod  # CentOS/RHEL

# View ZFS pool status
sudo zpool status

# Create ZFS pool
sudo zpool create tank /dev/sdb /dev/sdc

# View ZFS pools
sudo zpool list

# Create ZFS file systems
sudo zfs create tank/data
sudo zfs create tank/backup

# View ZFS file systems
sudo zfs list

# Mount ZFS file systems
# ZFS mounts file systems automatically, by default under /tank/

# Set ZFS properties
sudo zfs set compression=on tank/data
sudo zfs set quota=10G tank/backup
sudo zfs set reservation=5G tank/data

# Snapshot ZFS file systems
sudo zfs snapshot tank/data@backup1
sudo zfs snapshot tank/backup@backup1

# View snapshots
sudo zfs list -t snapshot

# Restore snapshot
sudo zfs rollback tank/data@backup1

# Clone snapshot
sudo zfs clone tank/data@backup1 tank/data_clone

# Extend ZFS pool
sudo zpool add tank /dev/sdd

# Replace disk
sudo zpool replace tank /dev/sdb /dev/sde

# Export ZFS pool
sudo zpool export tank

# Import ZFS pool
sudo zpool import tank

# Destroy ZFS pool
sudo zpool destroy tank

9. Advanced Networking

Advanced networking is an important component of Linux systems, covering network technologies and protocols that meet different networking requirements.

9.1 Network Namespaces

# Create network namespaces
sudo ip netns add ns1
sudo ip netns add ns2

# View network namespaces
sudo ip netns list

# Execute a command inside a namespace
sudo ip netns exec ns1 ifconfig -a

# Create a virtual Ethernet (veth) pair
sudo ip link add veth0 type veth peer name veth1

# Move the interfaces into the namespaces
sudo ip link set veth0 netns ns1
sudo ip link set veth1 netns ns2

# Configure interface addresses
sudo ip netns exec ns1 ip addr add 192.168.1.1/24 dev veth0
sudo ip netns exec ns2 ip addr add 192.168.1.2/24 dev veth1

# Bring the interfaces up
sudo ip netns exec ns1 ip link set veth0 up
sudo ip netns exec ns2 ip link set veth1 up

# Test connectivity
sudo ip netns exec ns1 ping 192.168.1.2

# Create a bridge
sudo ip link add br0 type bridge

# Bring the bridge up
sudo ip link set br0 up

# Add an interface to the bridge
sudo ip link set eth0 master br0

# Assign an address to the bridge
sudo ip addr add 192.168.1.100/24 dev br0

# Remove the interface from the bridge
sudo ip link set eth0 nomaster

# Delete the bridge
sudo ip link delete br0

# Delete the network namespaces
sudo ip netns delete ns1
sudo ip netns delete ns2

9.2 Software-Defined Networking

# Install Open vSwitch
sudo apt install openvswitch-switch  # Debian/Ubuntu
sudo yum install openvswitch  # CentOS/RHEL

# Start and enable Open vSwitch
sudo systemctl start openvswitch-switch
sudo systemctl enable openvswitch-switch

# View Open vSwitch status
sudo systemctl status openvswitch-switch

# Create a bridge
sudo ovs-vsctl add-br br0

# Add ports
sudo ovs-vsctl add-port br0 eth0
sudo ovs-vsctl add-port br0 eth1

# View the bridge configuration
sudo ovs-vsctl show

# Add flow rules
sudo ovs-ofctl add-flow br0 "in_port=1,actions=output:2"
sudo ovs-ofctl add-flow br0 "in_port=2,actions=output:1"

# View flows
sudo ovs-ofctl dump-flows br0

# Delete flows
sudo ovs-ofctl del-flows br0

# Delete ports
sudo ovs-vsctl del-port br0 eth0
sudo ovs-vsctl del-port br0 eth1

# Delete the bridge
sudo ovs-vsctl del-br br0

# Install Calico (Kubernetes network plugin)
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

# Install Flannel (Kubernetes network plugin)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

10. Advanced Topics in Practice

10.1 Case Objective

Build a microservices application based on Docker and Kubernetes, including frontend, backend, and database components, and use Ansible for automated deployment.

10.2 Implementation Steps

10.2.1 Environment Preparation

# Install Docker
sudo apt install docker.io
sudo systemctl start docker
sudo systemctl enable docker

# Install Docker Compose
sudo apt install docker-compose

# Install Kubernetes (using Minikube)
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
minikube start

# Install kubectl
sudo apt install kubectl

# Install Ansible
sudo apt install ansible

10.2.2 Application Development

# Create project structure
mkdir -p microservices/{frontend,backend,db}

# Frontend service
cd microservices/frontend
cat > Dockerfile << 'EOF'
FROM nginx:alpine
COPY index.html /usr/share/nginx/html/
EOF

cat > index.html << 'EOF'
<!DOCTYPE html>
<html>
<head>
    <title>Microservices Demo</title>
</head>
<body>
    <h1>Welcome to Microservices Demo</h1>
</body>
</html>
EOF

# Backend service
cd ../backend
cat > Dockerfile << 'EOF'
FROM node:14
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD ["node", "app.js"]
EOF

cat > package.json << 'EOF'
{
  "name": "backend",
  "version": "1.0.0",
  "description": "Backend service",
  "main": "app.js",
  "scripts": {
    "start": "node app.js"
  },
  "dependencies": {
    "express": "^4.17.1",
    "mysql": "^2.18.1"
  }
}
EOF

cat > app.js << 'EOF'
const express = require('express');
const mysql = require('mysql');
const app = express();
const port = 8080;

const db = mysql.createConnection({
  host: 'db',
  user: 'root',
  password: 'password',
  database: 'demo'
});

db.connect((err) => {
  if (err) throw err;
  console.log('Connected to database');
  db.query('CREATE TABLE IF NOT EXISTS users (id INT AUTO_INCREMENT PRIMARY KEY, name VARCHAR(255), email VARCHAR(255))', (err, result) => {
    if (err) throw err;
    console.log('Table created');
  });
});

app.get('/api/data', (req, res) => {
  db.query('SELECT * FROM users', (err, result) => {
    if (err) throw err;
    res.json(result);
  });
});

app.listen(port, () => {
  console.log(`Backend service running on port ${port}`);
});
EOF

# Database service
cd ../db
cat > Dockerfile << 'EOF'
FROM mysql:5.7
ENV MYSQL_ROOT_PASSWORD=password
ENV MYSQL_DATABASE=demo
EOF

# Create docker-compose.yml
cd ..
cat > docker-compose.yml << 'EOF'
version: '3'
services:
  frontend:
    build: ./frontend
    ports:
      - "80:80"
    depends_on:
      - backend
  backend:
    build: ./backend
    ports:
      - "8080:8080"
    depends_on:
      - db
  db:
    build: ./db
    ports:
      - "3306:3306"
EOF

# Build and start services
docker-compose up -d

# View service status
docker-compose ps

# Test the service
curl http://localhost:8080/api/data

10.2.3 Kubernetes Deployment

# Create Kubernetes configuration files
cd microservices

# Frontend service
cat > frontend-deployment.yaml << 'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: frontend
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: frontend
  ports:
  - port: 80
    targetPort: 80
  type: NodePort
EOF

# Backend service
cat > backend-deployment.yaml << 'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: backend
        image: backend
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend
  ports:
  - port: 8080
    targetPort: 8080
  type: ClusterIP
EOF

# Database service
cat > db-deployment.yaml << 'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: mysql:5.7
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
        - name: MYSQL_DATABASE
          value: demo
        ports:
        - containerPort: 3306
---
apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  selector:
    app: db
  ports:
  - port: 3306
    targetPort: 3306
  type: ClusterIP
EOF

# Build images
docker build -t frontend ./frontend
docker build -t backend ./backend

# Point the Docker CLI at Minikube's Docker daemon, then rebuild so the
# cluster can use the local images
eval $(minikube docker-env)
docker build -t frontend ./frontend
docker build -t backend ./backend

# Deploy to Kubernetes
kubectl apply -f db-deployment.yaml
kubectl apply -f backend-deployment.yaml
kubectl apply -f frontend-deployment.yaml

# View deployment status
kubectl get pods
kubectl get services

# Test the service
export NODE_PORT=$(kubectl get services frontend -o jsonpath='{.spec.ports[0].nodePort}')
export NODE_IP=$(minikube ip)
curl http://$NODE_IP:$NODE_PORT

10.2.4 Ansible Automation

# Create Ansible configuration
mkdir -p ansible
cd ansible

# Inventory file
cat > hosts << 'EOF'
[kubernetes]
localhost ansible_connection=local
EOF

# Playbook file
cat > deploy.yml << 'EOF'
---
- hosts: kubernetes
  become: yes
  tasks:
    - name: Build frontend image
      command: docker build -t frontend ../microservices/frontend
      args:
        chdir: ../microservices

    - name: Build backend image
      command: docker build -t backend ../microservices/backend
      args:
        chdir: ../microservices

    - name: Deploy database
      command: kubectl apply -f db-deployment.yaml
      args:
        chdir: ../microservices

    - name: Deploy backend
      command: kubectl apply -f backend-deployment.yaml
      args:
        chdir: ../microservices

    - name: Deploy frontend
      command: kubectl apply -f frontend-deployment.yaml
      args:
        chdir: ../microservices

    - name: View deployment status
      command: kubectl get pods

    - name: View service status
      command: kubectl get services
EOF

# Execute playbook
ansible-playbook -i hosts deploy.yml

11. Interactive Exercises

Exercise 1: Docker Containerization

Perform the following operations:

  • 1. Install Docker and start the service.
  • 2. Pull the Ubuntu image and run a container.
  • 3. Create a Dockerfile for a simple web application.
  • 4. Build the image and run a container.
  • 5. Deploy a multi-container application using Docker Compose.

Exercise 2: Kubernetes Deployment

Perform the following operations:

  • 1. Install Minikube and kubectl.
  • 2. Start the Minikube cluster.
  • 3. Create Deployment and Service configuration files.
  • 4. Deploy the application to Kubernetes.
  • 5. Scale and update the deployment.

Exercise 3: Automated Operations

Perform the following operations:

  • 1. Install Ansible.
  • 2. Create an inventory file and a playbook.
  • 3. Write tasks to automate system configuration.
  • 4. Execute the playbook and verify the results.
  • 5. Extend the playbook to deploy an application.

Exercise 4: Performance Optimization

Perform the following operations:

  • 1. Install performance analysis tools.
  • 2. Analyze application performance bottlenecks.
  • 3. Optimize kernel parameters.
  • 4. Optimize application configuration.
  • 5. Verify the optimization results.