
Deploy SkyWalking with Docker

Introduction#

What is SkyWalking?

SkyWalking is an excellent open-source APM tool that originated in China: a performance monitoring platform for distributed systems, designed specifically for microservices, cloud-native architectures, and container-based (Docker, K8s, Mesos) deployments.

It provides an integrated solution for distributed tracing, service mesh telemetry analysis, metric aggregation, and visualization.

The SkyWalking architecture is divided into four parts: UI, OAP, Storage, and Probe.

UI (SkyWalking UI): provides the console for viewing traces and other data. (visual display)

OAP (SkyWalking OAP server): receives the tracing data sent by the Agent (Probe), runs it through the analysis core, writes it to external storage, and provides query functionality. (data analysis)

Storage: holds the tracing data. Several backends are supported, including ES, MySQL, ShardingSphere, TiDB, and H2. This guide uses ES, mainly because the SkyWalking development team uses ES in their own production environment. (data storage)

Probe (Agent): collects trace information from the application and sends it to the SkyWalking OAP server. Tracing data from SkyWalking, Zipkin, Jaeger, and others is supported; this guide uses the SkyWalking Agent. (data collection)
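Putting the four parts together, trace data flows roughly as follows (an informal sketch; the ports match the deployment below):

Agent/Probe --(gRPC 11800)--> OAP --(write)--> Storage (Elasticsearch)
SkyWalking UI (8080) --(HTTP 12800)--> OAP --(query)--> Storage (Elasticsearch)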

Environment Preparation#

(1) Install Docker and Docker Compose using the script

bash <(curl -sSL https://linuxmirrors.cn/docker.sh)

(2) Configure image acceleration

vi /etc/docker/daemon.json

{
  "data-root": "/data/dockerData",
  "registry-mirrors": [
    "https://registry.cn-hangzhou.aliyuncs.com",
    "https://huecker.io",
    "https://docker.rainbond.cc",
    "https://dockerhub.timeweb.cloud",
    "https://dockerhub.icu",
    "https://docker.registry.cyou",
    "https://docker-cf.registry.cyou",
    "https://dockercf.jsdelivr.fyi",
    "https://docker.jsdelivr.fyi",
    "https://dockertest.jsdelivr.fyi",
    "https://mirror.aliyuncs.com",
    "https://dockerproxy.com",
    "https://mirror.baidubce.com",
    "https://docker.m.daocloud.io",
    "https://docker.nju.edu.cn",
    "https://docker.mirrors.sjtug.sjtu.edu.cn",
    "https://docker.mirrors.ustc.edu.cn",
    "https://mirror.iscas.ac.cn",
    "https://docker.kubesre.xyz"
  ],
  "log-driver": "json-file",
  "log-opts": { "max-size": "50m", "max-file": "3" }
}

(3) Start the Docker service

systemctl start docker
systemctl enable docker
systemctl status docker

(4) Set the memory-mapping kernel parameter vm.max_map_count (required by Elasticsearch) to 262144, then apply and check the kernel parameters.

echo "vm.max_map_count=262144" >> /etc/sysctl.conf
sysctl -p

Deployment Steps#

(1) Create the storage directories required for the deployment files

mkdir -p /data/elasticsearch/data /data/elasticsearch/logs /data/elasticsearch/plugins /data/skywalking/oap
chmod -R 777 /data/elasticsearch

(2) Create a temporary skywalking-oap-server container and copy the SkyWalking configuration files to the mapped directory.

cd /data/skywalking/oap
# Create a temporary skywalking-oap-server container and copy the SkyWalking configuration files to the host directory
docker run -itd --name=oap-temp apache/skywalking-oap-server:9.5.0
docker cp  oap-temp:/skywalking/config/. .
docker rm -f oap-temp

(3) Modify the SkyWalking configuration file application.yml to use Elasticsearch as the data storage.

vi application.yml

storage:
  selector: ${SW_STORAGE:elasticsearch} # Change h2 to elasticsearch
  elasticsearch:
    namespace: ${SW_NAMESPACE:""}
    clusterNodes: ${SW_STORAGE_ES_CLUSTER_NODES:local_ip:9200} # Change localhost to host IP
    protocol: ${SW_STORAGE_ES_HTTP_PROTOCOL:"http"}
    connectTimeout: ${SW_STORAGE_ES_CONNECT_TIMEOUT:3000}
    socketTimeout: ${SW_STORAGE_ES_SOCKET_TIMEOUT:30000}
    responseTimeout: ${SW_STORAGE_ES_RESPONSE_TIMEOUT:15000}
    numHttpClientThread: ${SW_STORAGE_ES_NUM_HTTP_CLIENT_THREAD:0}
    user: ${SW_ES_USER:"elastic"} # Fill in the ES account
    password: ${SW_ES_PASSWORD:"elastic"} # Fill in the ES password
    trustStorePath: ${SW_STORAGE_ES_SSL_JKS_PATH:""}
    trustStorePass: ${SW_STORAGE_ES_SSL_JKS_PASS:""}

(4) Create a docker-compose file to orchestrate the deployment of SkyWalking, ES, and SkyWalking UI.

vi skywalking.yml

services:
  elasticsearch:
    image: elasticsearch:8.15.0
    container_name: elasticsearch
    restart: always
    environment:
      - discovery.type=single-node
      - ES_JAVA_OPTS=-Xms1g -Xmx1g
      - ELASTIC_PASSWORD=elastic
      - TZ=Asia/Shanghai
    ports:
      - "9200:9200"
      - "9300:9300"
    healthcheck:
      test: [ "CMD-SHELL", "curl --silent --fail -u elastic:elastic localhost:9200/_cluster/health || exit 1" ]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s
    logging:
      driver: "json-file"
      options:
        max-size: "50m"
        max-file: "3"
    volumes:
      - /data/elasticsearch/data:/usr/share/elasticsearch/data
      - /data/elasticsearch/logs:/usr/share/elasticsearch/logs
      - /data/elasticsearch/plugins:/usr/share/elasticsearch/plugins
    networks:
      skywalking-network:
        ipv4_address: 172.20.110.11
    ulimits:
      memlock:
        soft: -1
        hard: -1

  skywalking-oap:
    image: apache/skywalking-oap-server:9.5.0
    container_name: skywalking-oap
    restart: always
    ports:
      - "11800:11800"
      - "12800:12800"
      - "1234:1234"  
    healthcheck:
      test: [ "CMD-SHELL", "/skywalking/bin/swctl health" ]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s
    depends_on:
      elasticsearch:
        condition: service_healthy
    environment:
      - SW_STORAGE=elasticsearch
      - SW_HEALTH_CHECKER=default
      - TZ=Asia/Shanghai
      - JVM_Xms=512M
      - JVM_Xmx=1024M
      - SW_STORAGE_ES_CLUSTER_NODES=local_ip:9200
    volumes:
      - /data/skywalking/oap:/skywalking/config
    networks:
      skywalking-network:
        ipv4_address: 172.20.110.12


  skywalking-ui:
    image: apache/skywalking-ui:9.5.0
    container_name: skywalking-ui
    restart: always
    environment:
      - SW_OAP_ADDRESS=http://local_ip:12800
      - SW_ZIPKIN_ADDRESS=http://local_ip:9412
      - TZ=Asia/Shanghai
    ports:
      - "8080:8080"
    depends_on:
      skywalking-oap:
        condition: service_healthy
    networks:
      skywalking-network:
        ipv4_address: 172.20.110.13

networks:
  skywalking-network:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.110.0/24

Start the stack:

docker compose -f skywalking.yml up -d
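Once it is up, a quick sanity check (a minimal sketch; the UI listens on port 8080 as mapped above):

docker compose -f skywalking.yml ps
curl -I http://localhost:8080   # SkyWalking UI should answer once skywalking-oap is healthy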

SkyWalking can connect to ES for storage in either of two ways; use whichever fits your environment.

SkyWalking connects to Elasticsearch via HTTP authentication

# Disable SSL certificate authentication for ES
docker exec -it elasticsearch bash -c ' sed -i "s/  enabled: true/  enabled: false/g" /usr/share/elasticsearch/config/elasticsearch.yml'
docker exec -it elasticsearch bash -c 'cat /usr/share/elasticsearch/config/elasticsearch.yml'
docker restart elasticsearch
docker restart skywalking-oap
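With SSL disabled, you can confirm that ES answers over plain HTTP with basic auth (credentials match the compose file above):

curl -u elastic:elastic http://localhost:9200/_cluster/health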

SkyWalking connects to Elasticsearch via HTTPS SSL authentication

(1) Convert the ES crt and key certificate files to p12 format. Enter the export password twice when prompted; the -name parameter sets the key alias.

openssl pkcs12 -export -in ca.crt -inkey ca.key -out es.p12 -name esca -CAfile es.crt
Enter Export Password:
Verifying - Enter Export Password:

(2) Convert the p12 certificate to a JKS certificate

keytool is part of the JDK, so install the JDK first to perform the certificate conversion.

yum install -y java-11-openjdk-devel

The -storepass parameter is the JKS certificate password, and -srcstorepass is the p12 certificate password.

keytool -importkeystore -v -srckeystore es.p12 -srcstoretype PKCS12 -srcstorepass wasd2345 -deststoretype JKS -destkeystore es.jks -storepass qiswasd2345
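You can optionally verify the converted keystore before wiring it in (the password here matches the example above):

keytool -list -v -keystore es.jks -storepass qiswasd2345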

(3) Modify the storage section of application.yml to connect over HTTPS and trust the JKS certificate:

storage:
  selector: ${SW_STORAGE}
  elasticsearch:
    namespace: ${SW_NAMESPACE:""}
    clusterNodes: ${SW_STORAGE_ES_CLUSTER_NODES:es_server_address:443}
    protocol: ${SW_STORAGE_ES_HTTP_PROTOCOL:"https"}
    connectTimeout: ${SW_STORAGE_ES_CONNECT_TIMEOUT:3000}
    socketTimeout: ${SW_STORAGE_ES_SOCKET_TIMEOUT:30000}
    responseTimeout: ${SW_STORAGE_ES_RESPONSE_TIMEOUT:15000}
    numHttpClientThread: ${SW_STORAGE_ES_NUM_HTTP_CLIENT_THREAD:0}
    user: ${SW_ES_USER:"es_username"}
    password: ${SW_ES_PASSWORD:"es_password"}
    trustStorePath: ${SW_STORAGE_ES_SSL_JKS_PATH:"jks_certificate_path"}
    trustStorePass: ${SW_STORAGE_ES_SSL_JKS_PASS:"jks_certificate_password"}

Enable Linux Monitoring#

Install Prometheus node-exporter to collect metric data from the VM (binary tarball method)#

yum install -y wget
wget https://github.com/prometheus/node_exporter/releases/download/v1.8.2/node_exporter-1.8.2.linux-amd64.tar.gz
tar -xvzf node_exporter-1.8.2.linux-amd64.tar.gz
mv node_exporter-1.8.2.linux-amd64/node_exporter /usr/sbin/
cd /usr/sbin/
./node_exporter

Verify if it is running

curl http://localhost:9100/metrics

Create a node_exporter service file

vi /usr/lib/systemd/system/node_exporter.service

[Unit]
Description=node exporter service
Documentation=https://prometheus.io
After=network.target

[Service]
Type=simple
User=root
Group=root
# Location of node_exporter
ExecStart=/usr/sbin/node_exporter  
Restart=on-failure

[Install]
WantedBy=multi-user.target

Reload the system manager configuration file, start the node_exporter service, and set it to start on boot

systemctl daemon-reload
systemctl start node_exporter
systemctl enable node_exporter
systemctl status node_exporter

Install Prometheus node-exporter (Container method)#

services:
  node-exporter:
    image: quay.io/prometheus/node-exporter
    container_name: node-exporter
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
    command:
      - '--path.procfs=/host/proc'
      - '--path.rootfs=/rootfs'
      - '--path.sysfs=/host/sys'
    restart: always
    environment:
      - TZ=Asia/Shanghai
    ports:
      - 9100:9100
    networks:
      linux_exporter:
        ipv4_address: 172.20.104.11

networks:
  linux_exporter:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.104.0/24
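Save the file and start the exporter (the compose filename here is an assumption; use whatever name you saved it under), then confirm metrics are exposed:

docker compose -f node-exporter.yml up -d   # filename is hypothetical
curl http://localhost:9100/metrics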

Install OpenTelemetry Collector#

Create OpenTelemetry Collector configuration file

mkdir /data/opentelemetry-collector
vi /data/opentelemetry-collector/config.yaml

receivers:
  prometheus:
    config:
     scrape_configs:
       - job_name: 'vm-monitoring' # Must match the name in the vm.yaml of otel-rules in skywalking-oap
         scrape_interval: 5s
         static_configs:
           - targets: ['10.10.2.145:9100']
             labels:
               host_name: 10.10.2.145
               service: oap-server
processors:
  batch:

exporters:
  otlp:
    endpoint: 10.10.2.145:11800
    tls:
      insecure: true
  logging:
    loglevel: debug

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      processors: [batch]
      exporters: [otlp, logging]
Create a docker-compose file to run the collector:

services:
  otelcol:
    image: otel/opentelemetry-collector
    container_name: otelcol
    restart: always
    environment:
      - TZ=Asia/Shanghai
    volumes:
      - /data/opentelemetry-collector/config.yaml:/etc/otelcol/config.yaml
    networks:
      opentelemetry:
        ipv4_address: 172.20.101.11

networks:
  opentelemetry:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.101.0/24
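Start the collector and watch its logs to confirm the Prometheus receiver is scraping (the compose filename is an assumption):

docker compose -f otelcol.yml up -d   # filename is hypothetical
docker logs -f otelcol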

Note: the latest opentelemetry-collector container image (0.113.0) uses the debug exporter in the exporters section instead of logging. Before v0.86.0, logging was used:

exporters:
  otlp:
    endpoint: ip+port
    tls:
      insecure: true
  logging:
    loglevel: debug

From v0.86.0 onward, use debug instead:

exporters:
  otlp:
    endpoint: ip+port
    tls:
      insecure: true
  debug:
    verbosity: detailed

Enable SkyWalking Service Self-Monitoring#

Enable backend telemetry, find the prometheus section in the application.yml configuration file of skywalking-oap, and modify the parameters.

telemetry:
  selector: ${SW_TELEMETRY:prometheus} # Change none to prometheus
  none:
  prometheus:
    host: ${SW_TELEMETRY_PROMETHEUS_HOST:0.0.0.0}
    port: ${SW_TELEMETRY_PROMETHEUS_PORT:1234}
    sslEnabled: ${SW_TELEMETRY_PROMETHEUS_SSL_ENABLED:false}
    sslKeyPath: ${SW_TELEMETRY_PROMETHEUS_SSL_KEY_PATH:""}
    sslCertChainPath: ${SW_TELEMETRY_PROMETHEUS_SSL_CERT_CHAIN_PATH:""}

Add self-monitoring parameters in the OpenTelemetry Collector configuration file

receivers:
  prometheus:
    config:
     scrape_configs:
       - job_name: 'vm-monitoring'
         scrape_interval: 5s
         static_configs:
           - targets: ['10.10.2.145:9100']
             labels:
               host_name: 10.10.2.145
               service: skywalking-oap-server
  prometheus/2:  # New
    config:
     scrape_configs:
       - job_name: 'skywalking-so11y'  # Must match the name in the oap.yaml of otel-rules in skywalking-oap
         scrape_interval: 5s
         static_configs:
           - targets: ['10.10.2.145:1234']   # Port is 1234
             labels:
               host_name: 10.10.2.145_self
               service: skywalking-oap


processors:
  batch:
  batch/2:

exporters:
  otlp:
    endpoint: 10.10.2.145:11800
    tls:
      insecure: true
  logging:
    loglevel: debug
  otlp/2:
    endpoint: 10.10.2.145:11800
    tls:
      insecure: true
  logging/2:
    loglevel: debug

service:
  pipelines:
    metrics:
      receivers: [prometheus, prometheus/2]
      processors: [batch, batch/2]
      exporters: [otlp, otlp/2, logging, logging/2]

Restart the otelcol container and skywalking-oap in sequence, and check if self-monitoring is generated.
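A minimal restart sequence, assuming the container names used earlier:

docker restart otelcol
docker restart skywalking-oap

Then check the SkyWalking UI for the self-observability data.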

Enable MySQL/MariaDB Database Monitoring#

(1) Deploy mysqld_exporter

services:
  mysqld_exporter:
    image: prom/mysqld-exporter
    container_name: mysqld_exporter
    restart: always
    environment:
      - TZ=Asia/Shanghai
    ports:
      - "9104:9104"
    command:
      - "--mysqld.username=user:password"   # Username and password
      - "--mysqld.address=10.10.2.145:3306"   # IP and port number
    networks:
      sw-mysql:
        ipv4_address: 172.20.102.11

networks:
  sw-mysql:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.102.0/24
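mysqld_exporter needs a database account with enough privileges to read server status; the account must match the credentials passed via --mysqld.username above. A minimal sketch, assuming a dedicated exporter user (the user name and password are placeholders):

# Create a least-privilege account for mysqld_exporter (names are placeholders)
mysql -h 10.10.2.145 -P 3306 -uroot -p -e "CREATE USER 'exporter'@'%' IDENTIFIED BY 'password'; GRANT PROCESS, REPLICATION CLIENT, SELECT ON *.* TO 'exporter'@'%'; FLUSH PRIVILEGES;"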

(2) Add MySQL monitoring parameters in the OpenTelemetry Collector configuration file

receivers:
  prometheus:
    config:
     scrape_configs:
       - job_name: 'vm-monitoring'
         scrape_interval: 5s
         static_configs:
           - targets: ['10.10.2.145:9100']
             labels:
               host_name: 10.10.2.145
               service: skywalking-oap-server
  prometheus/2:
    config:
     scrape_configs:
       - job_name: 'skywalking-so11y'
         scrape_interval: 5s
         static_configs:
           - targets: ['10.10.2.145:1234']
             labels:
               host_name: 10.10.2.145_self
               service: skywalking-oap

  prometheus/3:  # MySQL, MariaDB monitoring section
    config:
     scrape_configs:
       - job_name: 'mysql-monitoring' # Must match the name in the yaml file in the otel-rules/mysql directory of skywalking-oap
         scrape_interval: 5s
         static_configs:
           - targets: ['10.10.2.145:9104']
             labels:
               host_name: mariadb-monitoring

processors:
  batch:
  batch/2:
  batch/3:

exporters:
  otlp:
    endpoint: 10.10.2.145:11800
    tls:
      insecure: true
  logging:
    loglevel: debug
  otlp/2:
    endpoint: 10.10.2.145:11800
    tls:
      insecure: true
  logging/2:
    loglevel: debug
  otlp/3:
    endpoint: 10.10.2.145:11800
    tls:
      insecure: true
  logging/3:
    loglevel: debug

service:
  pipelines:
    metrics:
      receivers:
      - prometheus
      - prometheus/2
      - prometheus/3
      processors:
      - batch
      - batch/2
      - batch/3
      exporters:
      - otlp
      - otlp/2
      - otlp/3
      - logging
      - logging/2
      - logging/3

Restart the otelcol container and skywalking-oap in sequence, and check if MySQL or MariaDB monitoring data is generated.

Enable Elasticsearch Monitoring#

(1) Deploy elasticsearch_exporter

services:
  elasticsearch_exporter:
    image: quay.io/prometheuscommunity/elasticsearch-exporter:latest
    container_name: elasticsearch_exporter
    restart: always
    environment:
      - TZ=Asia/Shanghai
    ports:
      - "9114:9114"
    command:
      #- '--es.uri=http://elastic:[email protected]:9200'   # ES uses HTTP protocol
      - '--es.uri=https://elastic:[email protected]:9200' 
      - "--es.ssl-skip-verify"   # Skip SSL verification when connecting to ES
    networks:
      es_exporter:
        ipv4_address: 172.20.103.11

networks:
  es_exporter:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.103.0/24
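To confirm the exporter can reach ES, check its metrics endpoint; the elasticsearch_cluster_health_status metric should be present:

curl -s http://localhost:9114/metrics | grep elasticsearch_cluster_health_status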

(2) Add ES monitoring parameters in the OpenTelemetry Collector configuration file

receivers:
  prometheus:
    config:
     scrape_configs:
       - job_name: 'elasticsearch-monitoring'
         scrape_interval: 5s
         static_configs:
           - targets: ['10.10.2.145:9114']
             labels:
               host_name: elasticsearch-monitoring


processors:
  batch:

exporters:
  otlp:
    endpoint: 10.10.2.145:11800
    tls:
      insecure: true
  logging:
    loglevel: debug

service:
  pipelines:
    metrics:
      receivers:
      - prometheus
      processors:
      - batch
      exporters:
      - otlp
      - logging

Restart the otelcol container and skywalking-oap in sequence, and check if ES monitoring data is generated.

Enable PostgreSQL Database Monitoring#

(1) Deploy postgres-exporter

services:
  postgres-exporter:
    image: quay.io/prometheuscommunity/postgres-exporter
    container_name: postgres-exporter
    restart: always
    environment:
      TZ: "Asia/Shanghai"
      DATA_SOURCE_URI: "10.10.2.145:5432/postgres?sslmode=disable"   # IP and port of the PostgreSQL server
      DATA_SOURCE_USER: "postgres"
      DATA_SOURCE_PASS: "password"
    ports:
      - "9187:9187"
    networks:
      sw-pgsql:
        ipv4_address: 172.20.105.11

networks:
  sw-pgsql:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.105.0/24
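A quick check that the exporter can log in to PostgreSQL (pg_up should report 1):

curl -s http://localhost:9187/metrics | grep pg_up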

(2) Add PostgreSQL monitoring parameters in the OpenTelemetry Collector configuration file

receivers:
  prometheus:
    config:
     scrape_configs:
       - job_name: 'postgresql-monitoring'
         scrape_interval: 5s
         static_configs:
           - targets: ['10.10.2.145:9187']
             labels:
               host_name: postgresql-monitoring


processors:
  batch:

exporters:
  otlp:
    endpoint: 10.10.2.145:11800
    tls:
      insecure: true
  logging:
    loglevel: debug

service:
  pipelines:
    metrics:
      receivers:
      - prometheus
      processors:
      - batch
      exporters:
      - otlp
      - logging

Restart the otelcol container and skywalking-oap in sequence, and check if PostgreSQL monitoring data is generated.

Enable MongoDB Database Monitoring#

(1) Deploy mongodb_exporter

services:
  mongodb_exporter:
    image: percona/mongodb_exporter:0.40
    container_name: mongodb_exporter
    restart: always
    ports:
      - "9216:9216"
    environment:
      - TZ=Asia/Shanghai
      - MONGODB_URI=mongodb://user:[email protected]:27017/?authSource=admin
    command:
      - --collect-all
      - --web.listen-address=:9216
    networks:
      sw-mongodb:
        ipv4_address: 172.20.106.11

networks:
  sw-mongodb:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.106.0/24
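Confirm the exporter is serving metrics (with --collect-all enabled it should expose a range of mongodb_* series):

curl -s http://localhost:9216/metrics | grep mongodb_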

(2) Add MongoDB monitoring parameters in the OpenTelemetry Collector configuration file

receivers:
  prometheus:
    config:
     scrape_configs:
       - job_name: 'mongodb-monitoring'
         scrape_interval: 5s
         static_configs:
           - targets: ['10.10.2.145:9216']
             labels:
               host_name: mongodb-monitoring


processors:
  batch:

exporters:
  otlp:
    endpoint: 10.10.2.145:11800
    tls:
      insecure: true
  logging:
    loglevel: debug

service:
  pipelines:
    metrics:
      receivers:
      - prometheus
      processors:
      - batch
      exporters:
      - otlp
      - logging

Restart the otelcol container and skywalking-oap in sequence, and check if MongoDB monitoring data is generated.

.NET Project Service Tracing#

Install SkyWalking .NET Core Agent (Windows Environment)#

(1) Install the nuget package SkyAPM.Agent.AspNetCore in the project in Visual Studio.

(2) Add the environment variables "ASPNETCORE_HOSTINGSTARTUPASSEMBLIES": "SkyAPM.Agent.AspNetCore" and "SKYWALKING__SERVICENAME": "Service Name (consistent with the name of the executed dll program)" in the launchSettings.json file.

  "profiles": {
    "http": {
      "commandName": "Project",
      "dotnetRunMessages": true,
      "launchBrowser": true,
      "applicationUrl": "http://localhost:5205",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development",
        // Add environment variables
        "ASPNETCORE_HOSTINGSTARTUPASSEMBLIES": "SkyAPM.Agent.AspNetCore",
        "SKYWALKING__SERVICENAME": "Service Name (consistent with the name of the executed dll program)"
      }
    },
    "https": {
      "commandName": "Project",
      "dotnetRunMessages": true,
      "launchBrowser": true,
      "applicationUrl": "https://localhost:7105;http://localhost:5205",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development",
        // Add environment variables
        "ASPNETCORE_HOSTINGSTARTUPASSEMBLIES": "SkyAPM.Agent.AspNetCore",
        "SKYWALKING__SERVICENAME": "Service Name (consistent with the name of the executed dll program)"
      }
    },
    "IIS Express": {
      "commandName": "IISExpress",
      "launchBrowser": true,
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development",
         // Add environment variables
        "ASPNETCORE_HOSTINGSTARTUPASSEMBLIES": "SkyAPM.Agent.AspNetCore",
        "SKYWALKING__SERVICENAME": "Service Name (consistent with the name of the executed dll program)"
      }
    }
  }

(3) Add configuration parameters in Program.cs.

//SkyApm
builder.Services.AddSkyApmExtensions();
Environment.SetEnvironmentVariable("ASPNETCORE_HOSTINGSTARTUPASSEMBLIES", "SkyAPM.Agent.AspNetCore");
Environment.SetEnvironmentVariable("SKYWALKING__SERVICENAME", "Service Name (consistent with the name of the executed dll program)");

(4) Add a skyapm.json file, which can be done in two ways:

One is to create a skyapm.json in the same directory as the dll running program and write the following content.

{
  "SkyWalking": {
    "ServiceName": "Service Name (consistent with the name of the executed dll program)",
    "Namespace": "",
    "HeaderVersions": [
      "sw8"
    ],
    "Sampling": {
      "SamplePer3Secs": -1,
      "Percentage": -1.0
    },
    "Logging": {
      "Level": "Information",
      "FilePath": "logs\\skyapm-{Date}.log"
    },
    "Transport": {
      "Interval": 3000,
      "ProtocolVersion": "v8",
      "QueueSize": 30000,
      "BatchSize": 3000,
      "gRPC": {
        "Servers": "SkyWalking service IP:11800",
        "Timeout": 10000,
        "ConnectTimeout": 10000,
        "ReportTimeout": 600000,
        "Authentication": ""
      }
    }
  }
}

The second way is to create skyapm.json by entering commands in the Visual Studio console.

dotnet tool install -g SkyAPM.DotNet.CLI
dotnet skyapm config Service Name (consistent with the name of the executed dll program) SkyWalking service IP:11800

(5) After configuration is complete, add environment variables on the running server

Method 1: Add on the server

vi  ~/.bashrc


export ASPNETCORE_ENVIRONMENT=development
export ASPNETCORE_HOSTINGSTARTUPASSEMBLIES=SkyAPM.Agent.AspNetCore
export SKYWALKING__SERVICENAME=Service Name


# Make the configuration effective
source ~/.bashrc

Method 2: Add in the container, which can be added in the Dockerfile

FROM mcr.microsoft.com/dotnet/aspnet:8.0 AS base
......
ENV ASPNETCORE_ENVIRONMENT=development
ENV ASPNETCORE_HOSTINGSTARTUPASSEMBLIES=SkyAPM.Agent.AspNetCore
ENV SKYWALKING__SERVICENAME=Service Name
# Container entry point
ENTRYPOINT ["dotnet", "xxx"]

(6) Access the .NET project and check the service monitoring generated in SkyWalking under the general services.

Install SkyWalking .NET Core Agent (Linux Environment)#

Applicable for .NET projects created on Linux

(1) Install SkyWalking .NET Core Agent

dotnet add package SkyAPM.Agent.AspNetCore
export ASPNETCORE_HOSTINGSTARTUPASSEMBLIES=SkyAPM.Agent.AspNetCore
export SKYWALKING__SERVICENAME=Service Name (consistent with the name of the executed dll program)

(2) Install SkyAPM.DotNet.CLI to generate skyapm.json

dotnet tool install -g SkyAPM.DotNet.CLI
dotnet skyapm config Service Name (consistent with the name of the executed dll program) SkyWalking service IP:11800

(3) After configuration is complete, add environment variables on the running server

Method 1: Add on the server

vi  ~/.bashrc


export ASPNETCORE_ENVIRONMENT=development
export ASPNETCORE_HOSTINGSTARTUPASSEMBLIES=SkyAPM.Agent.AspNetCore
export SKYWALKING__SERVICENAME=Service Name


# Make the configuration effective
source ~/.bashrc

Method 2: Add in the container, which can be added in the Dockerfile

FROM mcr.microsoft.com/dotnet/aspnet:8.0 AS base
......
ENV ASPNETCORE_ENVIRONMENT=development
ENV ASPNETCORE_HOSTINGSTARTUPASSEMBLIES=SkyAPM.Agent.AspNetCore
ENV SKYWALKING__SERVICENAME=Service Name
# Container entry point
ENTRYPOINT ["dotnet", "xxx"]

(4) Access the .NET project and check the service monitoring generated in SkyWalking under the general services.


Integrate Frontend Monitoring#

(1) Add the skywalking-client-js package to the project

npm install skywalking-client-js --save

(2) Configure the proxy in vue.config.js

proxy: {
  '/browser': {
    target: 'SkyWalking service IP:12800', // Proxy for route and error reporting
    changeOrigin: true
  },
  '/v3': {
    target: 'SkyWalking service IP:12800', // Proxy for tracing reports
    changeOrigin: true
  }
}

(3) Integrate skywalking-client-js in main.js

// Skywalking monitoring system
import ClientMonitor from 'skywalking-client-js';
// Register skywalking
ClientMonitor.register({
    service: 'Service Name', // Service name
    serviceVersion: '', // Application version number
    traceSDKInternal: true, // Trace SDK internals
    pagePath: location.href, // Current route address
    useFmp: true // Collect first meaningful paint
});

(4) In the corresponding business system server, add the corresponding proxy configuration in the published nginx proxy service.

    location /browser/ {
        proxy_pass http://SkyWalking service IP:12800/browser/;
        client_max_body_size 1000M;
    }
    location /v3/ {
        proxy_pass http://SkyWalking service IP:12800/v3/;
        client_max_body_size 1000M;
    }
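After editing the nginx configuration, validate and reload it:

nginx -t
nginx -s reload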