云野阁

Deploying SkyWalking with Docker

Preface#

What is SkyWalking?

SkyWalking is an excellent China-developed APM tool.

It is an application performance monitoring tool for distributed systems, designed for microservices, cloud-native, and container-based (Docker, K8s, Mesos) architectures.

It provides an all-in-one solution for distributed tracing, service mesh telemetry analysis, metric aggregation, and visualization.

The SkyWalking architecture has 4 parts: UI, OAP, storage, and agents (probes).

UI is the SkyWalking UI: it provides the console, trace views, and so on. (visualization)

OAP is the SkyWalking OAP: it receives the tracing data sent by the agents, analyzes it (Analysis Core), stores it in external storage (Storage), and finally provides query (Query) capabilities. (data analysis)

Storage holds the tracing data. ES, MySQL, ShardingSphere, TiDB, and H2 are currently supported. We use ES, mainly because the SkyWalking team's own production environment runs primarily on ES. (data storage)

The agent collects trace information from the application and sends it to the SkyWalking OAP server. Tracing data from SkyWalking, Zipkin, Jaeger, and others is supported. We use the SkyWalking agent to collect SkyWalking tracing data and forward it to the server. (data collection)

Environment preparation#

(1) Install docker and docker-compose with an install script

bash <(curl -sSL https://linuxmirrors.cn/docker.sh)

(2) Configure registry mirrors

vi /etc/docker/daemon.json

{
  "data-root": "/data/dockerData",
  "registry-mirrors": [
    "https://registry.cn-hangzhou.aliyuncs.com",
    "https://huecker.io",
    "https://docker.rainbond.cc",
    "https://dockerhub.timeweb.cloud",
    "https://dockerhub.icu",
    "https://docker.registry.cyou",
    "https://docker-cf.registry.cyou",
    "https://dockercf.jsdelivr.fyi",
    "https://docker.jsdelivr.fyi",
    "https://dockertest.jsdelivr.fyi",
    "https://mirror.aliyuncs.com",
    "https://dockerproxy.com",
    "https://mirror.baidubce.com",
    "https://docker.m.daocloud.io",
    "https://docker.nju.edu.cn",
    "https://docker.mirrors.sjtug.sjtu.edu.cn",
    "https://docker.mirrors.ustc.edu.cn",
    "https://mirror.iscas.ac.cn",
    "https://docker.kubesre.xyz"
  ],
  "log-driver": "json-file",
  "log-opts": { "max-size": "50m", "max-file": "3" }
}
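A malformed daemon.json will prevent the docker daemon from starting, so it is worth validating the file before restarting. A minimal sketch using Python's stdlib JSON tool (shown here against a temp copy; on a real host, point it at /etc/docker/daemon.json):

```shell
# Write a minimal daemon.json to a temp path and validate it; any syntax
# error is reported with a line number instead of a failed daemon start.
cat > /tmp/daemon.json <<'EOF'
{
  "data-root": "/data/dockerData",
  "registry-mirrors": ["https://docker.m.daocloud.io"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "50m", "max-file": "3" }
}
EOF
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "valid JSON"
```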

(3) Start the docker service

systemctl start docker
systemctl enable docker
systemctl status docker

(4) Set the memory-mapping kernel parameter to 262144 (required by Elasticsearch), then apply it to the running system.

echo "vm.max_map_count=262144" >> /etc/sysctl.conf
sysctl -p
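You can read the value back at any time to confirm the change took effect (equivalent to `sysctl -n vm.max_map_count`):

```shell
# The live value lives in /proc; Elasticsearch requires at least 262144.
cat /proc/sys/vm/max_map_count
```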

Deployment steps#

(1) Create the storage directories needed for deployment (the plugins directory is also created here, since the compose file mounts it)

mkdir -p /data/elasticsearch/data /data/elasticsearch/logs /data/elasticsearch/plugins /data/skywalking/oap
chmod -R 777 /data/elasticsearch

(2) Create a temporary skywalking-oap-server container and copy the SkyWalking configuration files into the mapped host directory.

cd /data/skywalking/oap
# create a temporary skywalking-oap-server container and copy its config files to the host
docker run -itd --name=oap-temp apache/skywalking-oap-server:9.5.0
docker cp oap-temp:/skywalking/config/. .
docker rm -f oap-temp

(3) Modify SkyWalking's configuration file application.yml to use elasticsearch as the storage backend.

vi application.yml

storage:
  selector: ${SW_STORAGE:elasticsearch} # change h2 to elasticsearch
  elasticsearch:
    namespace: ${SW_NAMESPACE:""}
    clusterNodes: ${SW_STORAGE_ES_CLUSTER_NODES:<host-ip>:9200} # change localhost to the host IP
    protocol: ${SW_STORAGE_ES_HTTP_PROTOCOL:"http"}
    connectTimeout: ${SW_STORAGE_ES_CONNECT_TIMEOUT:3000}
    socketTimeout: ${SW_STORAGE_ES_SOCKET_TIMEOUT:30000}
    responseTimeout: ${SW_STORAGE_ES_RESPONSE_TIMEOUT:15000}
    numHttpClientThread: ${SW_STORAGE_ES_NUM_HTTP_CLIENT_THREAD:0}
    user: ${SW_ES_USER:"elastic"} # ES username
    password: ${SW_ES_PASSWORD:"elastic"} # ES password
    trustStorePath: ${SW_STORAGE_ES_SSL_JKS_PATH:""}
    trustStorePass: ${SW_STORAGE_ES_SSL_JKS_PASS:""}

(4) Create a docker-compose file to deploy skywalking, es, and skywalking-ui together.

vi skywalking.yml

services:
  elasticsearch:
    image: elasticsearch:8.15.0
    container_name: elasticsearch
    restart: always
    environment:
      - discovery.type=single-node
      - ES_JAVA_OPTS=-Xms1g -Xmx1g
      - ELASTIC_PASSWORD=elastic
      - TZ=Asia/Shanghai
    ports:
      - "9200:9200"
      - "9300:9300"
    healthcheck:
      test: [ "CMD-SHELL", "curl --silent --fail -u elastic:elastic localhost:9200/_cluster/health || exit 1" ]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s
    logging:
      driver: "json-file"
      options:
        max-size: "50m"
        max-file: "3"
    volumes:
      - /data/elasticsearch/data:/usr/share/elasticsearch/data
      - /data/elasticsearch/logs:/usr/share/elasticsearch/logs
      - /data/elasticsearch/plugins:/usr/share/elasticsearch/plugins
    networks:
      skywalking-network:
        ipv4_address: 172.20.110.11
    ulimits:
      memlock:
        soft: -1
        hard: -1

  skywalking-oap:
    image: apache/skywalking-oap-server:9.5.0
    container_name: skywalking-oap
    restart: always
    ports:
      - "11800:11800"
      - "12800:12800"
      - "1234:1234"   # self-telemetry (Prometheus) port
    healthcheck:
      test: [ "CMD-SHELL", "/skywalking/bin/swctl health" ]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s
    depends_on:
      elasticsearch:
        condition: service_healthy
    environment:
      - SW_STORAGE=elasticsearch
      - SW_HEALTH_CHECKER=default
      - TZ=Asia/Shanghai
      - JVM_Xms=512M
      - JVM_Xmx=1024M
      - SW_STORAGE_ES_CLUSTER_NODES=<host-ip>:9200
    volumes:
      - /data/skywalking/oap:/skywalking/config
    networks:
      skywalking-network:
        ipv4_address: 172.20.110.12


  skywalking-ui:
    image: apache/skywalking-ui:9.5.0
    container_name: skywalking-ui
    restart: always
    environment:
      - SW_OAP_ADDRESS=http://<host-ip>:12800
      - SW_ZIPKIN_ADDRESS=http://<host-ip>:9412
      - TZ=Asia/Shanghai
    ports:
      - "8080:8080"
    depends_on:
      skywalking-oap:
        condition: service_healthy
    networks:
      skywalking-network:
        ipv4_address: 172.20.110.13

networks:
  skywalking-network:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.110.0/24
Start the stack:

docker compose -f skywalking.yml up -d

SkyWalking can connect to es as its storage in either of the following two ways (use one of them).

SkyWalking connects to Elasticsearch via HTTP authentication

# disable ES SSL certificate verification
docker exec -it elasticsearch bash -c ' sed -i "s/  enabled: true/  enabled: false/g" /usr/share/elasticsearch/config/elasticsearch.yml'
docker exec -it elasticsearch bash -c 'cat /usr/share/elasticsearch/config/elasticsearch.yml'
docker restart elasticsearch
docker restart skywalking-oap

SkyWalking connects to Elasticsearch via HTTPS SSL authentication

(1) Convert the es crt and key certificate files to p12 format

openssl pkcs12 -export -in ca.crt -inkey ca.key -out es.p12 -name esca -CAfile es.crt

Enter the export password twice; the -name parameter is the alias.

openssl pkcs12 -export -in ca.crt -inkey ca.key -out es.p12 -name esca -CAfile es.crt
Enter Export Password:
Verifying - Enter Export Password:

(2) Convert the p12 certificate to a JKS keystore

Install a JDK. keytool is part of the JDK, which is required for the conversion.

yum install -y java-11-openjdk-devel

The storepass parameter is the JKS keystore password; srcstorepass is the p12 certificate password.

keytool -importkeystore -v -srckeystore es.p12 -srcstoretype PKCS12  -srcstorepass wasd2345  -deststoretype JKS -destkeystore es.jks -storepass qiswasd2345

Then point application.yml at HTTPS:

storage:
  selector: ${SW_STORAGE}
  elasticsearch:
    namespace: ${SW_NAMESPACE:""}
    clusterNodes: ${SW_STORAGE_ES_CLUSTER_NODES:<es-server-address>:443}
    protocol: ${SW_STORAGE_ES_HTTP_PROTOCOL:"https"}
    connectTimeout: ${SW_STORAGE_ES_CONNECT_TIMEOUT:3000}
    socketTimeout: ${SW_STORAGE_ES_SOCKET_TIMEOUT:30000}
    responseTimeout: ${SW_STORAGE_ES_RESPONSE_TIMEOUT:15000}
    numHttpClientThread: ${SW_STORAGE_ES_NUM_HTTP_CLIENT_THREAD:0}
    user: ${SW_ES_USER:"<es-username>"}
    password: ${SW_ES_PASSWORD:"<es-password>"}
    trustStorePath: ${SW_STORAGE_ES_SSL_JKS_PATH:"<jks-keystore-path>"}
    trustStorePass: ${SW_STORAGE_ES_SSL_JKS_PASS:"<jks-keystore-password>"}

Enable Linux monitoring#

Install Prometheus node-exporter to collect metrics from the VM (binary tarball method)#

yum install -y wget
wget https://github.com/prometheus/node_exporter/releases/download/v1.8.2/node_exporter-1.8.2.linux-amd64.tar.gz
tar -xvzf node_exporter-1.8.2.linux-amd64.tar.gz
mv node_exporter-1.8.2.linux-amd64/node_exporter /usr/sbin/
cd /usr/sbin/
./node_exporter

Verify it is running

curl http://localhost:9100/metrics

Create the node_exporter service file

vi /usr/lib/systemd/system/node_exporter.service

[Unit]
Description=node exporter service
Documentation=https://prometheus.io
After=network.target

[Service]
Type=simple
User=root
Group=root
# location of the node_exporter binary
ExecStart=/usr/sbin/node_exporter
Restart=on-failure

[Install]
WantedBy=multi-user.target

Reload the systemd manager configuration, then start the node_exporter service and enable it at boot

systemctl daemon-reload
systemctl start node_exporter
systemctl enable node_exporter
systemctl status node_exporter

Install Prometheus node-exporter (container method)#

services:
  node-exporter:
    image: quay.io/prometheus/node-exporter
    container_name: node-exporter
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
    command:
      - '--path.procfs=/host/proc'
      - '--path.rootfs=/rootfs'
      - '--path.sysfs=/host/sys'
    restart: always
    environment:
      - TZ=Asia/Shanghai
    ports:
      - 9100:9100
    networks:
      linux_exporter:
        ipv4_address: 172.20.104.11

networks:
  linux_exporter:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.104.0/24

Install OpenTelemetry Collector#

Create the OpenTelemetry Collector configuration file

mkdir /data/opentelemetry-collector
vi /data/opentelemetry-collector/config.yaml

receivers:
  prometheus:
    config:
     scrape_configs:
       - job_name: 'vm-monitoring' # must match the job name in vm.yaml under otel-rules in skywalking-oap
         scrape_interval: 5s
         static_configs:
           - targets: ['10.10.2.145:9100']
             labels:
               host_name: 10.10.2.145
               service: oap-server
processors:
  batch:

exporters:
  otlp:
    endpoint: 10.10.2.145:11800
    tls:
      insecure: true
  logging:
    loglevel: debug

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      processors: [batch]
      exporters: [otlp, logging]
Create a docker-compose file to run the collector:

services:
  otelcol:
    image: otel/opentelemetry-collector
    container_name: otelcol
    restart: always
    environment:
      - TZ=Asia/Shanghai
    volumes:
      - /data/opentelemetry-collector/config.yaml:/etc/otelcol/config.yaml
    networks:
      opentelemetry:
        ipv4_address: 172.20.101.11

networks:
  opentelemetry:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.101.0/24

Note on recent opentelemetry-collector images: newer container images (e.g. 0.113.0) use the debug exporter in place of logging in the exporters section.
Versions before v0.86.0 use logging:

exporters:
  otlp:
    endpoint: <ip>:<port>
    tls:
      insecure: true
  logging:
    loglevel: debug

Later versions use debug:

exporters:
  otlp:
    endpoint: <ip>:<port>
    tls:
      insecure: true
  debug:
    verbosity: detailed

Enable SkyWalking self-monitoring#

Enable backend telemetry: in skywalking-oap's configuration file application.yml, find the prometheus section and modify the parameters.

telemetry:
  selector: ${SW_TELEMETRY:prometheus} # change none to prometheus
  none:
  prometheus:
    host: ${SW_TELEMETRY_PROMETHEUS_HOST:0.0.0.0}
    port: ${SW_TELEMETRY_PROMETHEUS_PORT:1234}
    sslEnabled: ${SW_TELEMETRY_PROMETHEUS_SSL_ENABLED:false}
    sslKeyPath: ${SW_TELEMETRY_PROMETHEUS_SSL_KEY_PATH:""}
    sslCertChainPath: ${SW_TELEMETRY_PROMETHEUS_SSL_CERT_CHAIN_PATH:""}

Add the self-monitoring parameters to the OpenTelemetry Collector configuration file

receivers:
  prometheus:
    config:
     scrape_configs:
       - job_name: 'vm-monitoring'
         scrape_interval: 5s
         static_configs:
           - targets: ['10.10.2.145:9100']
             labels:
               host_name: 10.10.2.145
               service: skywalking-oap-server
  prometheus/2:  # added
    config:
     scrape_configs:
       - job_name: 'skywalking-so11y'  # must match the job name in oap.yaml under otel-rules in skywalking-oap
         scrape_interval: 5s
         static_configs:
           - targets: ['10.10.2.145:1234']   # port 1234 (OAP self-telemetry)
             labels:
               host_name: 10.10.2.145_self
               service: skywalking-oap


processors:
  batch:
  batch/2:

exporters:
  otlp:
    endpoint: 10.10.2.145:11800
    tls:
      insecure: true
  logging:
    loglevel: debug
  otlp/2:
    endpoint: 10.10.2.145:11800
    tls:
      insecure: true
  logging/2:
    loglevel: debug

service:
  pipelines:
    metrics:
      receivers: [prometheus, prometheus/2]
      processors: [batch, batch/2]
      exporters: [otlp, otlp/2, logging, logging/2]

Restart the otelcol container and then skywalking-oap, and check whether self-monitoring data appears.

Enable MySQL/MariaDB database monitoring#

(1) Deploy mysqld_exporter

services:
  mysqld_exporter:
    image: prom/mysqld-exporter
    container_name: mysqld_exporter
    restart: always
    environment:
      - TZ=Asia/Shanghai
    ports:
      - "9104:9104"
    command:
      - "--mysqld.username=user:password"   # username and password
      - "--mysqld.address=10.10.2.145:3306"   # ip and port
    networks:
      sw-mysql:
        ipv4_address: 172.20.102.11

networks:
  sw-mysql:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.102.0/24
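The account passed via --mysqld.username must exist in MySQL with enough privileges for the exporter to read server status. A minimal sketch of the grants (the 'exporter' user and its password are hypothetical placeholders, not from the original setup), written to a file you can apply with mysql -u root -p < grants.sql:

```shell
# Generate the SQL for a least-privilege exporter account
# ('exporter'/'password' are placeholders -- change them).
cat > grants.sql <<'SQL'
CREATE USER IF NOT EXISTS 'exporter'@'%' IDENTIFIED BY 'password';
-- PROCESS, REPLICATION CLIENT and SELECT cover the default collectors
GRANT PROCESS, REPLICATION CLIENT, SELECT ON *.* TO 'exporter'@'%';
FLUSH PRIVILEGES;
SQL
cat grants.sql
```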

(2) Add the MySQL monitoring parameters to the OpenTelemetry Collector configuration file

receivers:
  prometheus:
    config:
     scrape_configs:
       - job_name: 'vm-monitoring'
         scrape_interval: 5s
         static_configs:
           - targets: ['10.10.2.145:9100']
             labels:
               host_name: 10.10.2.145
               service: skywalking-oap-server
  prometheus/2:
    config:
     scrape_configs:
       - job_name: 'skywalking-so11y'
         scrape_interval: 5s
         static_configs:
           - targets: ['10.10.2.145:1234']
             labels:
               host_name: 10.10.2.145_self
               service: skywalking-oap

  prometheus/3:  # monitoring section for mysql/mariadb
    config:
     scrape_configs:
       - job_name: 'mysql-monitoring' # must match the job name in the yaml files under otel-rules/mysql in skywalking-oap
         scrape_interval: 5s
         static_configs:
           - targets: ['10.10.2.145:9104']
             labels:
               host_name: mariadb-monitoring

processors:
  batch:
  batch/2:
  batch/3:

exporters:
  otlp:
    endpoint: 10.10.2.145:11800
    tls:
      insecure: true
  logging:
    loglevel: debug
  otlp/2:
    endpoint: 10.10.2.145:11800
    tls:
      insecure: true
  logging/2:
    loglevel: debug
  otlp/3:
    endpoint: 10.10.2.145:11800
    tls:
      insecure: true
  logging/3:
    loglevel: debug

service:
  pipelines:
    metrics:
      receivers:
      - prometheus
      - prometheus/2
      - prometheus/3
      processors:
      - batch
      - batch/2
      - batch/3
      exporters:
      - otlp
      - otlp/2
      - otlp/3
      - logging
      - logging/2
      - logging/3

Restart the otelcol container and then skywalking-oap, and check whether MySQL or MariaDB monitoring data appears.

Enable Elasticsearch monitoring#

(1) Deploy elasticsearch_exporter

services:
  elasticsearch_exporter:
    image: quay.io/prometheuscommunity/elasticsearch-exporter:latest
    container_name: elasticsearch_exporter
    restart: always
    environment:
      - TZ=Asia/Shanghai
    ports:
      - "9114:9114"
    command:
      #- '--es.uri=http://elastic:<password>@<es-host>:9200'   # ES over HTTP
      - '--es.uri=https://elastic:<password>@<es-host>:9200'
      - "--es.ssl-skip-verify"   # skip SSL verification when connecting to ES
    networks:
      es_exporter:
        ipv4_address: 172.20.103.11

networks:
  es_exporter:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.103.0/24

(2) Add the es monitoring parameters to the OpenTelemetry Collector configuration file

receivers:
  prometheus:
    config:
     scrape_configs:
       - job_name: 'elasticsearch-monitoring'
         scrape_interval: 5s
         static_configs:
           - targets: ['10.10.2.145:9114']
             labels:
               host_name: elasticsearch-monitoring


processors:
  batch:

exporters:
  otlp:
    endpoint: 10.10.2.145:11800
    tls:
      insecure: true
  logging:
    loglevel: debug

service:
  pipelines:
    metrics:
      receivers:
      - prometheus
      processors:
      - batch
      exporters:
      - otlp
      - logging

Restart the otelcol container and then skywalking-oap, and check whether es monitoring data appears.

Enable PostgreSQL database monitoring#

(1) Deploy postgres-exporter

services:
  postgres-exporter:
    image: quay.io/prometheuscommunity/postgres-exporter
    container_name: postgres-exporter
    restart: always
    environment:
      TZ: "Asia/Shanghai"
      DATA_SOURCE_URI: "localhost:5432/postgres?sslmode=disable"   # inside the container, localhost is the container itself; point this at the database host
      DATA_SOURCE_USER: "postgres"
      DATA_SOURCE_PASS: "password"
    ports:
      - "9187:9187"
    networks:
      sw-pgsql:
        ipv4_address: 172.20.105.11

networks:
  sw-pgsql:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.105.0/24

(2) Add the PostgreSQL monitoring parameters to the OpenTelemetry Collector configuration file

receivers:
  prometheus:
    config:
     scrape_configs:
       - job_name: 'postgresql-monitoring'
         scrape_interval: 5s
         static_configs:
           - targets: ['10.10.2.145:9187']
             labels:
               host_name: postgresql-monitoring


processors:
  batch:

exporters:
  otlp:
    endpoint: 10.10.2.145:11800
    tls:
      insecure: true
  logging:
    loglevel: debug

service:
  pipelines:
    metrics:
      receivers:
      - prometheus
      processors:
      - batch
      exporters:
      - otlp
      - logging

Restart the skywalking-oap container and then otelcol, and check whether PostgreSQL monitoring data appears.

Enable MongoDB database monitoring#

(1) Deploy mongodb_exporter

services:
  mongodb_exporter:
    image: percona/mongodb_exporter:0.40
    container_name: mongodb_exporter
    restart: always
    ports:
      - "9216:9216"
    environment:
      - TZ=Asia/Shanghai
      - MONGODB_URI=mongodb://user:<password>@<mongodb-host>:27017/?authSource=admin
    command:
      - --collect-all
      - --web.listen-address=:9216
    networks:
      sw-mongodb:
        ipv4_address: 172.20.106.11

networks:
  sw-mongodb:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.106.0/24

(2) Add the MongoDB monitoring parameters to the OpenTelemetry Collector configuration file

receivers:
  prometheus:
    config:
     scrape_configs:
       - job_name: 'mongodb-monitoring'
         scrape_interval: 5s
         static_configs:
           - targets: ['10.10.2.145:9216']
             labels:
               host_name: mongodb-monitoring


processors:
  batch:

exporters:
  otlp:
    endpoint: 10.10.2.145:11800
    tls:
      insecure: true
  logging:
    loglevel: debug

service:
  pipelines:
    metrics:
      receivers:
      - prometheus
      processors:
      - batch
      exporters:
      - otlp
      - logging

Service tracing for .NET projects#

Install the SkyWalking .NET Core Agent (Windows environment)#

(1) In the Visual Studio project, install the NuGet package SkyAPM.Agent.AspNetCore.

(2) In the launchSettings.json file, add the environment variables "ASPNETCORE_HOSTINGSTARTUPASSEMBLIES": "SkyAPM.Agent.AspNetCore" and "SKYWALKING__SERVICENAME": "service name (same as the executing dll)".

  "profiles": {
    "http": {
      "commandName": "Project",
      "dotnetRunMessages": true,
      "launchBrowser": true,
      "applicationUrl": "http://localhost:5205",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development",
        // added environment variables
        "ASPNETCORE_HOSTINGSTARTUPASSEMBLIES": "SkyAPM.Agent.AspNetCore",
        "SKYWALKING__SERVICENAME": "service name (same as the executing dll)"
      }
    },
    "https": {
      "commandName": "Project",
      "dotnetRunMessages": true,
      "launchBrowser": true,
      "applicationUrl": "https://localhost:7105;http://localhost:5205",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development",
        // added environment variables
        "ASPNETCORE_HOSTINGSTARTUPASSEMBLIES": "SkyAPM.Agent.AspNetCore",
        "SKYWALKING__SERVICENAME": "service name (same as the executing dll)"
      }
    },
    "IIS Express": {
      "commandName": "IISExpress",
      "launchBrowser": true,
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development",
        // added environment variables
        "ASPNETCORE_HOSTINGSTARTUPASSEMBLIES": "SkyAPM.Agent.AspNetCore",
        "SKYWALKING__SERVICENAME": "service name (same as the executing dll)"
      }
    }
  }

(3) Add the configuration in Program.cs.

// SkyApm
builder.Services.AddSkyApmExtensions();
Environment.SetEnvironmentVariable
("ASPNETCORE_HOSTINGSTARTUPASSEMBLIES", "SkyAPM.Agent.AspNetCore");
Environment.SetEnvironmentVariable("SKYWALKING__SERVICENAME", "service name (same as the executing dll)");

(4) Add the skyapm.json file. There are two ways:

One is to create skyapm.json in the same directory as the running dll and write the following content into it.

{
  "SkyWalking": {
    "ServiceName": "service name (same as the executing dll)",
    "Namespace": "",
    "HeaderVersions": [
      "sw8"
    ],
    "Sampling": {
      "SamplePer3Secs": -1,
      "Percentage": -1.0
    },
    "Logging": {
      "Level": "Information",
      "FilePath": "logs\\skyapm-{Date}.log"
    },
    "Transport": {
      "Interval": 3000,
      "ProtocolVersion": "v8",
      "QueueSize": 30000,
      "BatchSize": 3000,
      "gRPC": {
        "Servers": "<SkyWalking-server-ip>:11800",
        "Timeout": 10000,
        "ConnectTimeout": 10000,
        "ReportTimeout": 600000,
        "Authentication": ""
      }
    }
  }
}

The other is to create skyapm.json from the Visual Studio console with these commands.

dotnet tool install -g SkyAPM.DotNet.CLI
dotnet skyapm config <service-name> <SkyWalking-server-ip>:11800

(5) After configuration, add the environment variables on the server where the app runs

Option 1: add them on the server

vi  ~/.bashrc


export ASPNETCORE_ENVIRONMENT=development
export ASPNETCORE_HOSTINGSTARTUPASSEMBLIES=SkyAPM.Agent.AspNetCore
export SKYWALKING__SERVICENAME=<service-name>


# apply the configuration
source ~/.bashrc

Option 2: add them in the container, via the Dockerfile

FROM mcr.microsoft.com/dotnet/aspnet:8.0 AS base
......
ENV ASPNETCORE_ENVIRONMENT=development
ENV ASPNETCORE_HOSTINGSTARTUPASSEMBLIES=SkyAPM.Agent.AspNetCore
ENV SKYWALKING__SERVICENAME=<service-name>
# container entrypoint
ENTRYPOINT ["dotnet", "xxx"]

(6) Visit the .NET project and check the service monitoring generated under General Service > Services in SkyWalking.

Install the SkyWalking .NET Core Agent (Linux environment)#

Applies to .NET projects created on Linux

(1) Install the SkyWalking .NET Core Agent

dotnet add package SkyAPM.Agent.AspNetCore
export ASPNETCORE_HOSTINGSTARTUPASSEMBLIES=SkyAPM.Agent.AspNetCore
export SKYWALKING__SERVICENAME=<service-name (same as the executing dll)>

(2) Install SkyAPM.DotNet.CLI, used to generate skyapm.json

dotnet tool install -g SkyAPM.DotNet.CLI
dotnet skyapm config <service-name> <SkyWalking-server-ip>:11800

(3) After configuration, add the environment variables on the server where the app runs

Option 1: add them on the server

vi  ~/.bashrc


export ASPNETCORE_ENVIRONMENT=development
export ASPNETCORE_HOSTINGSTARTUPASSEMBLIES=SkyAPM.Agent.AspNetCore
export SKYWALKING__SERVICENAME=<service-name>


# apply the configuration
source ~/.bashrc

Option 2: add them in the container, via the Dockerfile

FROM mcr.microsoft.com/dotnet/aspnet:8.0 AS base
......
ENV ASPNETCORE_ENVIRONMENT=development
ENV ASPNETCORE_HOSTINGSTARTUPASSEMBLIES=SkyAPM.Agent.AspNetCore
ENV SKYWALKING__SERVICENAME=<service-name>
# container entrypoint
ENTRYPOINT ["dotnet", "xxx"]

(4) Visit the .NET project and check the service monitoring generated under General Service > Services in SkyWalking.


Enable frontend monitoring#

(1) Add the skywalking-client-js package to the project

npm install skywalking-client-js --save

(2) Configure the proxy in vue.config.js

proxy:{
      '/browser': {
        target:'<SkyWalking-server-ip>:12800',//proxy for route and error reporting
        changeOrigin: true
      },
      '/v3':{
        target:'<SkyWalking-server-ip>:12800',
        changeOrigin: true//proxy for trace reporting
      }
}

(3) Wire up skywalking-client-js in main.js

// SkyWalking monitoring
import ClientMonitor from 'skywalking-client-js';
// register skywalking
ClientMonitor.register({
    service: '<service-name>',        // service name
    serviceVersion:'',                // application version
    traceSDKInternal:true,            // trace SDK internals
    pagePath: location.href,          // current route
    useFmp: true
});

(4) On the server of the business system, add the corresponding proxy configuration to the nginx service that publishes it.

location /browser/ {
    proxy_pass http://<SkyWalking-server-ip>:12800/browser/;
    client_max_body_size 1000M;
}
location /v3/ {
    proxy_pass http://<SkyWalking-server-ip>:12800/v3/;
    client_max_body_size 1000M;
}