Full Docker Configurations for the Second-Generation Spring Cloud Architecture Dependencies
1. Background

Let's first review the familiar first-generation Spring Cloud components:
| Component | Function |
|---|---|
| Ribbon | Client-side load balancer |
| Eureka | Service governance (registration, discovery, ...) |
| Hystrix | Circuit breaking for remote calls between services |
| Feign | Call other services' APIs declaratively by defining interfaces |
| Zuul | Service gateway |
| Config | Distributed configuration |
| Sleuth | Request chain tracing |
Spring Cloud has effectively become a standard: companies can build their own components on top of its programming model. Netflix and Alibaba, for example, each maintain their own suite of distributed service components developed against the Spring Cloud programming model.
Second-Generation Spring Cloud Components
The second generation introduces Spring Cloud Alibaba, which mainly comprises Sentinel, Nacos, RocketMQ, Dubbo, Seata, and other components. The component mapping looks like this:
| First-generation component | Second-generation component |
|---|---|
| Eureka | Nacos |
| Config | Apollo |
| Zuul | Spring Cloud Gateway |
| Hystrix | Sentinel |
Plus the components we commonly use alongside them:
| Component | Function |
|---|---|
| XXL-Job | Distributed job scheduling center |
| Redis | Distributed cache |
| RocketMQ | Message queue |
| Seata | Distributed transactions |
| ELK | Log processing |
| SkyWalking | Call-chain monitoring |
| Prometheus | Metrics monitoring |
Among these, every component except Spring Cloud Gateway requires a separately deployed external service.
2. Simplified Local Deployment with docker-compose
apollo
```yaml
version: '2'

services:
  apollo-quick-start:
    image: nobodyiam/apollo-quick-start
    container_name: apollo-quick-start
    depends_on:
      - apollo-db
    ports:
      - "8080:8080"
      - "8070:8070"
    links:
      - apollo-db

  apollo-db:
    image: mysql:5.7
    container_name: apollo-db
    environment:
      TZ: Asia/Shanghai
      MYSQL_ALLOW_EMPTY_PASSWORD: 'yes'
    depends_on:
      - apollo-dbdata
    ports:
      - "13306:3306"
    volumes:
      - ./sql:/docker-entrypoint-initdb.d
    volumes_from:
      - apollo-dbdata

  apollo-dbdata:
    image: alpine:latest
    container_name: apollo-dbdata
    volumes:
      - /var/lib/mysql
```
Note: the files under ./sql can be found here (https://github.com/ctripcorp/apollo/tree/master/scripts/sql) — two initialization SQL files.
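Once the quick start is up, a Spring Boot service can point its Apollo client at it. A minimal sketch, assuming the apollo-client Spring Boot integration is on the classpath (the `app.id` value is a placeholder):

```properties
# application.properties — hypothetical client wiring for the quick-start above
app.id=demo-service
# 8080 is the config service port mapped in the compose file
apollo.meta=http://localhost:8080
# inject Apollo configuration during the bootstrap phase
apollo.bootstrap.enabled=true
```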
nacos
```yaml
version: "2"
services:
  nacos:
    image: nacos/nacos-server:latest
    container_name: nacos-standalone-mysql
    env_file:
      - ./env/nacos-standlone-mysql.env
    volumes:
      - ./standalone-logs/:/home/nacos/logs
      - ./init.d/custom.properties:/home/nacos/init.d/custom.properties
    ports:
      - "8848:8848"
      - "9555:9555"
    depends_on:
      - mysql
    restart: on-failure
  mysql:
    container_name: mysql
    image: nacos/nacos-mysql:5.7
    env_file:
      - ./env/mysql.env
    volumes:
      - ./mysql:/var/lib/mysql
    ports:
      - "3308:3306"
```
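The compose file references `./env/nacos-standlone-mysql.env` (the filename, typo included, follows the official nacos-docker examples). A sketch of its likely contents — the credentials are placeholders and must match what `./env/mysql.env` provisions:

```properties
PREFER_HOST_MODE=hostname
MODE=standalone
SPRING_DATASOURCE_PLATFORM=mysql
MYSQL_SERVICE_HOST=mysql
MYSQL_SERVICE_DB_NAME=nacos_devtest
MYSQL_SERVICE_PORT=3306
MYSQL_SERVICE_USER=nacos
MYSQL_SERVICE_PASSWORD=nacos
```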
redis
```yaml
version: '2'
services:
  # redis container
  redis:
    # container name
    container_name: redis
    # image to use
    image: redis:6.0.8
    # port mapping
    ports:
      - 6379:6379
    command: redis-server /etc/conf/redis.conf
    # mount points
    volumes:
      - ./data:/data
      - ./conf:/etc/conf
    privileged: true
    # environment variables
    environment:
      - TZ=Asia/Shanghai
      - LANG=en_US.UTF-8
```
Note: for redis.conf under conf, grab a default template file and modify it as needed.
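For reference, a minimal sketch of what `./conf/redis.conf` might contain — the values are illustrative; start from the official template for the full file:

```conf
# listen on all interfaces inside the container
bind 0.0.0.0
port 6379
# persist to the mounted ./data volume
dir /data
appendonly yes
# uncomment to require a password
# requirepass changeme
```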
rocket-mq
```yaml
version: '2'

services:
  # Service for nameserver
  namesrv:
    image: apacherocketmq/rocketmq-nameserver:4.5.0-alpine-operator-0.3.0
    container_name: rmqnamesrv
    ports:
      - 9876:9876
    volumes:
      - ./data/namesrv/logs:/home/rocketmq/logs
    command: sh mqnamesrv
    environment:
      TZ: Asia/Shanghai
      JAVA_OPT_EXT: "-server -Xms512m -Xmx512m -Xmn256m"

  # Service for broker
  broker:
    image: apacherocketmq/rocketmq-broker:4.5.0-alpine-operator-0.3.0
    container_name: rmqbroker-a
    depends_on:
      - namesrv
    ports:
      - 10909:10909
      - 10911:10911
      - 10912:10912
    environment:
      NAMESRV_ADDR: namesrv:9876
      JAVA_OPT_EXT: "-server -Xms512m -Xmx512m -Xmn256m"
    volumes:
      - ./data/broker/logs:/home/rocketmq/logs
      - ./data/broker/store:/home/rocketmq/store
      - ./data/broker/conf/broker.conf:/opt/rocketmq-4.7.1/conf/broker.conf
    command: sh mqbroker -c /opt/rocketmq-4.7.1/conf/broker.conf

  # Service for another broker -- broker1
  broker1:
    image: apacherocketmq/rocketmq-broker:4.5.0-alpine-operator-0.3.0
    container_name: rmqbroker-b
    depends_on:
      - namesrv
    ports:
      - 10929:10909
      - 10931:10911
      - 10932:10912
    environment:
      NAMESRV_ADDR: namesrv:9876
      JAVA_OPT_EXT: "-server -Xms512m -Xmx512m -Xmn256m"
    volumes:
      - ./data1/broker/logs:/home/rocketmq/logs
      - ./data1/broker/store:/home/rocketmq/store
      - ./data1/broker/conf/broker.conf:/opt/rocketmq-4.7.1/conf/broker.conf
    command: sh mqbroker -c /opt/rocketmq-4.7.1/conf/broker.conf

  rmqconsole:
    image: styletang/rocketmq-console-ng
    container_name: rmqconsole
    ports:
      - 8180:8080
    environment:
      TZ: Asia/Shanghai
      JAVA_OPTS: "-Drocketmq.namesrv.addr=namesrv:9876 -Dcom.rocketmq.sendMessageWithVIPChannel=false"
    depends_on:
      - namesrv
```
There are also two configuration files:
- ./data/broker/conf/broker.conf
- ./data1/broker/conf/broker.conf
```conf
## ./data/broker/conf/broker.conf
brokerClusterName = DefaultCluster
brokerName = broker-a
brokerId = 0
deleteWhen = 04
fileReservedTime = 48
brokerRole = ASYNC_MASTER
flushDiskType = ASYNC_FLUSH

## ./data1/broker/conf/broker.conf
brokerClusterName = DefaultCluster
brokerName = broker-b
brokerId = 0
deleteWhen = 04
fileReservedTime = 48
brokerRole = ASYNC_MASTER
flushDiskType = ASYNC_FLUSH
```
seata-server
```yaml
version: "3.1"
services:
  seata-server:
    image: seataio/seata-server:latest
    hostname: seata-server
    ports:
      - 8091:8091
    environment:
      - SEATA_PORT=8091
    expose:
      - 8091
```
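By default this runs Seata standalone with file-based registry. If you instead want it to register with the Nacos instance above, a sketch of a `registry.conf` you could mount into the container — the application name and group here are assumptions:

```conf
registry {
  type = "nacos"
  nacos {
    application = "seata-server"
    serverAddr = "nacos:8848"
    group = "SEATA_GROUP"
  }
}
```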
sentinel
- There is no ready-made Docker image, so we need to write our own:
```dockerfile
FROM openjdk:8

# copy the jar from the build context into the container (COPY works too)
ADD sentinel-dashboard-1.8.0.jar sentinel-dashboard-1.8.0.jar

EXPOSE 8080

# container startup command and arguments: <ENTRYPOINT> "<CMD>"
ENTRYPOINT ["java","-jar","sentinel-dashboard-1.8.0.jar"]
```
- Then write a docker-compose file using the image we just built:
```yaml
version: '3'
services:
  sentinel-dashboard:
    image: sentinel-dashboard:1.8.0
    container_name: sentinel-dashboard
    restart: always
    environment:
      JAVA_OPTS: "-Dserver.port=8080 -Dcsp.sentinel.dashboard.server=localhost:8080 -Dproject.name=sentinel-dashboard -Djava.security.egd=file:/dev/./urandom -Dcsp.sentinel.api.port=8719"
    ports: # quote mappings as strings to avoid port-mapping errors; 8080 is the port EXPOSEd in the Dockerfile
      - "58080:8080"
      - "8719:8719"
    volumes:
      - ./root/logs:/root/logs
```
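A client application can then report to this dashboard. A sketch of the relevant properties, assuming `spring-cloud-starter-alibaba-sentinel` on the client side:

```properties
# dashboard address — 58080 is the host port mapped above
spring.cloud.sentinel.transport.dashboard=localhost:58080
# local port the client opens so the dashboard can pull metrics
spring.cloud.sentinel.transport.port=8719
```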
xxl-job
```yaml
version: '3'
services:
  xxl-job-admin:
    image: xuxueli/xxl-job-admin:2.2.0
    restart: always
    container_name: xxl-job-admin
    depends_on:
      - mysql
    environment:
      PARAMS: '--spring.datasource.url=jdbc:mysql://mysql:3306/xxl_job?Unicode=true&characterEncoding=UTF-8 --spring.datasource.username=root --spring.datasource.password=root'
    ports:
      - 8067:8080
    volumes:
      - ./data/applogs:/data/applogs
```
Note: the database referenced here should be an existing MySQL instance of yours — reuse one rather than spinning up a new container just for this, which would be wasteful. (If that instance lives outside this compose file, replace `depends_on` with `external_links`, since `depends_on` can only reference services defined in the same file.)
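That MySQL instance still needs the `xxl_job` schema the datasource URL points at. A sketch of the preparation — the table definitions ship with the xxl-job source, under `doc/db/tables_xxl_job.sql` in the 2.2.0 tree:

```sql
-- create the database referenced by --spring.datasource.url above,
-- then import the tables_xxl_job.sql script from the xxl-job source
CREATE DATABASE IF NOT EXISTS xxl_job DEFAULT CHARACTER SET utf8mb4;
```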
prometheus (alertmanager + prometheus + grafana)
```yaml
version: '3'
services:
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    volumes:
      - /opt/docker_compose/monitor/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml
      - /opt/docker_compose/monitor/prometheus/alertmanager_rules.yml:/etc/prometheus/alertmanager_rules.yml
    ports:
      - 9090:9090
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'

  grafana:
    image: grafana/grafana
    container_name: grafana
    restart: always
    hostname: grafana
    volumes:
      - /opt/docker_compose/monitor/grafana/grafana.ini:/etc/grafana/grafana.ini
    ports:
      - "3000:3000"

  alertmanager:
    image: prom/alertmanager:latest
    container_name: alertmanager
    hostname: alertmanager
    restart: always
    volumes:
      - /opt/docker_compose/monitor/altermanager/alertmanager.yml:/etc/alertmanager/alertmanager.yml
    ports:
      - "9093:9093"

  prometheus-webhook-alert:
    image: timonwong/prometheus-webhook-dingtalk:v1.3.0
    container_name: prometheus-webhook-alertmanagers
    hostname: webhook-alertmanagers
    restart: always
    volumes:
      - /opt/docker_compose/monitor/prometheus-webhook-dingtalk/config.yml:/etc/prometheus-webhook-dingtalk/config.yml
      - /etc/localtime:/etc/localtime
    ports:
      - "8060:8060"
    entrypoint: /bin/prometheus-webhook-dingtalk --config.file=/etc/prometheus-webhook-dingtalk/config.yml --web.enable-ui
```
Alerting here does not go through Grafana; instead it combines Alertmanager with prometheus-webhook-dingtalk to deliver DingTalk alerts. The prometheus, alertmanager, and grafana configurations are all standard — find a template and adapt it to your needs. The one caveat is prometheus-webhook-dingtalk: its GitHub page says notification templates are configurable, but with the latest version I could not get template changes to take effect no matter what I tried. We'll have to see whether later versions fix this, or dig into its Go code directly.
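For completeness, a sketch of the mounted `prometheus.yml` that ties the containers together — the scrape targets are placeholders:

```yaml
global:
  scrape_interval: 15s
# the rules file mounted in the compose definition above
rule_files:
  - /etc/prometheus/alertmanager_rules.yml
# route firing alerts to the alertmanager container
alerting:
  alertmanagers:
    - static_configs:
        - targets: ['alertmanager:9093']
scrape_configs:
  - job_name: prometheus
    static_configs:
      - targets: ['localhost:9090']
```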
skywalking
```yaml
version: '3.3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.0
    container_name: elasticsearch
    restart: always
    ports:
      - 9200:9200
      - 9300:9300
    environment:
      - discovery.type=single-node
      - bootstrap.memory_lock=true
    network_mode: bridge
    volumes:
      - /data/docker_compose/skywalking/es/config/jvm.options:/usr/share/elasticsearch/config/jvm.options:rw
      - /data/docker_compose/skywalking/es/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - /data/docker/elk/elk_elastic/data:/usr/share/elasticsearch/data:rw
    ulimits:
      memlock:
        soft: -1
        hard: -1
  oap:
    image: apache/skywalking-oap-server:8.1.0-es7
    container_name: oap
    depends_on:
      - elasticsearch
    links:
      - elasticsearch
    network_mode: bridge
    restart: always
    ports:
      - 11800:11800
      - 12800:12800
    environment:
      SW_ES_USER: elastic
      SW_ES_PASSWORD: oasises
      SW_STORAGE: elasticsearch7
      SW_STORAGE_ES_CLUSTER_NODES: elasticsearch:9200
      SW_TRACE_SAMPLE_RATE: 8000
  ui:
    image: apache/skywalking-ui:8.1.0
    container_name: ui
    network_mode: bridge
    depends_on:
      - oap
    links:
      - oap
    restart: always
    ports:
      - 8083:8080
    environment:
      SW_OAP_ADDRESS: oap:12800
```
Note: you'll need to write the detailed ES configuration files yourself.
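To actually feed traces into this stack, each Java service attaches the SkyWalking agent. A sketch of the relevant `agent/config/agent.config` entries — the agent ships with the SkyWalking distribution, and the service name here is a placeholder:

```conf
# logical service name shown in the SkyWalking UI
agent.service_name=${SW_AGENT_NAME:demo-service}
# gRPC address of the OAP — 11800 is the host port mapped above
collector.backend_service=${SW_AGENT_COLLECTOR_BACKEND_SERVICES:localhost:11800}
```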
kibana(ELK)
```yaml
version: '2'
services:
  elk-logstash:
    image: docker.elastic.co/logstash/logstash:7.5.0
    container_name: elk_logstash
    hostname: elk_logstash
    stdin_open: true
    tty: true
    ports:
      - "5000:5000/udp"
      - 5001:5001
    command: logstash --path.settings /etc/logstash -f /etc/logstash/conf.d/logstash.conf
    external_links:
      - elasticsearch
    network_mode: bridge
    volumes:
      - /data1/docker/elk/elk_logstash/conf.d:/etc/logstash/conf.d
      - /data1/docker/elk/elk_logstash/heapdump.hprof:/usr/share/logstash/heapdump.hprof:rw
      - /data1/docker/elk/elk_logstash/gc.log:/usr/share/logstash/gc.log:rw

  elk-kibana:
    image: docker.elastic.co/kibana/kibana:7.5.0
    container_name: elk_kibana
    hostname: elk_kibana
    stdin_open: true
    tty: true
    ports:
      - 5601:5601
    external_links:
      - elasticsearch
    network_mode: bridge
    volumes:
      - /data1/docker/elk/elk_kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
```
- Since we usually run ES as a cluster, the ES container is omitted here
- Templates for the logstash and kibana configurations can also be found on the official site and modified
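As a starting point, a sketch of a `conf.d/logstash.conf` matching the ports mapped above — the index name is a placeholder:

```conf
input {
  udp { port => 5000 codec => json }
  tcp { port => 5001 codec => json_lines }
}
output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    index => "app-log-%{+YYYY.MM.dd}"
  }
}
```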

