Deployment method: see this post:

k8s helm Seata1.5.1 - hunheidaode's blog (CSDN): https://blog.csdn.net/hunheidaode/article/details/126623672

helm-chart: https://heidaodageshiwo.github.io/helm-chart/

1. This article builds on the previous post:

k8s Seata1.5.1 - hunheidaode's blog (CSDN)

I made only minor changes on top of it; the sole difference is that this deployment uses Helm. Everything else works exactly as in the previous post.

2. Environment: Kubernetes, plus an NFS environment or dynamic storage.

NFS:
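The export itself is only a few lines on the NFS host. A minimal sketch, assuming a CentOS-style host at 192.168.56.211 and the paths from this post (adjust subnet and packages to your environment):

yum install -y nfs-utils
mkdir -p /data/k8s/resource
echo '/data/k8s/resource 192.168.56.0/24(rw,sync,no_root_squash)' >> /etc/exports
systemctl enable --now nfs-server
exportfs -arv   # re-export and list the active exports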

3. Download the Seata Helm deployment YAML files. If you don't know where they are, see the previous post.

4. I only changed values.yaml and deployment.yaml under templates/.

Only the last few lines of deployment.yaml changed,

switching the original hostPath to NFS storage.

The full file:

apiVersion: apps/v1
kind: Deployment
metadata:
{{- if .Values.namespace }}
  namespace: {{ .Values.namespace }}
{{- end}}
  name: {{ include "seata-server.name" . }}
  labels:
{{ include "seata-server.labels" . | indent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ include "seata-server.name" . }}
      app.kubernetes.io/instance: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ include "seata-server.name" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 8091
              protocol: TCP
          {{- if .Values.volume }}
          volumeMounts:
            {{- range .Values.volume }}
            - name: {{ .name }}
              mountPath: {{ .mountPath }}
            {{- end}}
          {{- end}}
        {{- if .Values.env }}
          env:
          {{- if .Values.env.seataIp }}
            - name: SEATA_IP
              value: {{ .Values.env.seataIp  | quote }}
          {{- end }}
          {{- if .Values.env.seataPort }}
            - name: SEATA_PORT
              value: {{ .Values.env.seataPort | quote }}
          {{- end }}
          {{- if .Values.env.seataEnv }}
            - name: SEATA_ENV
              value: {{ .Values.env.seataEnv }}
          {{- end }}
          {{- if .Values.env.seataConfigName }}
            - name: SEATA_CONFIG_NAME
              value: {{ .Values.env.seataConfigName }}
          {{- end }}
          {{- if .Values.env.serverNode }}
            - name: SERVER_NODE
              value: {{ .Values.env.serverNode | quote }}
          {{- end }}
          {{- if .Values.env.storeMode }}
            - name: STORE_MODE
              value: {{ .Values.env.storeMode }}
          {{- end }}
        {{- end }}
      {{- if .Values.volume }}
      volumes:
        {{- range .Values.volume }}
        - name: {{ .name }}
          nfs:
            server: {{ .nfsServers }}
            path: {{ .nfsPath }}
        {{- end}}
      {{- end}}

The actual change is just this block:

      {{- if .Values.volume }}
      volumes:
        {{- range .Values.volume }}
        - name: {{ .name }}
          nfs:
            server: {{ .nfsServers }}
            path: {{ .nfsPath }}
        {{- end}}
      {{- end}}

The full values.yaml:

replicaCount: 1

namespace: default

image:
  repository: seataio/seata-server
  tag: 1.5.1
  pullPolicy: IfNotPresent

service:
  type: NodePort
  port: 30091
  nodePort: 30091

env:
  seataPort: "8091"
  storeMode: "file"
  seataIp: "192.168.56.211"
  seataConfigName: "file:/root/seata-config/registry/registry.conf"

volume:
  - name: seata-config
    mountPath: /seata-server/resources
    nfsServers: 192.168.56.211
    nfsPath: /data/k8s/resource

The env entries (everything after seataConfigName:) were configured earlier but never took effect, so they can be left out.

The volume mounts the NFS path /data/k8s/resource into the container at /seata-server/resources.
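With these values, the volumes section of the rendered manifest should come out roughly like this (a sketch of helm template output, assuming the chart directory downloaded in step 3):

[root@master ~]# helm template seata ./seata-server | grep -A 5 'volumes:'
      volumes:
        - name: seata-config
          nfs:
            server: 192.168.56.211
            path: /data/k8s/resource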

Edit the files from the previous post, then upload them to /data/k8s/resource:
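For example (a sketch; the file names follow Seata 1.5.1's /seata-server/resources defaults, adjust to whatever you actually edited):

[root@master ~]# scp application.yml logback-spring.xml root@192.168.56.211:/data/k8s/resource/
[root@master ~]# ssh root@192.168.56.211 ls /data/k8s/resource
application.yml  logback-spring.xml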

application.yaml is also the same as in the previous post:

server:
  port: 7091

spring:
  application:
    name: seata-server

logging:
  config: classpath:logback-spring.xml
  file:
    path: ${user.home}/logs/seata
  extend:
    logstash-appender:
      destination: 127.0.0.1:4560
    kafka-appender:
      bootstrap-servers: 127.0.0.1:9092
      topic: logback_to_logstash
console:
  user:
    username: seata
    password: seata

seata:
  config:
    # support: nacos, consul, apollo, zk, etcd3
    type: nacos
    nacos:
      server-addr: 192.168.56.211:30683
      namespace: public
      group: SEATA_GROUP
      username: nacos
      password: nacos
      ##if use MSE Nacos with auth, mutex with username/password attribute
      #access-key: ""
      #secret-key: ""
      data-id: seata.properties
  registry:
    # support: nacos, eureka, redis, zk, consul, etcd3, sofa
    type: nacos
    preferred-networks: 192.168.*
    nacos:
      application: seata-server
      server-addr: 192.168.56.211:30683
      group: SEATA_GROUP
      namespace: public
      cluster: default
      username: nacos
      password: nacos
      ##if use MSE Nacos with auth, mutex with username/password attribute
      #access-key: ""
      #secret-key: ""

  store:
    # support: file, db, redis
    mode: db
    session:
      mode: db
    lock:
      mode: db
    file:
      dir: sessionStore
      max-branch-session-size: 16384
      max-global-session-size: 512
      file-write-buffer-cache-size: 16384
      session-reload-read-size: 100
      flush-disk-mode: async
    db:
      datasource: druid
      db-type: mysql
      driver-class-name: com.mysql.jdbc.Driver
      url: jdbc:mysql://192.168.56.211:31306/seata?rewriteBatchedStatements=true
      user: root
      password: s00J8De852
      min-conn: 5
      max-conn: 100
      global-table: global_table
      branch-table: branch_table
      lock-table: lock_table
      distributed-lock-table: distributed_lock
      query-limit: 100
      max-wait: 5000
  security:
    secretKey: SeataSecretKey0c382ef121d778043159209298fd40bf3850a017
    tokenValidityInMilliseconds: 1800000
    ignore:
      urls: /,/**/*.css,/**/*.js,/**/*.html,/**/*.map,/**/*.svg,/**/*.png,/**/*.ico,/console-fe/public/**,/api/v1/auth/login      

5. Start it directly:

cd /root/seata 

helm install  seata ./seata-server

6. Check the result:
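For example (a sketch; resource names assume the chart renders them as seata-server, per the templates above):

[root@master ~]# kubectl get pods -l app.kubernetes.io/name=seata-server
[root@master ~]# kubectl logs deploy/seata-server --tail=20
[root@master ~]# kubectl get svc | grep seata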

7. Open Nacos and confirm that seata-server has been registered.

8. The Seata console UI:

When the console kept throwing errors, I downloaded the Seata 1.5.1 source and rebuilt the jar and image myself. I spent an entire afternoon on it and, honestly, hit errors everywhere!

After the whole afternoon it still wouldn't come up, with no useful error output either, so I gave up on that route.

9. Console UI walkthrough

Changes to the Helm chart's service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: {{ include "seata-server.fullname" . }}
  labels:
{{ include "seata-server.labels" . | indent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: {{ .Values.service.port }}
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: {{ include "seata-server.name" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}

Change targetPort: http to:

      targetPort: {{ .Values.service.port }}

so that the console port is actually exposed:

Then values.yaml:

replicaCount: 1

namespace: default

image:
  repository: seataio/seata-server
  #repository: library/seata-server
  tag: 1.5.1
  pullPolicy: IfNotPresent

service:
  type: NodePort
  port: 7091
  
env:
  seataPort: "8091"
  storeMode: "file"
  seataIp: "192.168.56.211"
  seataConfigName: "xxx"

volume:
  - name: seata-config
    mountPath: /seata-server/resources
    nfsServers: 192.168.56.211
    nfsPath: /data/k8s/resource

Start it:

Then browse to ip:32542 (since this values.yaml no longer pins a nodePort, Kubernetes assigns one at random; here it was 32542):
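You can look the assigned NodePort up directly (a sketch; the service name seata-seata-server is taken from the chart's NOTES output quoted later in this post):

[root@master ~]# kubectl get svc seata-seata-server -o jsonpath='{.spec.ports[0].nodePort}'
32542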

But there is an error: the transaction lists won't load.

I also started it locally; the console still reports a MySQL error, but it doesn't affect anything.

It registered with Nacos as well:

Local access throws the same error:

I have no idea what causes this; there are too many logs to pinpoint the concrete error. But the server is definitely deployed and running!

10. Follow-up on the console issue: if you see the same error, it is almost certainly because you also deployed Nacos with k8s or with Helm; that is exactly the situation I described above. So how do you fix it?

I have since solved it; see this Nacos issue for details:

helm 部署 nacos2.1.0 代码不能访问,ui可以访问 · Issue #9075 · alibaba/nacos · GitHub

That issue traces my whole path from deployment, to accessing Nacos, to registration; use it as a reference. (Premise: if you deployed Nacos with k8s or Helm, you will definitely hit this problem!)

11. Final solution:

I tried quite a few approaches:

1. First, I switched MySQL to 5.7.

2. Deploying Seata with Helm in a way that Java clients can reach it, I have not solved that part yet.

The consolation is that with a plain k8s deployment of Seata, Java clients can connect. That problem I did solve.

You won't find this approach anywhere else on CSDN.

Some will say the official docs already cover deployment, with the ports proxied out. Go try them, and try connecting from code. If that works for you, you don't need this post; if not, read on.

The full k8s deployment YAML:

apiVersion: v1
kind: Service
metadata:
  name: seata-server
  namespace: default
  labels:
    k8s-app: seata-server
spec:
  type: ClusterIP
  ports:
    - port: 8091
      targetPort: 8091
      protocol: TCP
      name: http
  selector:
    k8s-app: seata-server
  sessionAffinity: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: seata-server
  namespace: default
  labels:
    k8s-app: seata-server
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: seata-server
  template:
    metadata:
      labels:
        k8s-app: seata-server
    spec:
      containers:
        - name: seata-server
          image: docker.io/seataio/seata-server:1.5.1
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 8091
              protocol: TCP
          env:
            - name: SEATA_IP
              value: 192.168.56.211
            - name: SEATA_PORT
              value: "31113"              
          volumeMounts:
            - name: seata-config
              mountPath: /seata-server/resources
      volumes:
        - name: seata-config
          hostPath:
            path: /root/resources
---
apiVersion: v1
kind: Service
metadata:
  name: seata-server-testzhangqiang
spec:
  selector:
    k8s-app: seata-server
  ports:
    - name: client-port1
      port: 7091
      protocol: TCP
      targetPort: 7091
      nodePort: 31005
    - name: bus-port1
      port: 31113
      protocol: TCP
      targetPort: 31113
      nodePort: 31113
  type: NodePort
---

The main change is to pass environment variables the same way the Helm chart's env block does.
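Assuming the manifest above is saved as seata-server.yaml (file name hypothetical), deploying it is just:

[root@master ~]# kubectl apply -f seata-server.yaml
service/seata-server created
deployment.apps/seata-server created
service/seata-server-testzhangqiang created
[root@master ~]# kubectl get svc seata-server-testzhangqiang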

 

Why proxy port 31113, and why put 31113 into the env?

Because when you restart your project, that is the port it connects to.

If you run Seata on Windows, the port you normally connect to is 8091. But with Seata deployed in k8s, is there an 8091 on your own machine? No! So the code cannot connect.

That is why 31113 goes into the env and gets proxied out as a NodePort, so that locally running code can reach it.
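Before starting the application, a quick TCP probe from the development machine confirms the port is reachable (a sketch; any port-check tool works):

nc -vz 192.168.56.211 31113
# or: telnet 192.168.56.211 31113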

Your application code and configuration also need the following changes.

Change the Seata configuration in Nacos to:

Check the registration info:

          env:
            - name: SEATA_IP
              value: 192.168.56.211
            - name: SEATA_PORT
              value: "31113" 

Here, 192.168.56.211 is the VM's IP.

Make sure the registered address and the IP match:
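One way to check is Nacos' open API (a sketch; if auth is enabled you may first need an accessToken from /nacos/v1/auth/login):

curl -s 'http://192.168.56.211:31000/nacos/v1/ns/instance/list?serviceName=seata-server&groupName=SEATA_GROUP'
# the returned instance should show ip 192.168.56.211 and port 31113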

The client code:

I followed the official dependency matrix strictly: Nacos 2.1, Seata 1.5.1. (I use a MySQL 5.7 database; I fought with MySQL 8 for two weeks and never got it working.)

application.yml (the Feign config must be added, otherwise calls will hit read timeouts; this is how newer versions configure it, and it took me a long time to figure out):

# Data source
spring:
  datasource:
    username: root
    password: 123456
    url: jdbc:mysql://192.168.56.213:31306/seata_order?characterEncoding=utf8&useSSL=false&serverTimezone=UTC
    driver-class-name: com.mysql.jdbc.Driver
    type: com.alibaba.druid.pool.DruidDataSource

    # run SQL scripts on initialization
    schema: classpath:sql/schema.sql
    initialization-mode: never
  application:
    name: alibaba-order-seata
# MyBatis settings
mybatis:
  mapper-locations: classpath:com/xx/order/mapper/*Mapper.xml
  #config-location: classpath:mybatis-config.xml
  typeAliasesPackage: com.xx.order.pojo
  configuration:
    mapUnderscoreToCamelCase: true
server:
  port: 8072

feign:
  client:
    config:
      default:
        readTimeout: 10000
        connectTimeout: 10000

bootstrap.yml (the Nacos config has to go in here):

# Nacos / Seata client config
spring:
  cloud:
    nacos:
      server-addr: 192.168.56.211:31000
      discovery:
        server-addr: 192.168.56.211:31000
        password: nacos
        username: nacos
        enabled: true
seata:
  tx-service-group: zq
  registry:
    nacos:
      group: SEATA_GROUP
      username: nacos
      password: nacos
      application: seata-server
      #      server-addr: 192.168.56.211:31000
      server-addr: 192.168.56.211:31000
      cluster: default
    type: nacos
  config:
    type: nacos
    nacos:
      password: nacos
      username: nacos
      #      server-addr: 192.168.56.211:31000
      server-addr: 192.168.56.211:31000
      group: SEATA_GROUP
  service:
    grouplist:
      default: 192.168.56.211:31113
    disable-global-transaction: false
    #    vgroup-mapping:
    #      default_tx_group: default
    vgroup-mapping:
      default_tx_group: zq

Pay particular attention to the tx-service-group / vgroup-mapping part of this config:
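If you keep the mapping in the Nacos seata.properties data-id instead of the local YAML, the equivalent entries would look roughly like this (a sketch; key names follow Seata's property conventions, values are my assumption based on tx-service-group: zq above):

service.vgroupMapping.zq=default
service.default.grouplist=192.168.56.211:31113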

Done!

Now go try it yourselves!

Startup log from my local machine:


  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::       (v2.3.12.RELEASE)

2022-09-06 17:00:48.843  INFO 132 --- [           main] c.t.order.AlibabaOrderSeataApplication   : No active profile set, falling back to default profiles: default
2022-09-06 17:00:49.895  INFO 132 --- [           main] o.s.cloud.context.scope.GenericScope     : BeanFactory id=f0658dad-eccb-3a82-9f89-e373def31818
2022-09-06 17:00:49.901  INFO 132 --- [           main] trationDelegate$BeanPostProcessorChecker : Bean 'io.seata.spring.boot.autoconfigure.SeataCoreAutoConfiguration' of type [io.seata.spring.boot.autoconfigure.SeataCoreAutoConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-09-06 17:00:49.902  INFO 132 --- [           main] trationDelegate$BeanPostProcessorChecker : Bean 'springApplicationContextProvider' of type [io.seata.spring.boot.autoconfigure.provider.SpringApplicationContextProvider] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-09-06 17:00:49.903  INFO 132 --- [           main] trationDelegate$BeanPostProcessorChecker : Bean 'io.seata.spring.boot.autoconfigure.SeataAutoConfiguration' of type [io.seata.spring.boot.autoconfigure.SeataAutoConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-09-06 17:00:49.958  INFO 132 --- [           main] trationDelegate$BeanPostProcessorChecker : Bean 'failureHandler' of type [io.seata.tm.api.DefaultFailureHandlerImpl] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-09-06 17:00:49.974  INFO 132 --- [           main] trationDelegate$BeanPostProcessorChecker : Bean 'springCloudAlibabaConfiguration' of type [io.seata.spring.boot.autoconfigure.properties.SpringCloudAlibabaConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-09-06 17:00:49.978  INFO 132 --- [           main] trationDelegate$BeanPostProcessorChecker : Bean 'seataProperties' of type [io.seata.spring.boot.autoconfigure.properties.SeataProperties] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-09-06 17:00:49.981  INFO 132 --- [           main] i.s.s.b.a.SeataAutoConfiguration         : Automatically configure Seata
2022-09-06 17:00:50.061  INFO 132 --- [           main] io.seata.config.ConfigurationFactory     : load Configuration from :Spring Configuration
2022-09-06 17:00:50.076  INFO 132 --- [           main] i.seata.config.nacos.NacosConfiguration  : Nacos check auth with userName/password.
2022-09-06 17:00:50.128  INFO 132 --- [           main] c.a.n.p.a.s.c.ClientAuthPluginManager    : [ClientAuthPluginManager] Load ClientAuthService com.alibaba.nacos.client.auth.impl.NacosClientAuthServiceImpl success.
2022-09-06 17:00:50.128  INFO 132 --- [           main] c.a.n.p.a.s.c.ClientAuthPluginManager    : [ClientAuthPluginManager] Load ClientAuthService com.alibaba.nacos.client.auth.ram.RamClientAuthServiceImpl success.
2022-09-06 17:00:53.548  INFO 132 --- [           main] i.s.s.a.GlobalTransactionScanner         : Initializing Global Transaction Clients ... 
2022-09-06 17:00:53.629  INFO 132 --- [           main] i.s.core.rpc.netty.NettyClientBootstrap  : NettyClientBootstrap has started
2022-09-06 17:00:53.652  INFO 132 --- [           main] c.a.n.p.a.s.c.ClientAuthPluginManager    : [ClientAuthPluginManager] Load ClientAuthService com.alibaba.nacos.client.auth.impl.NacosClientAuthServiceImpl success.
2022-09-06 17:00:53.652  INFO 132 --- [           main] c.a.n.p.a.s.c.ClientAuthPluginManager    : [ClientAuthPluginManager] Load ClientAuthService com.alibaba.nacos.client.auth.ram.RamClientAuthServiceImpl success.
2022-09-06 17:00:53.987  INFO 132 --- [           main] i.s.c.r.netty.NettyClientChannelManager  : will connect to 192.168.56.211:31113
2022-09-06 17:00:54.530  INFO 132 --- [           main] i.s.core.rpc.netty.NettyPoolableFactory  : NettyPool create channel to transactionRole:TMROLE,address:192.168.56.211:31113,msg:< RegisterTMRequest{applicationId='alibaba-order-seata', transactionServiceGroup='zq'} >
2022-09-06 17:00:55.836  INFO 132 --- [           main] i.s.c.rpc.netty.TmNettyRemotingClient    : register TM success. client version:1.5.1, server version:1.5.1,channel:[id: 0xccc3ed55, L:/192.168.56.1:56381 - R:/192.168.56.211:31113]
2022-09-06 17:00:55.845  INFO 132 --- [           main] i.s.core.rpc.netty.NettyPoolableFactory  : register success, cost 69 ms, version:1.5.1,role:TMROLE,channel:[id: 0xccc3ed55, L:/192.168.56.1:56381 - R:/192.168.56.211:31113]
2022-09-06 17:00:55.846  INFO 132 --- [           main] i.s.s.a.GlobalTransactionScanner         : Transaction Manager Client is initialized. applicationId[alibaba-order-seata] txServiceGroup[zq]
2022-09-06 17:00:55.864  INFO 132 --- [           main] io.seata.rm.datasource.AsyncWorker       : Async Commit Buffer Limit: 10000
2022-09-06 17:00:55.865  INFO 132 --- [           main] i.s.rm.datasource.xa.ResourceManagerXA   : ResourceManagerXA init ...
2022-09-06 17:00:55.874  INFO 132 --- [           main] i.s.core.rpc.netty.NettyClientBootstrap  : NettyClientBootstrap has started
2022-09-06 17:00:55.874  INFO 132 --- [           main] i.s.s.a.GlobalTransactionScanner         : Resource Manager is initialized. applicationId[alibaba-order-seata] txServiceGroup[zq]
2022-09-06 17:00:55.874  INFO 132 --- [           main] i.s.s.a.GlobalTransactionScanner         : Global Transaction Clients are initialized. 
2022-09-06 17:00:55.877  INFO 132 --- [           main] trationDelegate$BeanPostProcessorChecker : Bean 'io.seata.spring.boot.autoconfigure.SeataDataSourceAutoConfiguration' of type [io.seata.spring.boot.autoconfigure.SeataDataSourceAutoConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-09-06 17:00:56.006  INFO 132 --- [           main] trationDelegate$BeanPostProcessorChecker : Bean 'com.alibaba.cloud.seata.feign.SeataFeignClientAutoConfiguration$FeignBeanPostProcessorConfiguration' of type [com.alibaba.cloud.seata.feign.SeataFeignClientAutoConfiguration$FeignBeanPostProcessorConfiguration] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-09-06 17:00:56.010  INFO 132 --- [           main] trationDelegate$BeanPostProcessorChecker : Bean 'seataFeignObjectWrapper' of type [com.alibaba.cloud.seata.feign.SeataFeignObjectWrapper] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2022-09-06 17:00:56.369  INFO 132 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat initialized with port(s): 8072 (http)
2022-09-06 17:00:56.381  INFO 132 --- [           main] o.apache.catalina.core.StandardService   : Starting service [Tomcat]
2022-09-06 17:00:56.381  INFO 132 --- [           main] org.apache.catalina.core.StandardEngine  : Starting Servlet engine: [Apache Tomcat/9.0.46]
2022-09-06 17:00:56.612  INFO 132 --- [           main] o.a.c.c.C.[Tomcat].[localhost].[/]       : Initializing Spring embedded WebApplicationContext
2022-09-06 17:00:56.612  INFO 132 --- [           main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 7748 ms
2022-09-06 17:00:56.854  INFO 132 --- [           main] c.a.d.s.b.a.DruidDataSourceAutoConfigure : Init DruidDataSource
Loading class `com.mysql.jdbc.Driver'. This is deprecated. The new driver class is `com.mysql.cj.jdbc.Driver'. The driver is automatically registered via the SPI and manual loading of the driver class is generally unnecessary.
2022-09-06 17:00:56.991  INFO 132 --- [           main] com.alibaba.druid.pool.DruidDataSource   : {dataSource-1} inited
2022-09-06 17:00:57.284  INFO 132 --- [           main] i.s.c.r.netty.NettyClientChannelManager  : will connect to 192.168.56.211:31113
2022-09-06 17:00:57.284  INFO 132 --- [           main] i.s.c.rpc.netty.RmNettyRemotingClient    : RM will register :jdbc:mysql://192.168.56.213:31306/seata_order
2022-09-06 17:00:57.285  INFO 132 --- [           main] i.s.core.rpc.netty.NettyPoolableFactory  : NettyPool create channel to transactionRole:RMROLE,address:192.168.56.211:31113,msg:< RegisterRMRequest{resourceIds='jdbc:mysql://192.168.56.213:31306/seata_order', applicationId='alibaba-order-seata', transactionServiceGroup='zq'} >
2022-09-06 17:00:57.300  INFO 132 --- [           main] i.s.c.rpc.netty.RmNettyRemotingClient    : register RM success. client version:1.5.1, server version:1.5.1,channel:[id: 0xe1b055dc, L:/192.168.56.1:56386 - R:/192.168.56.211:31113]
2022-09-06 17:00:57.301  INFO 132 --- [           main] i.s.core.rpc.netty.NettyPoolableFactory  : register success, cost 10 ms, version:1.5.1,role:RMROLE,channel:[id: 0xe1b055dc, L:/192.168.56.1:56386 - R:/192.168.56.211:31113]
2022-09-06 17:00:57.745  INFO 132 --- [           main] o.s.c.openfeign.FeignClientFactoryBean   : For 'alibaba-stock-seata' URL not provided. Will try picking an instance via load-balancing.
2022-09-06 17:00:57.847  INFO 132 --- [           main] i.s.s.a.GlobalTransactionScanner         : Bean[com.xx.order.service.impl.OrderServiceImpl] with name [orderServiceImpl] would use interceptor [io.seata.spring.annotation.GlobalTransactionalInterceptor]
2022-09-06 17:00:57.933  WARN 132 --- [           main] c.n.c.sources.URLConfigurationSource     : No URLs will be polled as dynamic configuration sources.
2022-09-06 17:00:57.933  INFO 132 --- [           main] c.n.c.sources.URLConfigurationSource     : To enable URLs as dynamic configuration sources, define System property archaius.configurationSource.additionalUrls or make config.properties available on classpath.
2022-09-06 17:00:57.944  WARN 132 --- [           main] c.n.c.sources.URLConfigurationSource     : No URLs will be polled as dynamic configuration sources.
2022-09-06 17:00:57.944  INFO 132 --- [           main] c.n.c.sources.URLConfigurationSource     : To enable URLs as dynamic configuration sources, define System property archaius.configurationSource.additionalUrls or make config.properties available on classpath.
2022-09-06 17:00:58.110  INFO 132 --- [           main] o.s.s.concurrent.ThreadPoolTaskExecutor  : Initializing ExecutorService 'applicationTaskExecutor'
2022-09-06 17:00:58.951  INFO 132 --- [           main] o.s.s.c.ThreadPoolTaskScheduler          : Initializing ExecutorService 'Nacos-Watch-Task-Scheduler'
2022-09-06 17:00:59.958  INFO 132 --- [           main] c.a.n.p.a.s.c.ClientAuthPluginManager    : [ClientAuthPluginManager] Load ClientAuthService com.alibaba.nacos.client.auth.impl.NacosClientAuthServiceImpl success.
2022-09-06 17:00:59.958  INFO 132 --- [           main] c.a.n.p.a.s.c.ClientAuthPluginManager    : [ClientAuthPluginManager] Load ClientAuthService com.alibaba.nacos.client.auth.ram.RamClientAuthServiceImpl success.
2022-09-06 17:01:00.268  INFO 132 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8072 (http) with context path ''
2022-09-06 17:01:00.287  INFO 132 --- [           main] c.a.c.n.registry.NacosServiceRegistry    : nacos registry, DEFAULT_GROUP alibaba-order-seata 172.20.10.3:8072 register finished
2022-09-06 17:01:01.047  INFO 132 --- [           main] c.t.order.AlibabaOrderSeataApplication   : Started AlibabaOrderSeataApplication in 14.933 seconds (JVM running for 16.759)
2022-09-06 17:01:13.060  INFO 132 --- [nio-8072-exec-1] o.a.c.c.C.[Tomcat].[localhost].[/]       : Initializing Spring DispatcherServlet 'dispatcherServlet'
2022-09-06 17:01:13.060  INFO 132 --- [nio-8072-exec-1] o.s.web.servlet.DispatcherServlet        : Initializing Servlet 'dispatcherServlet'
2022-09-06 17:01:13.071  INFO 132 --- [nio-8072-exec-1] o.s.web.servlet.DispatcherServlet        : Completed initialization in 11 ms
2022-09-06 17:01:13.142  INFO 132 --- [nio-8072-exec-1] io.seata.tm.TransactionManagerHolder     : TransactionManager Singleton io.seata.tm.DefaultTransactionManager@30a3de9b
2022-09-06 17:01:13.160  INFO 132 --- [nio-8072-exec-1] i.seata.tm.api.DefaultGlobalTransaction  : Begin new global transaction [192.168.56.211:31113:3657226080688205940]
assssssssssssssss
2022-09-06 17:01:14.196  INFO 132 --- [nio-8072-exec-1] c.netflix.config.ChainedDynamicProperty  : Flipping property: alibaba-stock-seata.ribbon.ActiveConnectionsLimit to use NEXT property: niws.loadbalancer.availabilityFilteringRule.activeConnectionsLimit = 2147483647
2022-09-06 17:01:14.231  INFO 132 --- [nio-8072-exec-1] c.netflix.loadbalancer.BaseLoadBalancer  : Client: alibaba-stock-seata instantiated a LoadBalancer: DynamicServerListLoadBalancer:{NFLoadBalancer:name=alibaba-stock-seata,current list of Servers=[],Load balancer stats=Zone stats: {},Server stats: []}ServerList:null
2022-09-06 17:01:14.240  INFO 132 --- [nio-8072-exec-1] c.n.l.DynamicServerListLoadBalancer      : Using serverListUpdater PollingServerListUpdater
2022-09-06 17:01:14.336  INFO 132 --- [nio-8072-exec-1] c.netflix.config.ChainedDynamicProperty  : Flipping property: alibaba-stock-seata.ribbon.ActiveConnectionsLimit to use NEXT property: niws.loadbalancer.availabilityFilteringRule.activeConnectionsLimit = 2147483647
2022-09-06 17:01:14.338  INFO 132 --- [nio-8072-exec-1] c.n.l.DynamicServerListLoadBalancer      : DynamicServerListLoadBalancer for client alibaba-stock-seata initialized: DynamicServerListLoadBalancer:{NFLoadBalancer:name=alibaba-stock-seata,current list of Servers=[172.20.10.3:8073],Load balancer stats=Zone stats: {unknown=[Zone:unknown;	Instance count:1;	Active connections count: 0;	Circuit breaker tripped count: 0;	Active connections per server: 0.0;]
},Server stats: [[Server:172.20.10.3:8073;	Zone:UNKNOWN;	Total Requests:0;	Successive connection failure:0;	Total blackout seconds:0;	Last connection made:Thu Jan 01 08:00:00 CST 1970;	First connection made: Thu Jan 01 08:00:00 CST 1970;	Active Connections:0;	total failure count in last (1000) msecs:0;	average resp time:0.0;	90 percentile resp time:0.0;	95 percentile resp time:0.0;	min resp time:0.0;	max resp time:0.0;	stddev resp time:0.0]
]}ServerList:com.alibaba.cloud.nacos.ribbon.NacosServerList@64034cde
2022-09-06 17:01:15.252  INFO 132 --- [erListUpdater-0] c.netflix.config.ChainedDynamicProperty  : Flipping property: alibaba-stock-seata.ribbon.ActiveConnectionsLimit to use NEXT property: niws.loadbalancer.availabilityFilteringRule.activeConnectionsLimit = 2147483647
2022-09-06 17:01:15.348  INFO 132 --- [nio-8072-exec-1] i.seata.tm.api.DefaultGlobalTransaction  : Suspending current transaction, xid = 192.168.56.211:31113:3657226080688205940
2022-09-06 17:01:15.348  INFO 132 --- [nio-8072-exec-1] i.seata.tm.api.DefaultGlobalTransaction  : [192.168.56.211:31113:3657226080688205940] commit status: Committed
2022-09-06 17:01:16.086  INFO 132 --- [ch_RMROLE_1_1_8] i.s.c.r.p.c.RmBranchCommitProcessor      : rm client handle branch commit process:xid=192.168.56.211:31113:3657226080688205940,branchId=3657226080688205942,branchType=AT,resourceId=jdbc:mysql://192.168.56.213:31306/seata_order,applicationData={"skipCheckLock":true}
2022-09-06 17:01:16.101  INFO 132 --- [ch_RMROLE_1_1_8] io.seata.rm.AbstractRMHandler            : Branch committing: 192.168.56.211:31113:3657226080688205940 3657226080688205942 jdbc:mysql://192.168.56.213:31306/seata_order {"skipCheckLock":true}
2022-09-06 17:01:16.102  INFO 132 --- [ch_RMROLE_1_1_8] io.seata.rm.AbstractRMHandler            : Branch commit result: PhaseTwo_Committed
2022-09-06 17:17:57.315  WARN 132 --- [MetaChecker_1_1] c.a.druid.pool.DruidAbstractDataSource   : discard long time none received connection. , jdbcUrl : jdbc:mysql://192.168.56.213:31306/seata_order?characterEncoding=utf8&useSSL=false&serverTimezone=UTC, version : 1.2.3, lastPacketReceivedIdleMillis : 119997

Wrap-up: the Helm deployment has been solved as well.

Deployment method:

Or see: helm-chart: https://heidaodageshiwo.github.io/helm-chart/

2.1 Installing Seata 1.5.1

Add the repository (the listing below shows it already added; the add command itself is sketched right after):
[root@master ~]# helm repo ls
NAME    URL                                         
myrepo  https://heidaodageshiwo.github.io/helm-chart
[root@master ~]# 
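The repo would have been added beforehand with something like this (name and URL as listed above):

[root@master ~]# helm repo add myrepo https://heidaodageshiwo.github.io/helm-chart
[root@master ~]# helm repo update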

Update the repo, then pull the Helm package (or just download it straight from GitHub):
[root@master ~]#  helm search  repo myrepo
NAME                    CHART VERSION   APP VERSION     DESCRIPTION                                       
myrepo/helloworld       0.1.0           1.16.0          A Helm chart for Kubernetes                       
myrepo/nacos            0.1.5           1.0             A Helm chart for Kubernetes                       
myrepo/nginx            13.1.7          1.23.1          NGINX Open Source is a web server that can be a...
myrepo/seata-server     1.0.0           1.0             Seata Server  


[root@master ~]# mkdir seatahelmtest
[root@master ~]# cd seatahelmtest/
[root@master seatahelmtest]# ll
total 0
[root@master seatahelmtest]# helm pull myrepo/seata-server
[root@master seatahelmtest]# ls
seata-server-1.0.0.tgz
[root@master seatahelmtest]# tar -zxvf seata-server-1.0.0.tgz 
seata-server/Chart.yaml
seata-server/values.yaml
seata-server/templates/NOTES.txt
seata-server/templates/_helpers.tpl
seata-server/templates/deployment.yaml
seata-server/templates/service.yaml
seata-server/templates/tests/test-connection.yaml
seata-server/.helmignore
seata-server/node.yaml
seata-server/servicesss.yaml
[root@master seatahelmtest]# ls
seata-server  seata-server-1.0.0.tgz
[root@master seatahelmtest]# ll
total 4
drwxr-xr-x 3 root root  119 Sep  7 16:37 seata-server
-rw-r--r-- 1 root root 2636 Sep  7 16:37 seata-server-1.0.0.tgz
[root@master seatahelmtest]# ls
seata-server  seata-server-1.0.0.tgz
[root@master seatahelmtest]# helm install seata ./seata-server
NAME: seata
LAST DEPLOYED: Wed Sep  7 16:39:17 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
  export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services seata-seata-server)
  export NODE_IP=$(kubectl get nodes --namespace default -o jsonpath="{.items[0].status.addresses[0].address}")
  echo http://$NODE_IP:$NODE_PORT
[root@master seatahelmtest]# 

2.2 A few notes before installing (if the official deployment method lets you proxy out port 8091, you don't need my approach)

1. First, mount the config files out of the container; I use NFS: https://blog.csdn.net/hunheidaode/article/details/126623672
2. Change the IP address in values.yaml; mine is 192.168.56.211.
3. I use MySQL 5.7; MySQL 8 kept erroring for me.
4. Now you can install.

Browse to ip:31005 for the Seata console UI:

Note this IP and port:

The addresses just need to match. The client connection setup is covered in the post k8s helm Seata1.5.1 (linked above), so I won't repeat it here.

2.3 Test:


If you run into problems, leave a comment on the blog.

 

 
