Dubbo Configuration in Detail

Overview

Dubbo is a high-performance, lightweight open-source Java RPC framework. It provides three core capabilities: interface-oriented remote method invocation, intelligent fault tolerance and load balancing, and automatic service registration and discovery.

The Oinone platform uses dubbo-v2.7.22 by default, and this article is written against that version.

Basic Concepts

When registering providers/consumers, Dubbo uses Netty as the core service for RPC calls, following the basic client/server (C/S) model: the provider acts as the server and the consumer acts as the client.

When the client discovers a callable service through the registry, it uses the server address information published in the registry to connect to the server and issue requests, thereby completing the remote call.
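As a concrete illustration of these roles, here is a minimal sketch using the Dubbo 2.7.x Spring annotations (@DubboService/@DubboReference); the GreetingService interface and class names are hypothetical and are not part of the Oinone platform.

// GreetingService.java - shared API module visible to both provider and consumer (illustrative names)
public interface GreetingService {
    String greet(String name);
}

// GreetingServiceImpl.java - provider side: exported over Dubbo and served by the bound Netty Host/Port
import org.apache.dubbo.config.annotation.DubboService;

@DubboService(version = "1.0.0")
public class GreetingServiceImpl implements GreetingService {
    @Override
    public String greet(String name) {
        return "Hello, " + name;
    }
}

// GreetingClient.java - consumer side: obtains a proxy from the registry and calls it like a local bean
import org.apache.dubbo.config.annotation.DubboReference;
import org.springframework.stereotype.Component;

@Component
public class GreetingClient {

    @DubboReference(version = "1.0.0")
    private GreetingService greetingService;

    public String sayHello() {
        // The call travels over Netty to the provider's registered Host/Port.
        return greetingService.greet("Oinone");
    }
}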

Service Registration (Bind Host/Port)

When the Java application starts, it registers the provider's information with the registry and opens a Host/Port listener for the Netty service in the current environment; together these constitute service registration.

In the text below, bind Host/Port denotes the address the Netty service listens on, and registry Host/Port denotes the address clients use to reach it (i.e. the address published to the registry).

Configuring the bind Host/Port with yaml

PS: this configuration works across environments; changing the deployment mode does not require changing it.

dubbo:
  protocol:
    name: dubbo
    # host: 0.0.0.0
    port: -1

Suppose the available IP of the current environment is 192.168.1.100.

With the above configuration, the Netty service binds to 0.0.0.0:20880 by default, and the service is registered as 192.168.1.100:20880.

Clients call the server via 192.168.1.100:20880.

If port 20880 is already in use, the next available port is chosen automatically, e.g. 20881, 20882, and so on.

If the first available port is 20881, the same configuration makes the Netty service bind to 0.0.0.0:20881, and the service is registered as 192.168.1.100:20881.

Configuring the registry Host/Port with environment variables

When the server runs inside a container, the container's internal network configuration is isolated from the host. To ensure that clients can still call the server, environment variables must additionally be configured inside the container so that clients can reach it through the specified registry Host/Port.

To illustrate the case where port 20880 cannot be used, the following example changes the host-accessible port from 20880 to 20881.

DUBBO_IP_TO_REGISTRY=192.168.1.100
DUBBO_PORT_TO_REGISTRY=20881

Suppose the available IP of the host machine is 192.168.1.100.

With this configuration, the Netty service binds to 0.0.0.0:20881 by default, and the service is registered as 192.168.1.100:20881.

Clients call the server via 192.168.1.100:20881.

Starting with docker/docker-compose

A port mapping is required: map container port 20881 to host port 20881. (The port inside the container changes here; see the Side Note section for the reason.)

docker run

IP=192.168.1.100

# The image name and any remaining options were omitted in the original; ${IMAGE} is a placeholder.
docker run -d --name designer-allinone-full \
-e DUBBO_IP_TO_REGISTRY=$IP \
-e DUBBO_PORT_TO_REGISTRY=20881 \
-p 20881:20881 \
${IMAGE}

docker-compose

services:
  backend:
    container_name: designer-backend
    image: harbor.oinone.top/oinone/designer-backend-v5.0
    restart: always
    environment:
      DUBBO_IP_TO_REGISTRY: 192.168.1.100
      DUBBO_PORT_TO_REGISTRY: 20881
    ports:
      - 20881:20881 # dubbo port

Starting with Kubernetes

Workload (Deployment)

kind: Deployment
apiVersion: apps/v1
spec:
  replicas: 1
  template:
    spec:
      containers:
        - name: designer-backend
          image: harbor.oinone.top/oinone/designer-backend-v5.0
          ports:
            - name: dubbo
              containerPort: 20881
              protocol: TCP
          env:
            - name: DUBBO_IP_TO_REGISTRY
              value: "192.168.1.100"
            - name: DUBBO_PORT_TO_REGISTRY
              value: "20881"

Service (Service)

kind: Service
apiVersion: v1
spec:
  type: NodePort
  ports:
    - name: dubbo
      protocol: TCP
      port: 20881
      targetPort: dubbo
      nodePort: 20881

PS: here targetPort refers to the port name configured at Deployment#spec.template.spec.containers.ports.name. If no name is configured, 20881 can be used to specify the container port number directly.
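For reference, a sketch of the same Service ports block with the container port referenced by number rather than by name; only the targetPort line differs:

  ports:
    - name: dubbo
      protocol: TCP
      port: 20881
      targetPort: 20881
      nodePort: 20881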

Other ways to expose the service in Kubernetes

When deploying in Kubernetes, several configurations can be used to expose the service. The configuration above only exposes port 20881 to the host via Service/NodePort, so other services can call it through the IP of any Kubernetes node.

If the other services are also deployed in Kubernetes, they can call it Service-to-Service: simply set DUBBO_IP_TO_REGISTRY to ${serviceName}.${namespace}.
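For example, assuming the Service is named designer-backend and lives in the namespace oinone (both names are illustrative), the provider's environment would look like this, and in-cluster consumers then reach it through the cluster DNS name on the Service port:

DUBBO_IP_TO_REGISTRY=designer-backend.oinone
DUBBO_PORT_TO_REGISTRY=20881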

If the other services cannot directly reach the Kubernetes master, the call can go through Ingress/Service: set DUBBO_IP_TO_REGISTRY to a domain name that the Ingress can resolve.

Dubbo Invocation Chain Diagrams

PS: the Consumer's bind Host/Port is only used when it acts as a Provider itself; all diagrams below show only a one-way invocation chain.

Terminology

  • Provider: the service provider (JVM)
  • Physical Machine Provider: the physical machine hosting the provider
  • Provider Container: the container hosting the provider
  • Kubernetes Service: the Kubernetes Service resource type
  • Consumer: the service consumer (JVM)
  • Registration Center: the registry; e.g. zookeeper, nacos, etc.
  • bind: the service binds its Host/Port to the given ip:port
  • registry: service registration; publishes the registry Host/Port information to the registration center
  • discovery: service discovery; delivers the registry Host/Port information to the consumer
  • invoke: service invocation; the consumer calls the provider using the provider information obtained from the registration center
  • forward: network forwarding; in container environments some forwarding is usually required so that the call can reach the provider

Physical machine / physical machine invocation chain


``` mermaid
sequenceDiagram

participant p as Provider<br>(bind 0.0.0.0:20880)
participant m as Physical Machine Provider<br>(bind 192.168.1.100:20880)
participant rc as Registration Center<br>(zookeeper/nacos)
participant c as Consumer<br>(bind 0.0.0.0:20881)

p-->>+m: bind 192.168.1.100:20880
m->>+rc: registry 192.168.1.100:20880
rc->>+c: discovery 192.168.1.100:20880
c->>+m: invoke 192.168.1.100:20880
m-->>+p: forward 0.0.0.0:20880
```

PS: the dashed lines indicate that the provider is deployed directly on the physical machine, so no real network processing takes place on those hops.

Container / physical machine invocation chain


``` mermaid
sequenceDiagram

participant p as Provider<br>(bind 0.0.0.0:20881)
participant pc as Provider Container<br>(bind 172.17.1.100:20881)
participant m as Physical Machine Provider<br>(bind 192.168.1.100:20881)
participant rc as Registration Center<br>(zookeeper/nacos)
participant c as Consumer<br>(bind 0.0.0.0:20882)

p-->>+pc: bind 172.17.1.100:20881
pc->>+m: mapping 192.168.1.100:20881
pc->>+rc: registry 192.168.1.100:20881
rc->>+c: discovery 192.168.1.100:20881
c->>+m: invoke 192.168.1.100:20881
m->>+pc: forward 172.17.1.100:20881
pc-->>+p: forward 0.0.0.0:20881
```

PS: the dashed lines indicate that the provider is deployed inside the container, so no real network processing takes place on those hops.

Kubernetes / physical machine (Service/NodePort mode) invocation chain


``` mermaid
sequenceDiagram

participant p as Provider<br>(bind 0.0.0.0:20881)
participant pc as Provider Container<br>(bind 172.17.1.100:20881)
participant ks as Kubernetes Service<br>targetPort 172.17.1.100:20881<br>nodePort 192.168.1.100:20881
participant m as Physical Machine Provider<br>(bind 192.168.1.100:20881)
participant rc as Registration Center<br>(zookeeper/nacos)
participant c as Consumer<br>(bind 0.0.0.0:20882)

p-->>+pc: bind 172.17.1.100:20881
pc->>+ks: mapping 192.168.1.100:20881
ks->>+m: mapping 192.168.1.100:20881
pc->>+rc: registry 192.168.1.100:20881
rc->>+c: discovery 192.168.1.100:20881
c->>+m: invoke 192.168.1.100:20881
m->>+ks: forward 192.168.1.100:20881
ks->>+pc: forward 172.17.1.100:20881
pc-->>+p: forward 0.0.0.0:20881
```

PS: the dashed lines indicate that the provider is deployed inside the container, so no real network processing takes place on those hops.

Side Note

In the dubbo-v2.7.22 source code, the author found that Host and Port are resolved in non-symmetric ways; it is not yet clear whether this is by design or a gap in the author's understanding of Dubbo's design.

  • Observations:

    • The DUBBO_IP_TO_REGISTRY setting is independent of dubbo.protocol.host.
    • The DUBBO_PORT_TO_REGISTRY setting takes precedence over dubbo.protocol.port.
  • The author's view:

    • When a client calls the server, it should use the registry Host/Port; as long as that address can reach the server, the remote call works correctly.
    • The registry Host/Port and the bind Host/Port should be configurable completely independently. When both are configured, the registry Host and the bind Host do take effect independently, but the bind Port is forced to the registry Port. (This is also the main reason calls often fail in container environments; a sketch follows this list.)
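A minimal sketch of that last point, using hypothetical values and based on the findConfigedPorts source quoted in the Source Code Reference section:

dubbo:
  protocol:
    name: dubbo
    port: 20880                  # intended bind port

DUBBO_PORT_TO_REGISTRY=20881     # registry port supplied via environment variable

# Observed result with dubbo-v2.7.22: the service is registered with port 20881 as expected,
# but Netty also binds 0.0.0.0:20881 instead of 20880, because findConfigedPorts
# assigns portToRegistry to portToBind before writing BIND_PORT_KEY.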

Common Configuration

yaml configuration

dubbo:
  application:
    name: pamirs-test
    version: 1.0.0
  registry:
    address: zookeeper://127.0.0.1:2181
    # group: demo
    # timeout: 5000
  protocol:
    name: dubbo
    # host: 0.0.0.0
    port: -1
    serialization: pamirs
    payload: 104857600
  scan:
    base-packages: pro.shushi
  cloud:
    subscribed-services:

  • dubbo.registry.address: registry address
  • dubbo.registry.group: global group configuration
  • dubbo.registry.timeout: global timeout configuration
  • dubbo.protocol.name: protocol name
  • dubbo.protocol.host: bind host IP; default: 0.0.0.0
  • dubbo.protocol.port: bind port; -1 means an available port is chosen automatically; default: 20880
  • dubbo.protocol.serialization: serialization; the Oinone platform must use pamirs as the serialization method.
  • dubbo.protocol.payload: RPC payload size limit, in bytes
  • dubbo.scan.base-packages: package paths scanned for providers/consumers
  • dubbo.cloud.subscribed-services: multi-provider configuration; it is left empty in the example only to suppress a startup warning log and normally does not need to be configured.

Environment variable configuration

DUBBO_IP_TO_REGISTRY=127.0.0.1
DUBBO_PORT_TO_REGISTRY=20880

  • DUBBO_IP_TO_REGISTRY: registry Host configuration
  • DUBBO_PORT_TO_REGISTRY: registry Port configuration
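Both values are read per process; as the getSystemProperty source quoted below shows, the environment variable is read first and a JVM system property of the same name is used as a fallback. A minimal usage sketch (app.jar is a placeholder):

export DUBBO_IP_TO_REGISTRY=127.0.0.1
export DUBBO_PORT_TO_REGISTRY=20880
java -jar app.jar

# Equivalent fallback using JVM system properties of the same names:
java -DDUBBO_IP_TO_REGISTRY=127.0.0.1 -DDUBBO_PORT_TO_REGISTRY=20880 -jar app.jar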

Source Code Reference

  • org.apache.dubbo.config.ServiceConfig#findConfigedHosts
private String findConfigedHosts(ProtocolConfig protocolConfig,
                                 List<URL> registryURLs,
                                 Map<String, String> map) {
    boolean anyhost = false;

    String hostToBind = getValueFromConfig(protocolConfig, DUBBO_IP_TO_BIND);
    if (hostToBind != null && hostToBind.length() > 0 && isInvalidLocalHost(hostToBind)) {
        throw new IllegalArgumentException("Specified invalid bind ip from property:" + DUBBO_IP_TO_BIND + ", value:" + hostToBind);
    }

    // if bind ip is not found in environment, keep looking up
    if (StringUtils.isEmpty(hostToBind)) {
        hostToBind = protocolConfig.getHost();
        if (provider != null && StringUtils.isEmpty(hostToBind)) {
            hostToBind = provider.getHost();
        }
        if (isInvalidLocalHost(hostToBind)) {
            anyhost = true;
            logger.info("No valid ip found from environment, try to get local host.");
            hostToBind = getLocalHost();
        }
    }

    map.put(BIND_IP_KEY, hostToBind);

    // registry ip is not used for bind ip by default
    String hostToRegistry = getValueFromConfig(protocolConfig, DUBBO_IP_TO_REGISTRY);
    if (hostToRegistry != null && hostToRegistry.length() > 0 && isInvalidLocalHost(hostToRegistry)) {
        throw new IllegalArgumentException(
                "Specified invalid registry ip from property:" + DUBBO_IP_TO_REGISTRY + ", value:" + hostToRegistry);
    } else if (StringUtils.isEmpty(hostToRegistry)) {
        // bind ip is used as registry ip by default
        hostToRegistry = hostToBind;
    }

    map.put(ANYHOST_KEY, String.valueOf(anyhost));

    return hostToRegistry;
}
  • org.apache.dubbo.config.ServiceConfig#findConfigedPorts
private Integer findConfigedPorts(ProtocolConfig protocolConfig,
                                  String name,
                                  Map<String, String> map, int protocolConfigNum) {
    Integer portToBind = null;

    // parse bind port from environment
    String port = getValueFromConfig(protocolConfig, DUBBO_PORT_TO_BIND);
    portToBind = parsePort(port);

    // if there's no bind port found from environment, keep looking up.
    if (portToBind == null) {
        portToBind = protocolConfig.getPort();
        if (provider != null && (portToBind == null || portToBind == 0)) {
            portToBind = provider.getPort();
        }
        final int defaultPort = ExtensionLoader.getExtensionLoader(Protocol.class).getExtension(name).getDefaultPort();
        if (portToBind == null || portToBind == 0) {
            portToBind = defaultPort;
        }
        if (portToBind <= 0) {
            portToBind = getRandomPort(name);
            if (portToBind == null || portToBind < 0) {
                portToBind = getAvailablePort(defaultPort);
                putRandomPort(name, portToBind);
            }
        }
    }

    // registry port, not used as bind port by default
    String key = DUBBO_PORT_TO_REGISTRY;
    if (protocolConfigNum > 1) {
        key = getProtocolConfigId(protocolConfig).toUpperCase() + "_" + key;
    }
    String portToRegistryStr = getValueFromConfig(protocolConfig, key);
    Integer portToRegistry = parsePort(portToRegistryStr);
    if (portToRegistry != null) {
        portToBind = portToRegistry;
    }

    // save bind port, used as url's key later
    map.put(BIND_PORT_KEY, String.valueOf(portToBind));

    return portToBind;
}
  • org.apache.dubbo.config.ServiceConfig#getValueFromConfig
private String getValueFromConfig(ProtocolConfig protocolConfig, String key) {
    String protocolPrefix = protocolConfig.getName().toUpperCase() + "_";
    String value = ConfigUtils.getSystemProperty(protocolPrefix + key);
    if (StringUtils.isEmpty(value)) {
        value = ConfigUtils.getSystemProperty(key);
    }
    return value;
}
  • org.apache.dubbo.common.utils.ConfigUtils#getSystemProperty
public static String getSystemProperty(String key) {
    String value = System.getenv(key);
    if (StringUtils.isEmpty(value)) {
        value = System.getProperty(key);
    }
    return value;
}

