Oinone Request Routing: A Source Code Analysis

This article walks through the source code to trace how a request issued by the page is carried over GraphQL to the concrete action, and what implicit processing happens along the way.
Source version analyzed: 5.1.x

The request flow is roughly as follows:

  1. Intercept all designated requests
  2. Assemble the GraphQL request payload
  3. Invoke GraphQL execution
  4. Before hooks intercept and run first (see the sketch after this list):
    • RsqlDecodeHook: RSQL decryption
    • UserHook: user lookup; reads the user ID from cookies, queries the user table, and stores the user info in a thread-local
    • RoleHook: role hook
    • FunctionPermissionHook: function permission hook; skipping permission checks is implemented at this layer, driven by configuration such as:

      pamirs:
        auth:
          fun-filter:
            - namespace: user.PamirsUserTransient
              fun: login # login
            - namespace: top.PetShop
              fun: action

    • DataPermissionHook: data permission hook
    • PlaceHolderHook: placeholder substitution hook
    • RsqlParseHook: RSQL parsing hook
    • SingletonModelUpdateHookBefore
  5. Execute the actual POST payload (the bound action)
  6. After hooks intercept and run afterwards:
    • QueryPageHook4TreeAfter: optimization for parent lookups in tree queries
    • FieldPermissionHook: field permission hook
    • UserQueryPageHookAfter
    • UserQueryOneHookAfter
  7. Wrap up the execution result and return it
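
Steps 4 to 6 are effectively a before/after hook chain wrapped around the action. Below is a minimal, purely illustrative sketch of that dispatch order; the names BeforeHook, AfterHook and HookChain are hypothetical and do not correspond to the actual Pamirs classes, whose real dispatch is shown in FunctionManager further down.

    // Illustrative sketch only: BeforeHook, AfterHook and HookChain are hypothetical names.
    // The real dispatch is done by hookApi.before(...) / hookApi.after(...) in FunctionManager.
    import java.util.List;
    import java.util.function.Function;

    interface BeforeHook { Object[] run(String namespace, String fun, Object[] args); }
    interface AfterHook  { Object run(String namespace, String fun, Object result); }

    class HookChain {
        private final List<BeforeHook> befores; // e.g. RsqlDecodeHook, UserHook, RoleHook, ...
        private final List<AfterHook> afters;   // e.g. FieldPermissionHook, UserQueryPageHookAfter, ...

        HookChain(List<BeforeHook> befores, List<AfterHook> afters) {
            this.befores = befores;
            this.afters = afters;
        }

        Object invoke(String namespace, String fun, Object[] args, Function<Object[], Object> action) {
            for (BeforeHook hook : befores) {        // step 4: before hooks, in order
                args = hook.run(namespace, fun, args);
            }
            Object result = action.apply(args);      // step 5: the bound action itself
            for (AfterHook hook : afters) {          // step 6: after hooks post-process the result
                result = hook.run(namespace, fun, result);
            }
            return result;
        }
    }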

Sequence diagram

(Sequence diagram image from the original article, captioned "Oinone request routing source code analysis – sequence diagram"; not reproduced here.)

Core source code walkthrough

Intercepting all designated requests: /pamirs/{module name}
RequestController

    @RequestMapping(
            value = "/pamirs/{moduleName:^[a-zA-Z][a-zA-Z0-9_]+[a-zA-Z0-9]$}",
            method = RequestMethod.POST
    )
    public String pamirsPost(@PathVariable("moduleName") String moduleName,
                             @RequestBody PamirsClientRequestParam gql,
                             HttpServletRequest request,
                             HttpServletResponse response) {
        // ... request handling elided in the original excerpt ...
    }
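
The moduleName path variable is constrained by the inline regex, so only names that start with a letter, continue with letters, digits or underscores, and end with a letter or digit (three characters minimum) are routed here. A quick, self-contained check of that pattern (the module names below are made up):

    import java.util.regex.Pattern;

    public class ModuleNamePatternCheck {
        public static void main(String[] args) {
            // Same pattern as in the @RequestMapping above.
            Pattern pattern = Pattern.compile("^[a-zA-Z][a-zA-Z0-9_]+[a-zA-Z0-9]$");
            System.out.println(pattern.matcher("base").matches());      // true  -> POST /pamirs/base
            System.out.println(pattern.matcher("demo_core").matches()); // true  -> POST /pamirs/demo_core
            System.out.println(pattern.matcher("1demo").matches());     // false -> must start with a letter
            System.out.println(pattern.matcher("ab").matches());        // false -> too short for this pattern
        }
    }

The @RequestBody is bound to PamirsClientRequestParam; as the executor below shows, it carries at least the GraphQL query string and its variables.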

DefaultRequestExecutor builds the GraphQL request and invokes its execution


    // Call site: () -> execute(GraphQL::execute, param), param
    private <T> T execute(BiFunction<GraphQL, ExecutionInput, T> executor, PamirsRequestParam param) {
        // Build the GraphQL instance, including the GraphQL schema
        GraphQL graphQL = buildGraphQL(param);
        // ...
        ExecutionInput executionInput = ExecutionInput.newExecutionInput()
                .query(param.getQuery())
                .variables(param.getVariables().getVariables())
                .dataLoaderRegistry(Spider.getDefaultExtension(DataLoaderRegistryApi.class).dataLoader())
                .build();
        // ...
        // Invoke GraphQL#execute via the supplied executor
        T result = executor.apply(graphQL, executionInput);
        // ...
        return result;
    }
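
The BiFunction indirection is easy to misread: the caller passes GraphQL::execute as the executor, so executor.apply(graphQL, executionInput) is simply graphQL.execute(executionInput). A stripped-down illustration using plain graphql-java (the Pamirs schema building is omitted); presumably the same indirection also allows an asynchronous variant such as GraphQL::executeAsync to be plugged in without duplicating the method:

    import java.util.function.BiFunction;

    import graphql.ExecutionInput;
    import graphql.ExecutionResult;
    import graphql.GraphQL;

    class ExecutorIndirectionSketch {
        static <T> T run(BiFunction<GraphQL, ExecutionInput, T> executor,
                         GraphQL graphQL, ExecutionInput input) {
            // With GraphQL::execute passed in, this call is equivalent to graphQL.execute(input).
            return executor.apply(graphQL, input);
        }

        static ExecutionResult example(GraphQL graphQL, ExecutionInput input) {
            return run(GraphQL::execute, graphQL, input);
        }
    }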

QueryAndMutationBinder binds GraphQL query and mutation operations

    public static DataFetcher<?> dataFetcher(Function function, ModelConfig modelConfig) {
        if (isAsync()) {
            if (FunctionTypeEnum.QUERY.in(function.getType())) {
                return AsyncDataFetcher.async(dataFetchingEnvironment -> dataFetcherAction(function, modelConfig, dataFetchingEnvironment), ExecutorServiceApi.getExecutorService());
            } else {
                return dataFetchingEnvironment -> dataFetcherAction(function, modelConfig, dataFetchingEnvironment);
            }
        } else {
            return dataFetchingEnvironment -> dataFetcherAction(function, modelConfig, dataFetchingEnvironment);
        }
    }

    private static Object dataFetcherAction(Function function, ModelConfig modelConfig, DataFetchingEnvironment environment) {
        try {
            SessionExtendUtils.tagMainRequest();
            // Use the shared request and response objects
            return Spider.getDefaultExtension(ActionBinderApi.class)
                    .action(modelConfig, function, FunctionTypeEnum.QUERY.in(function.getType()), environment);
        } finally {
            PamirsSession.clearMainSession();
        }
    }
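
The DataFetcher returned here is a standard graphql-java fetcher; Oinone generates the schema and the field-to-fetcher wiring dynamically from model and function metadata. For orientation only, this is how such a fetcher would be attached to a query field in plain graphql-java (the SDL and the field name below are made up):

    import graphql.GraphQL;
    import graphql.schema.DataFetcher;
    import graphql.schema.GraphQLSchema;
    import graphql.schema.idl.RuntimeWiring;
    import graphql.schema.idl.SchemaGenerator;
    import graphql.schema.idl.SchemaParser;
    import graphql.schema.idl.TypeDefinitionRegistry;

    class WiringSketch {
        static GraphQL build(DataFetcher<?> fetcher) {
            // Hypothetical SDL; the real schema is derived from Oinone model metadata.
            String sdl = "type Query { petShopQueryOne(id: ID): String }";
            TypeDefinitionRegistry registry = new SchemaParser().parse(sdl);
            RuntimeWiring wiring = RuntimeWiring.newRuntimeWiring()
                    .type("Query", builder -> builder.dataFetcher("petShopQueryOne", fetcher))
                    .build();
            GraphQLSchema schema = new SchemaGenerator().makeExecutableSchema(registry, wiring);
            return GraphQL.newGraphQL(schema).build();
        }
    }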

FunctionManager executes the concrete function, running the before and after hooks around it

    public Object runProxy(java.util.function.Function<Object[], Object> consumer, Function function, Object... args) {
        SessionMetaBit directive = Objects.requireNonNull(PamirsSession.directive());
        if (directive.isFromClient() || directive.isBuiltAction()) {
            directive.disableFromClient();
            boolean isHook = directive.isHook();
            boolean isDoExtPoint = directive.isDoExtPoint();
            if (isHook) {
                // Run the permission- and user-related before hooks
                hookApi.before(function.getNamespace(), function.getFun(), args);
            }
            // Execute the business method
            Object result = Tx.build(function.getNamespace(), function.getFun()).execute((transactionStatus) -> {
                Object txResult;
                // Use a prefabricated result set, if one was planted in the request variables
                String supplier = "ctxHack" + function.getNamespace() + function.getFun();
                Object supplierObj = Optional.ofNullable(PamirsSession.getRequestVariables())
                        .map(PamirsRequestVariables::getVariables)
                        .map(_variables -> _variables.get(supplier))
                        .orElse(null);
                if (null != supplierObj) {
                    try {
                        txResult = ((Supplier<?>) supplierObj).get();
                    } finally {
                        Optional.ofNullable(PamirsSession.getRequestVariables())
                                .map(PamirsRequestVariables::getVariables)
                                .ifPresent(_variables -> _variables.remove(supplier));
                    }
                } else if (isDoExtPoint) {
                    Object[] tArgs = ArrayUtils.toArray(builtinExtPointExecutor.before(function, args));
                    txResult = builtinExtPointExecutor.override(function, consumer, tArgs);
                    builtinExtPointExecutor.callback(function, args);
                    txResult = builtinExtPointExecutor.after(function, txResult);
                } else {
                    txResult = consumer.apply(args);
                }
                return txResult;
            });
            if (isHook) {
                // Run the after hooks
                hookApi.after(function.getNamespace(), function.getFun(), result);
            }
            return result;
        } else {
            return Tx.build(function.getNamespace(), function.getFun())
                    .execute((transactionStatus) -> consumer.apply(args));
        }
    }
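
One detail worth highlighting is the "ctxHack" branch: if the request variables contain a Supplier under the key "ctxHack" + namespace + fun, runProxy returns that prefabricated result instead of calling the business method, and removes the entry afterwards. A minimal sketch of planting such a result, based only on the accessors visible above (the variables map stands in for whatever PamirsSession.getRequestVariables().getVariables() returns, and is assumed to be mutable, which the remove() call implies):

    import java.util.Map;
    import java.util.function.Supplier;

    class CtxHackSketch {
        // variables: the map behind PamirsSession.getRequestVariables().getVariables()
        static void plantPrebuiltResult(Map<String, Object> variables,
                                        String namespace, String fun, Object prebuilt) {
            String key = "ctxHack" + namespace + fun;       // key format copied from runProxy above
            variables.put(key, (Supplier<Object>) () -> prebuilt);
        }
    }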

Default hook logic

  1. RsqlDecodeHook: RSQL decryption
  2. UserHook: user lookup; reads the user ID from cookies, queries the user table, and stores the user info in a thread-local (see the sketch after this list)
  3. RoleHook: role hook
  4. FunctionPermissionHook: function permission hook; skipping permission checks is applied here
  5. DataPermissionHook: data permission hook
  6. PlaceHolderHook: placeholder substitution hook
  7. RsqlParseHook: RSQL parsing hook
  8. SingletonModelUpdateHookBefore
  9. QueryPageHook4TreeAfter: optimization for parent lookups in tree queries
  10. FieldPermissionHook: field permission hook
  11. UserQueryPageHookAfter
  12. UserQueryOneHookAfter
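
UserHook is described above as resolving the user once per request and parking it in a thread-local so downstream code can read it without re-querying. As a rough, generic illustration of that pattern (the class below is hypothetical and not the Pamirs implementation):

    import java.util.Optional;

    // Hypothetical holder, for illustration only; the real storage lives in the Pamirs session classes.
    class CurrentUserHolder {
        private static final ThreadLocal<Object> CURRENT_USER = new ThreadLocal<>();

        static void set(Object user) {
            CURRENT_USER.set(user);
        }

        static Optional<Object> get() {
            return Optional.ofNullable(CURRENT_USER.get());
        }

        // Must be cleared at the end of the request to avoid leaking users across pooled threads.
        static void clear() {
            CURRENT_USER.remove();
        }
    }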

Oinone Community, by oinone. This is an original article; please credit the source when reposting: https://doc.oinone.top/backend/16247.html
