Argo inputs and outputs: passing a directory or file to the next step with output and input (2)
Differences in passing values between STEPS mode and DAG mode
Note that in steps mode, a value from an earlier step is referenced as:
{{steps.generate-parameter.outputs.parameters.hello-param}}
With DAG templates, use the tasks prefix instead to reference other tasks, for example:
{{tasks.generate-artifact.outputs.artifacts.hello-art}}
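For comparison, here is a minimal sketch of a DAG template entry (it would sit under spec.templates) that passes an artifact with the tasks prefix; the template and artifact names are illustrative, mirroring the steps example below:
- name: artifact-example-dag
  dag:
    tasks:
    - name: generate-artifact
      template: generate
    - name: consume-artifact
      template: consume
      dependencies: [generate-artifact]
      arguments:
        artifacts:
        - name: in-artifact
          # tasks.<task-name> refers to another task in the same DAG
          from: "{{tasks.generate-artifact.outputs.artifacts.out-artifact}}"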
Passing artifacts
Example
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: output-artifacts-
spec:
  entrypoint: output-artifacts
  templates:
  - name: output-artifacts
    steps:
    - - name: generate-artifacts
        template: generate
    - - name: consume-artifacts
        template: consume
        arguments:
          artifacts:
          - name: in-artifact
            from: "{{steps.generate-artifacts.outputs.artifacts.out-artifact}}"
  - name: generate
    container:
      image: docker/whalesay:latest
      command: [sh, -c]
      args: ["echo -n hello world > /tmp/hello_world.txt"]
    outputs:
      artifacts:
      - name: out-artifact
        path: /tmp/hello_world.txt
  - name: consume
    inputs:
      artifacts:
      - name: in-artifact
        path: /tmp/input.txt
    container:
      image: docker/whalesay:latest
      command: [sh, -c]
      args: ["echo 'input artifact contents:' && cat /tmp/input.txt"]
Problems you may run into
controller is not configured with a default archive location
Cause
The artifact mechanism needs somewhere to stage the files it passes between steps, so Argo must be configured with a storage backend.
For background, see the related issue and the Argo controller source code.
Argo currently supports three types of artifact storage: AWS S3, GCS (Google Cloud Storage), and MinIO.
Solution
Configure an S3 storage backend at the point of use.
Add the following under outputs:
s3:
  endpoint: s3.amazonaws.com
  bucket: my-aws-bucket-name
  key: path/in/bucket/my-input-artifact.txt
  accessKeySecret:
    name: my-aws-s3-credentials
    key: accessKey
  secretKeySecret:
    name: my-aws-s3-credentials
    key: secretKey
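The accessKeySecret and secretKeySecret entries above point at a Kubernetes Secret; a sketch of creating it follows, where the secret name and keys match the example and the two values are placeholders to fill in:
apiVersion: v1
kind: Secret
metadata:
  name: my-aws-s3-credentials
type: Opaque
stringData:
  accessKey: <YOUR_AWS_ACCESS_KEY>   # placeholder
  secretKey: <YOUR_AWS_SECRET_KEY>   # placeholder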
If the S3 access keys are already configured in the environment, the accessKeySecret and secretKeySecret entries can be omitted, like this:
s3:
  endpoint: s3.amazonaws.com
  bucket: my-aws-bucket-name
  key: path/in/bucket/my-input-artifact.txt
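Alternatively, rather than repeating the s3 block on every artifact, the controller itself can be given a default archive location through the workflow-controller-configmap. A sketch using the same illustrative bucket and secret names as above:
apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-controller-configmap
  namespace: argo   # assumes Argo is installed in the argo namespace
data:
  artifactRepository: |
    s3:
      endpoint: s3.amazonaws.com
      bucket: my-aws-bucket-name
      accessKeySecret:
        name: my-aws-s3-credentials
        key: accessKey
      secretKeySecret:
        name: my-aws-s3-credentials
        key: secretKey
With a default repository configured, outputs that do not specify their own location no longer trigger the "controller is not configured with a default archive location" error.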