Upload Artifacts to S3
This topic provides details about the settings for the Upload Artifacts to S3 step, which uploads artifacts to AWS or other S3 providers, such as MinIO.
Depending on the stage's build infrastructure, some settings may be unavailable.
Name
Enter a name summarizing the step's purpose. Harness generates an Id (Entity Identifier Reference) based on the Name. You can edit the Id.
AWS Connector
The Harness AWS connector to use when connecting to AWS S3.
The AWS IAM roles and policies associated with the account connected to the Harness AWS connector must be able to push to S3. For more information about roles and permissions for AWS connectors, go to:
Stage variable required for non-default ACLs
S3 buckets use private ACLs by default. Your pipeline must have a PLUGIN_ACL stage variable if you want to use a different ACL.
- In the Pipeline Studio, select the relevant stage, and then select the Overview tab.
- In the Advanced section, add a stage variable.
- Input PLUGIN_ACL as the Variable Name, set the Type to String, and then select Save.
- Input the relevant ACL in the Value field.
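In pipeline YAML, the stage variable above might look like the following sketch (the stage name, identifier, and the public-read ACL value are placeholders, not values from this document):

```yaml
- stage:
    name: Build and Upload
    identifier: build_and_upload
    variables:
      - name: PLUGIN_ACL
        type: String
        value: public-read # example ACL; use the ACL your bucket policy expects
```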
Stage variable required for ARNs
If your AWS connector's authentication uses a cross-account role (ARN), pipeline stages with Upload Artifacts to S3 steps must have a PLUGIN_USER_ROLE_ARN stage variable.
- In the Pipeline Studio, select the stage with the Upload Artifacts to S3 step, and then select the Overview tab.
- In the Advanced section, add a stage variable.
- Input PLUGIN_USER_ROLE_ARN as the Variable Name, set the Type to String, and then select Save.
- In the Value field, input the full ARN value that corresponds with the AWS connector's ARN.
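In pipeline YAML, this stage variable might look like the following sketch (the stage name, identifier, and ARN are placeholders; substitute the ARN that corresponds with your AWS connector):

```yaml
- stage:
    name: Build and Upload
    identifier: build_and_upload
    variables:
      - name: PLUGIN_USER_ROLE_ARN
        type: String
        value: arn:aws:iam::123456789012:role/cross-account-role # placeholder ARN
```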
Region
Define the AWS region to use when uploading the artifact.
Bucket
The name of the S3 bucket where you want to upload the artifact.
Source Path
Path to the artifact file/folder that you want to upload. Harness creates the compressed file automatically.
Optional Configuration
Use the following settings to add additional configuration to the step. Settings specific to containers, such as Set Container Resources, are not applicable when using the step in a stage with VM or Harness Cloud build infrastructure.
Endpoint URL
Endpoint URL for S3-compatible providers. This setting is not needed for AWS.
Target
The path, relative to the S3 bucket, where you want to store the artifact. Do not include the bucket name; you specified this in Bucket. If no path is specified, the artifact is saved to [bucket]/[key].
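Putting the settings above together, an Upload Artifacts to S3 step might look like this in pipeline YAML (a sketch; the connector reference, region, bucket, paths, and endpoint are placeholder values, and the field names assume the S3Upload step schema):

```yaml
- step:
    type: S3Upload
    name: Upload Artifacts to S3
    identifier: upload_artifacts_to_s3
    spec:
      connectorRef: my_aws_connector        # Harness AWS connector ID
      region: us-east-1
      bucket: my-artifact-bucket
      sourcePath: /harness/dist             # file or folder to upload
      target: builds/latest                 # path within the bucket
      # endpoint: https://minio.example.com # only for S3-compatible providers
```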
Run as User
Specify the user ID to use to run all processes in the pod if running in containers. For more information, go to Set the security context for a pod.
Set Container Resources
Maximum resource limits for the container at runtime:
- Limit Memory: Maximum memory that the container can use. You can express memory as a plain integer or as a fixed-point number with the suffixes G or M. You can also use the power-of-two equivalents, Gi or Mi. Do not include spaces when entering a fixed value. The default is 500Mi.
- Limit CPU: The maximum number of cores that the container can use. CPU limits are measured in CPU units. Fractional requests are allowed; for example, you can specify one hundred millicpu as 0.1 or 100m. The default is 400m. For more information, go to Resource units in Kubernetes.
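Where container resource limits apply, they might be expressed in the step YAML like this sketch (the identifier and limit values are illustrative, not defaults from this document):

```yaml
- step:
    type: S3Upload
    name: Upload Artifacts to S3
    identifier: upload_artifacts_to_s3
    spec:
      # ...connector, region, bucket, and source path settings...
      resources:
        limits:
          memory: 1Gi # plain integer or fixed-point number with G/M or Gi/Mi suffix
          cpu: 500m   # CPU units; fractional values such as 0.5 are also allowed
```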
Timeout
Set the timeout limit for the step. Once the timeout limit is reached, the step fails and pipeline execution continues. To set skip conditions or failure handling for steps, go to: