Add support for storage resource #321
Conversation
So there's good news and bad news. 👍 The good news is that everyone that needs to sign a CLA (the pull request submitter and all commit authors) has done so. Everything is all good there. 😕 The bad news is that it appears that one or more commits were authored or co-authored by someone other than the pull request submitter. We need to confirm that all authors are ok with their commits being contributed to this project. Please have them confirm that here in the pull request. Note to project maintainer: This is a terminal state, meaning the …
(force-pushed from 685ccd7 to 4472e16)
CLAs look good, thanks!
(force-pushed from 4472e16 to c5c8dfe)
docs/using.md (outdated)
```yaml
    secretName: service_account.json
```

Note: `GOOGLE_APPLICATION_CREDENTIALS` is the environment variable to be set for the container to download/upload GCS resource.
is this field in the resource? or a different env variable user has to set?
is this field in the resource?
No. If the storage resource is a private bucket then the secret params expect this env variable to be set. I will update the docs to make this clearer. I did not want to hard-code this env variable in the resource definition because in future we could switch to passing a token instead of a service account with minimal code change.
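To make this discussion concrete, a hypothetical resource definition combining the params and secrets described in this thread might look like the following (the resource name, bucket URL, and `apiVersion` string are illustrative, not taken from the PR):

```yaml
apiVersion: pipeline.knative.dev/v1alpha1   # illustrative version string
kind: PipelineResource
metadata:
  name: my-gcs-bucket                       # hypothetical name
spec:
  type: storage
  params:
    - name: type
      value: gcs
    - name: location
      value: gs://some-private-bucket       # hypothetical bucket
  secrets:
    - fieldName: GOOGLE_APPLICATION_CREDENTIALS
      secretKey: bucket-sa
      secretName: service_account.json
```

For a public bucket the `secrets` section would simply be omitted, which is why the env variable is not hard-coded in the resource definition.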
```go
Secrets: []SecretParam{{
	SecretName: "secretName",
	SecretKey:  "secretKey",
	FieldName:  "GOOGLE_APPLICATION_CREDENTIALS",
```
I think this should be a field in the resource and not relying on the env variable
If the bucket is public then the user doesn't have to set it. Otherwise we need some boolean flag to decide whether we want this variable or not. The interface would become complex for no reason.
so now we need the user to set an env variable if the bucket is private?
pkg/reconciler/v1alpha1/taskrun/resources/input_resource_test.go (outdated, resolved)
```diff
@@ -65,12 +73,13 @@ func AddInputResource(
 		return nil, fmt.Errorf("task %q failed to Get Pipeline Resource: %q", taskName, boundResource)
 	}
 
+	// git resource sets source in build so execute this always.
```
we don't support the git repo moving from one task to the next?
We do. Implementation gets unnecessarily complex with git being inherently supported by build as source and the rest of them not. Let me try to simplify this a bit more.
(force-pushed from f8231e2 to 4755eb0)
(force-pushed from 4755eb0 to c6e1c66)
```go
for _, param := range r.Spec.Params {
	switch {
	case strings.EqualFold(param.Name, "type"):
```
in the two scenarios I know about (gs and s3) the type can be derived from the protocol segment of the "location". Do we need type in those cases?
`type` here refers to "gcs" and not the type of the object being downloaded. Clients are configured to deduce the format of the object from the URL.
the prefix of the location s3://my_bucket/ or gs://my_bucket isn't actually the object type but the protocol specifier, which is usually specific to the underlying API. I suppose there may be some storage services that don't have a protocol as part of the location (azure?) so it may be relevant in such cases.
In any case, I don't think it's a big deal.
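The suggestion above — deriving the type from the protocol segment of the location — can be sketched in a few lines of shell (the helper name `derive_type` is ours, purely illustrative):

```shell
#!/bin/sh
# Sketch: infer the blob-storage type from the protocol prefix of the
# "location" param, falling back to "unknown" for locations without a
# protocol segment (where an explicit "type" param would still be needed).
derive_type() {
  case "$1" in
    gs://*) echo "gcs" ;;
    s3://*) echo "s3" ;;
    *)      echo "unknown" ;;
  esac
}

derive_type "gs://my_bucket/"   # prints "gcs"
```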
@pivotal-nader-ziada : I have addressed all your comments. Ready for another round of review.
Generally I am very excited about supporting a new resource for generic binary data!! I have a few questions before I get too far into the review :D
If resources are only declared in output then a copy of resource to be uploaded or shared for next task is expected to be present under `/output/resource_name/`. If resource is declared in both input and output then resource destination path is used instead of `/output/resource_name`.
Woooot thanks for adding these docs! 😍
I'm feeling like the "Container Contract" section is getting big enough that we can maybe start breaking it up into subsections, maybe a section on outputs?
docs/using.md (outdated)
### Storage Resource

Storage resource represents a blob storage, that contains either an object or directory.
Adding the storage resource as an input to a task will download the blob and allow the task to perform the required actions on the contents of the blob. Blob storage type "Google Cloud Storage" is supported as of now.
Maybe something like: "Blob storage type \"Google Cloud Storage\" (`gcs`)", since `gcs` is the type they need to actually use.
docs/using.md (outdated)
Params that can be added are the following:

1. `location`: represents the location of the blobstorage.
1. `type`: represents the type of blob storage. Currently there is implementation
should probably be gcs
docs/using.md (outdated)
1. `location`: represents the location of the blobstorage.
1. `type`: represents the type of blob storage. Currently there is implementation
for only GCS.
1. `dir`: represents whether the blob storage is a director or not.
typo, `directory`?
what does it mean if it isn't a directory?
or is this a path to a directory?
I will add more description to explain why we are using this flag
docs/using.md (outdated)
1. `dir`: represents whether the blob storage is a director or not.

To use GCS private buckets, [service accounts](https://cloud.google.com/compute/docs/access/service-accounts
) could be configured to access GCS bucket for `read` and `write` authorities. Download the service account keys to create a secret.
i think there's an extra newline here that's interfering with "service accounts" being a link?
I'm not sure what "`read` and `write` authorities" means - is this a GCS term? Or is this "`read` and `write` permissions"?
Permissions. I will correct it
docs/using.md (outdated)
```yaml
secrets:
  - fieldName: GOOGLE_APPLICATION_CREDENTIALS
    secretKey: bucket-sa
    secretName: service_account.json
```
nice!!!
```diff
-var kubeconfigWriterImage = flag.String("kubeconfig-writer-image", "override-with-kubeconfig-writer:latest", "The container image containing our kubeconfig writer binary.")
+var (
+	kubeconfigWriterImage = flag.String("kubeconfig-writer-image", "override-with-kubeconfig-writer:latest", "The container image containing our kubeconfig writer binary.")
+	bashNoopImage         = flag.String("bash-noop-image", "override-with-bash-noop:latest", "The container image containing bash shell")
```
before I get too far into this PR - could you explain a bit (maybe even in some comments in the `main.go`s?) more about what these binaries are for? it's a bit hard to tell at a glance.

do we need a generic bash image? alternatively we could run any other container that has bash in it, maybe set the entrypoint to `bash`?
> do we need a generic bash image?

I do not think so, but I remember in `build` there was an image `build-base` which was custom to what the build base image required (I think). I am not entirely sure why we need a new custom image in this case because `.ko.yaml` lets you perfectly override if the user wants a different base image. I added a comment in the `.ko.yaml` file about the expectation of the image.
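As a sketch, the `.ko.yaml` override being discussed might look something like this (the `cmd/gsutil` import path and the image values are our guesses at the layout, not copied from the PR):

```yaml
# Hypothetical .ko.yaml: ko's baseImageOverrides lets an operator swap
# the base image per import path, e.g. for license-compliance reasons.
baseImageOverrides:
  github.com/knative/build-pipeline/cmd/bash: busybox            # any image with sh in $PATH
  github.com/knative/build-pipeline/cmd/gsutil: google/cloud-sdk # any image with gsutil in $PATH
```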
It should be noted that the implementation for S3 is almost exactly the same except for the image and a very small difference in the command-line arguments. Basically the initialize and the two Container Spec functions will be different.
(force-pushed from 0dd3620 to 013ce29)
/test pull-knative-build-pipeline-integration-tests
cmd/bash/main.go (outdated)
```go
func main() {
	flag.Parse()
	logger, _ := logging.NewLogger("", "bash_command")
```
can we rename the `bash_command` to `shell_command` or something similar since you are using `sh`?
docs/using.md (outdated)
1. `location`: represents the location of the blob storage.
1. `type`: represents the type of blob storage. Currently there is implementation for only `gcs`.
1. `dir`: represents whether the blob storage is a directory or not.
1. If artifact is a directory then `-r`(recursive) flag is used to copy all files under source directory to gcs bucket. Eg: `gsutil cp -r source_dir gs://some-bucket`
is there a default value for the `dir` field? since it's not in the example here, it's not clear what the value would be
```yaml
  - fieldName: GOOGLE_APPLICATION_CREDENTIALS
    secretKey: bucket-sa
    secretName: service_account.json
```
The use of fieldName here is confusing because this is not actually a field, I would prefer if it was a field on the gcs resource, doesn't have to be part of the generic interface
```go
// to the desired destination directory
var copyStepsFromPrevTasks []corev1.Container

for i, path := range boundResource.Paths {
```
The name `Paths` does not make it clear at a glance that it means you have to copy from a previous task, maybe something like `passedPath`, `fromOtherTaskPath`, `onVolumePath`, `cachedPath`, `fetchedPath`
Yeah, I agree the field name is not intuitive and it should be updated, but refactoring this would add more changes to this PR, which is already growing too much in scope. I want to follow up with another PR for that change.
yeah, makes sense since this was already there before this PR
```go
// AddOutputResources reads the output resources and adds the corresponding container steps
// This function also reads the inputs to check if resources are redeclared in inputs and has any custom
// target directory.
```
why can't we require that the output path will always be in `output/resource_name` even if it's different in the input? this way it at least will be consistent and easier for users to know what to expect. it might require copying it to the output folder if it's an output
You mean if user declares output resources then expect the resource to be present in `output/resource_name` folder? Then the user has to add a step to copy the resource to the folder path `/output/resource_name`. I think it might be too much work. I agree it's more predictable but I was trying to infer the path from already given information, which means less yaml. I do not have a strong opinion about this as both methods have pros and cons.
> You mean if user declares output resources then expect the resource to be present in output/resource_name folder?

yes exactly

> Then user has to add a step to copy the resource to the folder path /output/resource_name. I think it might be too much work. I agree its more predictable but I was trying to infer the path from already given information which means less yaml. I do not have strong opinion about this as both methods as have pros and cons.

not the user has to add a step, the code automatically does that
If resource is defined in input then in code we can figure out where the resource is placed. I do not see value in copying from input resource destination -> /output/resource_name -> external bucket. As of the current implementation we have input resource destination -> external bucket.
- This would introduce an additional copy step.
- Code complexity is not solved but just moved to input resource. Now input resource has to check if resource is defined in output and add some additional step to ensure copy has happened before resolving outputs. I am not sure of this change.
(force-pushed from 013ce29 to 180f1ba)
Nice work 👍 , few comments 👼
cmd/bash/main.go (outdated)
```go
*/

/*
Base image expects shell in $PATH
```
expects `sh` in …
cmd/controller/main.go (outdated)
```diff
@@ -88,6 +88,14 @@ func main() {
 	logger.Fatalf("Error building pipeline clientset: %v", err)
 }
 
+<<<<<<< HEAD
```
conflicts 😓
I just resolved them. My bad. I forgot to push local changes after resolving conflicts.
docs/using.md (outdated)
```yaml
steps:
  - image: objectuser/run-java-jar #https://hub.docker.com/r/objectuser/run-java-jar/
    command: [jar]
    args: ["-cvf", "-o", "/workspace/outputs/storage-gcs/", "projectname.war", "*"]
```
the path should be /workspace/output/storage-gcs
ah. Good catch
docs/using.md (outdated)
Params that can be added are the following:

1. `location`: represents the location of the blob storage.
1. `type`: represents the type of blob storage. Currently there is implementation for only `gcs`.
`2.`?
docs/using.md (outdated)
1. `location`: represents the location of the blob storage.
1. `type`: represents the type of blob storage. Currently there is implementation for only `gcs`.
1. `dir`: represents whether the blob storage is a directory or not. By default storage artifact is considered not a directory.
`3.`?
docs/using.md (outdated)
1. `type`: represents the type of blob storage. Currently there is implementation for only `gcs`.
1. `dir`: represents whether the blob storage is a directory or not. By default storage artifact is considered not a directory.
1. If artifact is a directory then `-r`(recursive) flag is used to copy all files under source directory to gcs bucket. Eg: `gsutil cp -r source_dir gs://some-bucket`
1. If is a single file like zip, tar files then copy will be only 1 level deep. Eg: `gsutil cp source.tar gs://some-bucket.tar`. This will not trigger copy of sub directories in source directory.
`2.`? (and same below, number ain't right 🙃)
(force-pushed from 13029ef to f7b74f7)
@vdemeester I have addressed your comments. Let me know 👍
- Add implementation for only gcs for now; resource declaration includes bucket location (url) and type, plus params to configure credentials for the bucket.
- If storage resource is added as an input resource then update build steps to pull the blob storage. Params of resources are set as environment variables for private buckets.
- If storage is added as an output resource then a step is added to upload the blob to the bucket after execution of build steps.
- Add documentation on how to use the gcs resource.
- Refactor design for output.
- Add e2e test with a gcs resource that has a secret param set. Test asserts that a volume is mounted for secrets and lists the fetched file.
- This will allow users to provide another base image if they do not want to use "busybox" for open source compliance reasons.
- Simplify logic in taskrun to copy from previous task output or not. Current implementation: taskrun input resource checks whether there is a dependency path set, then it executes copy from pvc instead of fetching a fresh copy.
- Update gcs resource to do 1-level copy if `--dir` param is not set, else the `-r` (recursive) copy flag is set.
- Fix messed-up indentation in docs.
- Add docs for fetching objects from private buckets.
- Add .ko.yaml file to override base image with bash shell.
(force-pushed from f7b74f7 to a4f1ddb)
/test pull-knative-build-pipeline-integration-tests
sgtm 👼
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: shashwathi, vdemeester. The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing …
This is looking great @shashwathi ! 😍
I've swung in with a few last minute comments, main things:
- I still don't understand why we need to be building and releasing the bash image - but this might just be me not reading carefully or some missing docs!
- I think we need to add the new image to our release scripts? (or maybe just update this comment if this happens automatically...)
- Can you start a doc that explains the internals of how this all works? i.e. something that talks about how we pass outputs + inputs - not directed at users of build-pipeline, but instead at maintainers (no worries if you want to do this in a separate PR, this one has been open a while! also feel free to put in the bare minimum, I just want to get something started that we can build on as this gets more complex, and we've got the init container changes coming too...)
I'm happy to LGTM this asap so we can get it merged, don't want to leave it open for too long :D
p.s. I am especially stoked about the awesome user docs in this PR 💃
**Note**: If task is relying on output resource functionality then they cannot mount anything in file path `/workspace/output`.

If resource is declared in both input and output then input resource, then destination path of input resource is used instead of `/workspace/output/resource_name`.
when skimming i thought this was saying the path would still have `output` - could we include an example of the actual path, or link to the section in the docs on it?
```yaml
  value: "world"
```

**Note**: If task is relying on output resource functionality then they cannot mount anything in file path `/workspace/output`.
why do we change the path when a resource is only an `output`? i'm wondering why we decided not to use the same path in both cases?
We could use the same path in both cases, @pivotal-nader-ziada had similar suggestion too. If we take that route then it becomes more burden for operator to have a copy of resource present in path `/workspace/output`. If resource is already declared in input then we know where the resource is already placed, then we are re-using that information.
> If we take that route then it becomes more burden for operator to have a copy of resource present in path /workspace/output.

Is the idea that inputs are always in `/workspace/` and outputs are always in `/workspace/outputs`?

An alternative could be to have unique names for all inputs + outputs, and put all of them in `/workspace/`
True. We could do this.
docs/using.md (outdated)
If resource is declared in both input and output then input resource, then destination path of input resource is used instead of `/workspace/output/resource_name`.

In the following example Task, the `tar-artifact` resource is used both as input and output, so the input resource is downloaded into directory `customworkspace` (as specified in `targetPath`). Step `untar` extracts the tar file into the `tar-scratch-space` directory, `edit-tar` adds a new file and the last step `tar-it-up` creates a new tar file and places it in the `/workspace/customworkspace/` directory. After execution of the task steps, the (new) tar file in directory `customworkspace` will be uploaded to the bucket defined in the `tar-artifact` resource definition.
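A rough sketch of the steps that paragraph describes (the image and the exact file names/args are illustrative, not the PR's actual example):

```yaml
steps:
  - name: untar
    image: busybox                    # illustrative image with tar available
    command: ["tar"]
    args: ["-xf", "/workspace/customworkspace/source.tar", "-C", "/workspace/tar-scratch-space"]
  - name: edit-tar
    image: busybox
    command: ["sh", "-c"]
    args: ["echo hello > /workspace/tar-scratch-space/new-file.txt"]
  - name: tar-it-up
    image: busybox
    command: ["tar"]
    args: ["-cf", "/workspace/customworkspace/source.tar", "-C", "/workspace/tar-scratch-space", "."]
```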
can the docs show the complete path to `customworkspace`, i.e. is it `/customworkspace` or is it `/workspace/customworkspace` or `/workspace/output/customworkspace`?
2. `type`: represents the type of blob storage. Currently there is implementation for only `gcs`.
3. `dir`: represents whether the blob storage is a directory or not. By default storage artifact is considered not a directory.
   * If artifact is a directory then `-r`(recursive) flag is used to copy all files under source directory to GCS bucket. Eg: `gsutil cp -r source_dir gs://some-bucket`
   * If artifact is a single file like zip, tar files then copy will be only 1 level deep (no recursive). It will not trigger copy of sub directories in source directory. Eg: `gsutil cp source.tar gs://some-bucket.tar`.
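The `dir` behaviour above boils down to toggling gsutil's `-r` flag (with `-m` for parallel transfers, as the PR mentions). A minimal sketch of that decision — the `gsutil_cp_cmd` helper is ours, and it only builds the command string rather than invoking gsutil:

```shell
#!/bin/sh
# Build the gsutil copy command: recursive for directories,
# single-level otherwise; -m enables parallel transfers.
gsutil_cp_cmd() {
  src="$1"; dest="$2"; is_dir="$3"
  if [ "$is_dir" = "true" ]; then
    echo "gsutil -m cp -r $src $dest"
  else
    echo "gsutil -m cp $src $dest"
  fi
}

gsutil_cp_cmd source_dir gs://some-bucket true
# prints: gsutil -m cp -r source_dir gs://some-bucket
```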
👍
```diff
@@ -65,7 +65,6 @@ func ValidateResolvedTaskResources(params []v1alpha1.Param, rtr *resources.Resol
 		}
 	}
 }
-// TODO: make sure there aren't missing???
```
🤔
Should I leave this in? There was no name or context. I was cleaning all my `TODO`s so I might have accidentally done too much 😆
hahaha no feel free to remove it, I'm not sure what it is either though I suspect I wrote it 😅
```go
Base image expects gsutil binary in $PATH
gsutil command uses `-m` flag(default) for faster download and uploads.
This file executes cp sub-command like `cp source destination`
*/
```
i think it would be helpful for all of our `main.go`'s to have some info about how to use the resulting binaries, e.g. something like:

    This tool is for _______
    To use it, run ____
    For help, run ______
    For example: ______
@bobcatfish : I have updated the comments to include the new style. Let me know if that's good.
```go
}

// GetParams get params
func (s *GCSResource) GetParams() []Param { return []Param{} }
```
should this be returning `Location` and `Dir`?
I looked at git, image, cluster resource as examples and all of them return empty params, but I can update this to return `Location`. `Dir` is like a runtime param, I am not sure it fits the case here.
kk!
```go
	"name":     s.Name,
	"type":     string(s.Type),
	"location": s.Location,
}
```
what about dir?
`Dir` is decided by controller and not replacement template. So did not add that key
hm kk - it looks in the docs like ppl can specify dir explicitly:

> `dir`: represents whether the blob storage is a directory or not. By default storage artifact is considered not a directory.

so i feel like it would make sense to provide here? but i cant imagine why anyone would want to template with it so maybe it's fine!
```go
}

// GetDownloadContainerSpec returns an array of container specs to download gcs storage object
func (s *GCSResource) GetDownloadContainerSpec() ([]corev1.Container, error) {
```
oooo i like this UploadContainer + DownloadContainer pattern! do you see us doing this for other resource types as well? (e.g. git)
Yes I did not want to do git container spec in this PR but it fits the case well. s3 resource would be a great fit too.
kk sounds great!
There is nothing to add to release script.
I will update …
- In .ko.yaml gsutil base could be replaced with any other image with gsutil binaries for license compliance reasons
- Tests are updated to verify override-gsutil-image is used instead of "google/cloud-sdk" image directly.
- gsutil main.go file logs command output to diagnose failures/success
- Add documentation in main.go files on how they are being used in pipeline crd
- Update documentation to include examples about configuring private GCS buckets
- Fix merge conflicts
(force-pushed from a4f1ddb to 8a935e8)
/test pull-knative-build-pipeline-integration-tests Helm deploy is flaky. Retesting as the new changes were doc related.
@bobcatfish : I have added developer docs. At this point I am dumping my thoughts about the inputs/outputs contract. It is obviously not well framed. We can iterate on the docs in a further PR if you don't mind.
/me shakes fist

That's great!! Thanks so much for getting these started :D :D :D
```diff
@@ -0,0 +1,54 @@
+## Developer docs
```
my very last request is that we have a link to these docs somewhere - maybe from the DEVELOPMENT.md and/or from the main README 🙏
i promise to stop the scope creep after this ❤️
I added another question about the …
(force-pushed from 347b5e6 to e8e851a)
- Adding docs explaining how different parts are working in conjunction. In this initial attempt I am adding the inputs/outputs contract in controllers.
- Link developer docs in README
- Add comment to main.go file about the file to update for base image override.
(force-pushed from e8e851a to d69afcd)
The following is the coverage report on pkg/.
```diff
@@ -36,6 +36,7 @@ We are so excited to have you!
 
 - See [CONTRIBUTING.md](CONTRIBUTING.md) for an overview of our processes
 - See [DEVELOPMENT.md](DEVELOPMENT.md) for how to get started
+- [Deep dive](./docs/developers/README.md) into demystifying the inner workings (advanced reading material)
```
😍
/lgtm
/meow space
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
because it adds two more containers of which the latter, source-copy, fails with {"level":"error","ts":1545214117.0201285,"logger":"fallback-logger","caller":"bash/main.go:62","msg":"Error executing command \"cp -r /workspace/output/builtImage/. /pvc/export-npmlib/builtImage\" with arguments cp: can't stat '/workspace/output/builtImage/.': No such file or directory\n","stacktrace":"main.main\n\t/go/src/github.com/knative/build-pipeline/cmd/bash/main.go:62\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:201"} and I'd rather not figure out why just now.
Co-authored-by: Maria Ntalla [email protected]
Fixes #124
cc @bobcatfish @pivotal-nader-ziada