Add External build strategy #7949
Conversation
@@ -218,6 +218,10 @@ func (g *BuildGenerator) Instantiate(ctx kapi.Context, request *buildapi.BuildRe
		return nil, err
	}

	if bc.Spec.Strategy.ExternalStrategy != nil {
		return nil, &GeneratorFatalError{fmt.Sprintf("can't instantiate from BuildConfig %s/%s: BuildConfig uses an External build strategy", bc.Namespace, bc.Name)}
	}
i think we ultimately are going to want to instantiate these, just like any other BC, we'll instantiate it to create a build object, and then something will be monitoring for those new build jobs to bridge into launching the jenkins job.
(that will necessitate changes to our build controller logic to ignore these though.)
I think we'd do that before we merge this.
i would imagine this will kick the jenkins job and return the jenkins job logs (by launching a pod that will connect to jenkins and stream the logs).
yeah, i figured, i realize this PR is very much a WIP to spur discussion.
Absolutely - this is an early WIP PR to ensure we all understand & agree on the approach.
Thanks @jimmidyson! At first pass it looks like the right direction, will spend more time w/ this post-travel.
@@ -303,6 +303,9 @@ type BuildStrategy struct {

	// CustomStrategy holds the parameters to the Custom build strategy
	CustomStrategy *CustomBuildStrategy `json:"customStrategy,omitempty" description:"holds parameters to the Custom build strategy"`

	// ExternalStrategy holds the parameters to the External build strategy
	ExternalStrategy *ExternalBuildStrategy `json:"externalStrategy,omitempty" description:"holds parameters to the External build strategy"`
I think, based on everything we've said, we want to make this be somewhat specific to workflow but not necessarily (implicitly) specific to Jenkins. We want to pick the most general abstraction that solves the following problems:
- Users want to define a complex workflow on their builds
- A third party is going to implement that workflow
- The third party needs to report back the workflow status to the status sub resource on the build config and clients need to read it (as a versionable API struct)
- A client can parameterize the external workflow (what parameters will we support?)
- If the workflow has a single definition (a Jenkinsfile), can we allow users to directly support that (are there any cases where a Jenkinsfile needs multiple files to work, because it includes another groovy script in the same dir?)
+1
what does the example flow look like? does that mean user will have to create multiple build configs for each step in the pipeline?
no. the build steps are defined in the jenkinsfile (or whatever mechanism the pipeline engine of choice uses to define a workflow). This buildconfig is purely a wrapper around that external workflow/pipeline process. it doesn't care about individual steps.
some of the pipeline steps might be represented by other "normal" buildconfigs (eg an s2i build) but that's not necessarily always going to be the case.
@bparees so openshift itself does not know what the final pipeline looks like? i mean if the jenkinsfile defines it, how are we going to visualize it in the console? i was expecting something like linked build configurations (prev step/next step), where some build configs would be external and some local (regular docker/s2i/custom builds)...
@mfojtik fabric knows how to interrogate a jenkins workflow job definition and visualize it. that's what will come to openshift. The buildconfig will just point to that workflow job. (that's one scenario. another is that that buildconfig actually contains enough metadata on its own to define a workflow, without a jenkinsfile).
but no, there will not be linked buildconfigurations, not all steps will be represented as buildconfiguration objects (eg a human approval step)
Think of the build config as representing an entire build process. The external engine will write back a pretty detailed status blob that will either be strictly API versioned (generic steps, tasks, stages, pipelines, whatever) or we'll figure out how to let it evolve and still have naive UI clients show it. If that external process uses builds, other build configs, deployment configs, or things not on the platform, they'll set the appropriate labels / annotations to let naive clients do something kind of useful, but mostly drive status.
thanks Ben, I think i am getting the idea now :)
we can get the Jenkins plugin to write the status of each Jenkins pipeline build back into the OpenShift Build object with details of which stages have completed, which are pending & the expected duration of each stage; so that the console or CLI can visualise pipeline builds purely using the metadata inside the OpenShift Build object
openshift/jenkins-sync-plugin#2
Is this level of abstraction right? FYI Jenkins Workflow has been renamed to Jenkins Pipeline, so please make sure we're using this name to prevent confusion.
@jimmidyson i'm ok with
@jimmidyson I think External is correct; we may have internal structs within the strategy that are jenkins specific (and optional, in case you are using something other than jenkins in the future), but the strategy is just generically "External"
I'd like to answer the two other questions before we decide on the name - types and kinds of parameters (how would it be used for jenkins pipeline specifically) and then what does the "status" look like. Can someone here take the todo to mock up both of those for a representative pipeline of moderate to high complexity?
"There are two mutually exclusive techniques that are used in the early stages of programming: The Software Engineering method, and the ever-popular Brute Force strategy. Right from the start of our computer careers, we are told that any problem can be broken down into manageable pieces, and that these pieces can be linked together to form a logically constructed program; the method used by Software Engineers. This process is time consuming, yet incredibly simple. Keep the pieces as small as possible, construct each one separately, get it to working, and plug it in. 'This method can be applied to any problem you'll ever have to solve, in the field of computer science, or in real life situations,' says the textbook. Sure. If you've got the time. Brute Force can similarly be applied to any real life situation, and in the early stages it's quicker than the Software Engineering method. It's instinctive, spontaneous, and produces concrete results almost immediately. Read the problem, get a general idea of where you're headed, and head there. Start simply, and then build the sucker. If you don't understand something, ignore it. If it doesn't work, throw it out. Assume you know more about what you're doing than you actually do. It's kind of like picking a nice living room set, and building a house around it." (I will take the todo to mock up the flows)
Specifically I'm asking what a multistage, multiprocess, multiple user and multiple simultaneous build would look like when placed into status, preferably by taking a wild stab at what data we have in Jenkins for a representative output. I'm mostly concerned with seeing a representative sketch of the full object so we can start talking terminology and structure.
I will still take the todo but i wouldn't object to seeing @jimmidyson and @jstrachan's version as well, since they have more experience w/ jenkinsfile pipeline definitions.
Definitely - three attempts here will be able to get us closer.
Conscious that while we need to have a target design in mind, I want to implement iteratively because things will become clearer as we go. Personally I'm still learning a lot about CD, pipelines in general & Jenkins pipeline specifically, but it really feels like we're moving in the right direction already.
@smarterclayton @jimmidyson @bparees can we make a real world example of using a pipeline/external BC? Let's say I have 4 services backed by 4 pods. All pods run a single container with the application. Now I want to build a pipeline that will test, deploy to stage and deploy to prod. So I create the Jenkinsfile in the rails app repo (?). Then I create the BuildConfig with the … Maybe you already discussed this in Miami, but it will help me visualize this in my brain ;-)
in terms of what metadata we'd wanna put into the JenkinsStrategy a start would be:
This Jenkinsfile could define, say, 3 stages. Once the BuildConfig is created, if Jenkins is running with this plugin https://github.com/fabric8io/openshift-jenkins-sync-plugin - this plugin in Jenkins would automatically create a Jenkins job for this pipeline (by watching OpenShift's BuildConfigs as per openshift/jenkins-sync-plugin#1). When this jenkins pipeline job is run (via web hook, via …
@jstrachan should the URL for the Jenkinsfile be an optional API field for JenkinsStrategy?
Force-pushed 834085b to cf0c269
All inputs that would normally be found in source should be an input source (like Dockerfile). The relative path of the Jenkinsfile can be in strategy. We need to add a URL source eventually; that would give the full flexibility.
@mfojtik yeah, most things we can configure on a Pipeline job in jenkins should probably go into the JenkinsStrategy; either embedding the Jenkinsfile, providing a URL to it or referencing it from a path relative to the git repo
Most people are going to be storing their Jenkinsfile in the source repo referenced from the build config source - that should be the primary use case. As Jenkins pipeline accepts only an inline Jenkinsfile or a Jenkinsfile from SCM, users that want to access the Jenkinsfile via a URL will be able to specify the Jenkinsfile URL in JenkinsPipelineStrategy. The Jenkins plugin can download the file & create/update the Jenkins job as needed. Question: how do users trigger an update to the Jenkinsfile if it's accessed via a URL?
@jimmidyson we can't ;) which is why I like the idea of either the Jenkinsfile being stored in git or inside the BuildConfig. Maybe we punt on the URL option for now?
@jstrachan That's the answer I was hoping for ;)
Added JenkinsfilePath (relative to source repo root) & Jenkinsfile (inline pipeline definition) to Jenkins pipeline strategy.
@jimmidyson sounds good!
Force-pushed 0faaa1b to 8bacaf0
@jimmidyson @jstrachan Keep in mind that if you have a legitimate source repo (eg a java app) that contains a jenkinsfile, you're really going to potentially end up with 2 buildconfigs:

1. a pipeline buildconfig that points at the jenkinsfile
2. a buildconfig that builds the app/image from that repo

where (2) is optional because the pipeline defined in (1) could just build the war (or the whole image) directly, rather than doing it in openshift. My point being that the only thing (1) cares about is the jenkinsfile; what repo it's in, or anything else that may or may not be in that repo, should be irrelevant to that buildconfig.
@bparees Agreed - the source repo for a pipeline build config should be the source repo containing the Jenkinsfile.
@jimmidyson well, the other point I should have made is, if i have a microservices app that's built from 5 repos, and a pipeline that knows how to build/deploy all 5 pieces, which repo do i put the jenkinsfile in? I would think in a case like that i would want a separate repo just for the jenkinsfile, that contains nothing else.
API is approved, we can follow up with additional status and the sub resources in a subsequent pull after we merge this.
I think conversions are fixed now.
Force-pushed 9ec5d2b to 4ae9218
Rebased.
@smarterclayton Doesn't look like generation is sorted :( Same errors in Jenkins check build (https://ci.openshift.redhat.com/jenkins/job/test_pull_requests_origin_check/452/console)
Please squash down to 2 commits - one for Michal's, one for yours.
Until we cut over to 1.6 on RHEL, you'll need to generate conversions on go 1.4.2.
I'd like to merge this tomorrow morning - squash, regen on go 1.4.2 (or just revert to what is checked in), I'll do a final pass, and we'll get this in.
Force-pushed 4ae9218 to d9fe147
Squashed to 2 commits (one for Jenkins pipeline strategy, one for on-demand provision of Jenkins). Regenerated stuff too, hope it passes muster...
Obviously the regen'd stuff fails on travis which uses go 1.5.3/1.6...
Unit test failures in your run. You can ignore Travis for now, I'll take the follow up on that.
Hmm failure seems unrelated. Can someone please retest this?
[merge]
LGTM - thank you for the hard work on this
Evaluated for origin merge up to d9fe147
[test]
Evaluated for origin test up to d9fe147
continuous-integration/openshift-jenkins/test FAILURE (https://ci.openshift.redhat.com/jenkins/job/test_pr_origin/3449/)
Your unit tests are still failing. I'll try to pull together a fix and …
continuous-integration/openshift-jenkins/merge FAILURE (https://ci.openshift.redhat.com/jenkins/job/merge_pull_requests_origin/5760/)
Looks like a few more test failures. I'll fix those.
Let's link in the follow ups here as they open - next steps are:
???
Ref: #6954
This just adds a simple External build strategy that indicates that builds will be handled by an external system, e.g. Jenkins. I expect that annotations on the build config will be used to configure anything required on the external job, e.g. path to Jenkinsfile in repo.
I'm also working on a Jenkins plugin that watches build configs with this build strategy & creates/updates/deletes Jenkins jobs as required.
Looking for feedback please.
/cc @jstrachan @rawlingsj @bparees @smarterclayton @mfojtik