[Toaster] Toaster master breakages
Damian, Alexandru
alexandru.damian at intel.com
Fri Jul 10 10:00:54 PDT 2015
Hi Ed,
I have more comments inline.
Cheers,
Alex
On Thu, Jul 9, 2015 at 1:32 PM, Ed Bartosh <ed.bartosh at linux.intel.com>
wrote:
> Hi Alex,
>
> On Mon, Jul 06, 2015 at 12:37:12PM +0100, Damian, Alexandru wrote:
> > Hello all,
> >
> > I've posted a patchset, on Friday the 26th, that broke the Toaster master
> > in a number of ways. As usual in these situations, the breakage was not
> > caused by a single fault, but by a chain of mishaps that expose policy
> > failures in our processes.
>
> Where can I see the current policy/processes/tests described?
>
I am not sure the process is codified anywhere other than in previous emails
and meeting minutes, so I'll go through the key points here:
1 - patches are submitted to the mailing list
2 - if there are no negative comments within (roughly) 2 days, the patch is
deemed accepted
3 - once or twice a week, somebody (the submitter; I used to be the main
executor of this task) takes all accepted patches and creates a submission
patchset
4 - the submitter runs a smoke test on the patchset (mainly ensuring that
Toaster starts and the main pages are accessible); personally, I also
verified that all existing pages are HTML5 compliant and return neither 404
nor 500, that the whitespace is in order, and that all Python files are
syntactically correct (a minimal sketch of such a check follows this list)
5 - if the patchset passes the smoke testing, it is submitted upstream
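To make point 4 concrete, here is a minimal sketch of the kind of smoke
check I mean, in Python 2 (which is what Toaster runs on today); the page
list and source directory are illustrative assumptions, not the actual
script I used:

    import compileall
    import urllib2

    # Pages we expect a running Toaster instance to serve; hypothetical list.
    PAGES = [
        "http://localhost:8000/",
        "http://localhost:8000/toastergui/builds/",
    ]

    for url in PAGES:
        try:
            # urlopen raises HTTPError on 404/500, so success means 2xx/3xx
            response = urllib2.urlopen(url)
            print("OK   %s (%d)" % (url, response.getcode()))
        except urllib2.HTTPError as e:
            print("FAIL %s (%d)" % (url, e.code))

    # Byte-compile every .py file to catch syntax errors; returns 0 on failure.
    if not compileall.compile_dir("bitbake/lib/toaster", quiet=1):
        print("FAIL: syntax errors found")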
The process failures happened at points 2 and 4 above; specifically:
2.1 - unreviewed patches were accepted because there is no way to tell
whether a patch is fine or whether nobody actually reviewed it; we accepted
that risk at the time because we didn't want merging to stall for lack of
reviews - review was deemed opportunistic, not mandatory;
4.1 - the smoke testing is not enough to catch serious bugs, especially
those related to non-conformance with the design, or interactivity and user
experience bugs; I acknowledge this risk, which is rooted in the fact that
we committed features without corresponding test code - whether unit,
functional, acceptance, or integration tests. An example of what such a
test could look like follows.
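As an illustration of the kind of test code that should have landed with
those features, here is a minimal Django view test; the URL name is a
hypothetical stand-in, not a reference to actual Toaster code:

    from django.core.urlresolvers import reverse  # Django 1.6-1.9 era import
    from django.test import TestCase

    class ProjectsPageTest(TestCase):
        """Check that the projects page renders without a server error."""

        def test_projects_page_loads(self):
            # 'all-projects' is a hypothetical URL name used for illustration
            response = self.client.get(reverse('all-projects'))
            self.assertEqual(response.status_code, 200)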
>
> > Paul was kind enough to help analyze the situation, and we've identified
> > some of the gaps:
> > - I've posted upstream a patch that I didn't actually intend to upstream
> > in this release, and which wasn't properly reviewed. This probably
> > happened because I rushed the manual submission.
> > - While, thanks to recent efforts, we've gotten better at unit testing,
> > we still have a significant gap in functional and integration tests,
> > especially automated ones.
> >
> >
> > Michael was very proactive and helpful in fixing the master, and I want
> > to thank him for that - but we shouldn't continue on this path. While we
> > continue to have the policy of not breaking master, we should take steps
> > to put proper policies in place to enforce correct submissions, and to
> > stop ourselves from chasing fires all the time.
> >
> >
> > This is a call for proposals and suggestions. Please help me figure out
> > solutions for better policies around submission and testing, and for
> > proper automation of these tasks.
> >
> > Please reply to this thread with:
> > - observations on any process/policy gaps that you might notice, and
> > proposals for fixing them
> > - suggestions around automating the patch review process
> > - suggestions around possibly changing the review requirements
> > - suggestions around automating the patch upstreaming process
> > - suggestions on improving the functional and integration testing
> > coverage
>
> My suggestions are quite obvious and may be too generic. As I don't know
> what the current policy is, I can't come up with something more specific
> for Toaster.
>
> 1. Review of changes:
>
> - Developers must run tests manually before submitting changes
> for review. Changes that don't pass the tests should not be submitted.
> If the testing environment is not easy to replicate, then
> it should be possible to upload changes to a testing system and run
> the tests remotely.
>
> - It would be great to have automatic testing triggered for every review
> request, with results posted on the mailing list or emailed to the
> maintainers and the submitter.
>
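A tiny driver would be enough to get this started; a rough Python sketch
(the test command, the sender address, and the SMTP host are placeholders,
not an existing setup):

    import smtplib
    import subprocess
    from email.mime.text import MIMEText

    # Run the test suite and capture its output, pass or fail.
    proc = subprocess.Popen(["python", "manage.py", "test"],
                            stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    output = proc.communicate()[0]
    verdict = "PASS" if proc.returncode == 0 else "FAIL"

    # Mail the verdict and log to the list; addresses are placeholders.
    msg = MIMEText(output)
    msg["Subject"] = "[Toaster CI] test run: %s" % verdict
    msg["From"] = "ci@example.com"
    msg["To"] = "toaster@yoctoproject.org"
    smtplib.SMTP("localhost").sendmail(msg["From"], [msg["To"]],
                                       msg.as_string())
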
> 2. Accepting changes to target branches:
>
> - Changes should always be tested, either manually or automatically,
> *before* they are merged into the target branch.
>
> - Changes must be rebased on top of the target branch before testing.
> This would allow us to skip testing after the merge.
>
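For reference, in an automated setup this amounts to something like the
following before each test run (the branch name is an assumption):

    import subprocess

    # Bring the local checkout up to date and replay the patchset on top,
    # so the tested tree matches what would land after the merge.
    subprocess.check_call(["git", "fetch", "origin"])
    subprocess.check_call(["git", "rebase", "origin/master"])
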
> - Pending review requests should be rebased and retested after the target
> branch is updated.
>
> - The target branch must *always* pass *all* test cases. This would make
> it easy to track test regressions: if any test case fails, that is a
> regression. This also means that new test cases must not
> introduce regressions.
>
> - If a set of changes causes a regression, it should be bisected to
> identify the guilty change(s) and remove them from the set. After that,
> all test cases should be run again.
>
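Bisecting can be automated almost entirely; a minimal sketch driving git
bisect from Python (the last known good revision and the test command are
assumptions):

    import subprocess

    LAST_GOOD = "abc1234"  # placeholder: last revision known to pass

    subprocess.check_call(["git", "bisect", "start", "HEAD", LAST_GOOD])
    # 'git bisect run' checks out each candidate commit and runs the given
    # command; exit code 0 marks the commit good, codes 1-124 mark it bad.
    subprocess.check_call(["git", "bisect", "run",
                           "python", "manage.py", "test"])
    subprocess.check_call(["git", "bisect", "reset"])
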
> - If the target branch is broken, fixing it should have the highest
> priority.
>
> 3. Testing coverage
>
> - The path from a fresh installation up to a successful build of a
> core-image-minimal image should be covered by functional tests.
>
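Toaster's UI lends itself to browser-driven functional tests; a minimal
Selenium sketch of the sort of check this implies (the URL and the link
text are assumptions):

    from selenium import webdriver

    # Hypothetical functional check: a fresh Toaster instance offers a way
    # to create a new project from the landing page.
    browser = webdriver.Firefox()
    try:
        browser.get("http://localhost:8000/")
        link = browser.find_element_by_partial_link_text("new project")
        assert link.is_displayed()
    finally:
        browser.quit()
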
> - The more tests we have, the better. Functional and integration tests
> are more important to have than unit or code-style tests, so it's
> better to start with them if we have limited resources.
>
> - If a full test run takes long, we should separate the most important
> test cases into a 'smoke test'.
>
> - A full test run should be continuously checking the target branches to
> identify test regressions as soon as they appear.
>
>
Ed, thank you for the suggestions. They make a lot of sense, and they are a
good base to start a discussion around changing the project policies,
improving the testing code, and adopting the tooling needed to support the
modified policies.
> --
> Regards,
> Ed
>
--
Alex Damian
Yocto Project
SSG / OTC