We should organize this by “launch stage”. Experimental needs a very different (very low!) bar than stable, for example.
We should make this as “self-service” as possible. Experimental, in particular, should be self-service with no one to tell you “no.”
The results of the questions should be linked to some artifact associated with the launch, so we can quickly look them up. A well-done beta launch should make your stable launch a mere formality, and you shouldn’t have to do the paperwork twice.
A group, maybe #wg-architecture or some other group, should have input before things go to beta or stable (not sure which). They don’t all need to agree, but if you have n (2? 3?) approvals with no strongly negative feedback, you’re good to go.
For features not launched to Enterprise, we need an explanation of whether the feature will eventually reach Enterprise and how. Maybe this is just an “is this beta feature on a path to stable, with known issues and planned solutions?” kind of question.
For features not launched in all editors, we need an explanation of which editors and when (or the specific blockers). Ideally #team-cody-core will turn the editors into a consistent platform and this part of the questionnaire can be deleted. But today we definitely have people launch VSCode features with little (or no) regard for JetBrains, the most recent example being autoedits. This problem will get worse when VS goes GA.
For features that rely on backend features, we need some details of how the enablement will work (see the sketch after these notes) and some evidence that QA has a reliable testing environment.
We need some process and transparency around this checklist. It will help people tremendously to see previous launches, watch features move from experimental to beta, and so on.
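To make “enablement” concrete: one typical shape is a client-side check against a server-evaluated feature flag, so the backend owns the decision and QA can flip it per test instance. A minimal sketch, where the FeatureFlagClient interface and the cody-autoedits-backend flag name are hypothetical, not our real API:

```typescript
// Hypothetical interface; the real client API may differ.
interface FeatureFlagClient {
    /** Asks the connected Sourcegraph instance to evaluate a flag for the current user. */
    evaluateFeatureFlag(flagName: string): Promise<boolean>
}

// Gate a backend-dependent feature on a server-evaluated flag so that
// QA can enable it per test instance and older instances degrade to "off".
async function isAutoeditsEnabled(client: FeatureFlagClient): Promise<boolean> {
    try {
        return await client.evaluateFeatureFlag('cody-autoedits-backend') // hypothetical flag name
    } catch {
        return false // unknown flag or unreachable instance: treat the feature as off
    }
}
```

The launch artifact would then name the actual flag, who can flip it, and the instance QA uses to exercise both the on and off states.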
What is the target launch date?
Which launch stage is the feature in?
What population is the feature launching to?
How do we control the rollout to the above population?
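For context on that last question: one common way to control a rollout is deterministic percentage bucketing behind a flag, so the same user stays in (or out) as the percentage ramps up. A minimal sketch; the function names and the cody-autoedits flag are illustrative only:

```typescript
import { createHash } from 'node:crypto'

// Deterministically map a user to a bucket in [0, 100) so the same user
// gets the same answer every time the flag is evaluated.
function rolloutBucket(userId: string, flagName: string): number {
    const digest = createHash('sha256').update(`${flagName}:${userId}`).digest()
    return digest.readUInt32BE(0) % 100
}

// A user is in the rollout if their bucket falls below the current percentage.
function isInRollout(userId: string, flagName: string, percent: number): boolean {
    return rolloutBucket(userId, flagName) < percent
}

// Example: ramping from 5% to 50% of the target population only adds users;
// everyone enabled at 5% stays enabled at 50%.
console.log(isInRollout('user-123', 'cody-autoedits', 5))
console.log(isInRollout('user-123', 'cody-autoedits', 50))
```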