

Fixing the OpenStack Summit Submission Process

You know how it goes. The latest OpenStack Summit is barely over and there's the Call For Papers for the next one. So you (and a thousand or so other people) put together a few ideas and submit them, the community votes, and the winners get selected, right?  Not so fast.  
Here's what really happens.
Voting opens. Then the promotion starts; you beg your friends (and anyone who will listen) to vote for you. Your company puts together a "here are our talks, please vote for them" blog post. Maybe your company also sends out an email asking employees to vote for internally produced talks. You spend a couple of hours looking at hundreds of proposals and indicating how likely you are to attend that particular talk. Voting closes.
Then the track chairs select which talks will be part of the Summit.
Wait — what?  Track chairs?
That's right, the OpenStack Summit Conference talks are selected by track chairs, whose identities have traditionally been secret, along with the criteria they use to make their decisions.  It is these people who are affectionately (or not so affectionately) referred to as the "secret cabal" that chooses the Summit talks.
OpenStack is a community of doers, so it's no surprise that many of the people who feel like there's something wrong with the current process also have suggestions for improving it. "There's an implicit assumption," Steve Gordon, Senior Technical Product Manager at Red Hat told OpenStack:Now, "that anybody who is complaining about the process is doing it because their talks didn't get in, but that's not true."
Here at OpenStack:Now, we decided to take a look at these suggestions. We got insight from people who have successfully navigated the Summit speaking submission process and asked what works, what doesn't, and what they would change. Virtually everyone we spoke to has presented at an OpenStack Summit at least once, and most have also been track chairs at one time or another.


The issues start early, with the voting process itself.
"The number of submissions this year was really large," said Maish Saidel-Keesing, Platform Architect at Cisco, who presented in Vancouver, and who has also served on the committee working on Foundation travel grants. "There's no way you can find a specific or interesting talk unless you were specifically pointed to a URL, which means someone pointed you to it and obviously asked you to vote for it."
The reasons for so many submissions vary. Speaking at the Summit is seen as a badge of honor, and as a selling point for companies who can point to their influence at the event. There's also the very real issue of the influence that comes from presenting ideas at the Summit, which is meant to encourage usage of what was developed over the previous six months and determine what will be done in the next.  
But there are other reasons, as Matt Kassawara of Rackspace, who both presented and served as a track chair for the Vancouver Summit, points out. "A handful of companies make a significant chunk of submissions on the 'if you throw enough spaghetti at the wall, some of it will stick' principle, most of which appear vendor-centric, have a lack of substance (free ticket, please), or just plain suck. These submissions lower the overall signal-to-noise ratio, raise the amount of time necessary to review just a small percentage of total submissions, and bury potentially excellent submissions."
Rising above that noise brings its own challenges. "I think the voting process has been counterproductive for any number of reasons," said Dave Neary, SDN/NFV strategist at Red Hat, who has spoken in Portland, Paris, Hong Kong and Vancouver, and who has been a track chair at least twice. "Some people just aren't comfortable pitching their talks. I just don't think it serves the community well to have this process. These company blog posts -- 'vote for the talks that came from us' -- they're creating a parochialism that's not healthy. Everyone's looking after their own interests and not the project as a whole."
All of this might be a necessary evil if it weren't for one thing: the formal instructions to the track chairs insist that voting should be only one factor in their decision, and the votes aren't always used the way most people think they are. "When I was a track chair, we used the voting for 'color'," said Mirantis Director of Product Management Craig Peters. "The first thing we did was rank the talks based on the votes and we could see immediately that that was not a track; that was a popularity contest.  So we started from the top vote getters and then looked for other talks to fill out a coherent track."
Some track chairs don't even go that far. Former OpenStack Community Manager Stefano Maffulli wrote in his blog that "We never looked at the public votes because those are easily gamed and I think it would be unfair at this point to give priority to someone only because they work for an organization that can promote talks during the voting process. Each candidate needs to be evaluated based on what they bring to the Summits, not on their marketing teams." Furthermore, he told OpenStack:Now, "I think it's good that we have that kind of participation, but I don't think it's fair that that translates into the talk being accepted, because by the time I look at it as a track chair, I've already made up my mind, so there's really no value in the voting results."
Maffulli does see value in the process, however.  "The fact is that some of those proposals simply aren't good enough.  There isn't enough space. Besides, it's useful in that the proposals are all visible, and people can mine that data and find something cool, even if it doesn't make it to the Summit.  For example, organizers of meetups can find some gem, some unknown speaker or some topic for their meetup.  So there's some value in that being open. Ultimately, though, the purpose of the voting system is to get public engagement and involvement before the Summit starts.  It's a party, it's a celebration, it's a look at what's coming.  And for that, I think it works well."
Steve Gordon, who will be speaking in Tokyo, partially agrees. "The big thing that people seem to be willing to acknowledge is that the public vote is largely a worthless Twitter filling exercise.  The primary function seems to be to get the Foundation a marketing start for the Summit process."
We asked OpenStack Foundation Vice President, Marketing and Community Services Lauren Sell about this assertion. "Our community gets excited about the Summit when they start to see topics and themes emerge as sessions are available for voting, but I don’t think publicity is the primary reason for the process," she said. "From my perspective, the opportunity to vote on Summit sessions provides a strong community feedback mechanism, so it’s not just a small group of people making decisions. It also provides a level of transparency because all submitted sessions are published and available to review, analyze, etc. (such as the keyword analysis several community members perform each Summit, or how other community organizers mine the information to recruit speakers for their own regional events). The results give track chairs a starting point (or sometimes a tie breaker when needed) and it helps them rule out sessions that have been consistently poorly reviewed."

Inclusivity and new faces

The current process also has another unfortunate side effect: a skewing of choices towards people who are familiar to the voters, and to the track chairs.  It's not that there isn't a reason for that.  "It's absolutely based on how people know the speakers," said Shamail Tahir, Senior Technologist at EMC, who has spoken in Paris and Vancouver, and will be speaking in Tokyo. "People can't submit their prior work, so that's all the judges have to go on. If I bring in somebody I don't know, are they going to give me a product pitch? You can't always tell from the abstract."
"Yes, it does skew towards people who are known in the community," Stefano Maffulli said, "and I think that's fair. When we get to the point of deciding between two talks that are both compelling, we have a painful choice. If we have to choose between a total unknown who has never spoken at other conferences, or at a meetup or other event, we ask about them. Does anybody in the community know them? Are we confident that they can deliver a talk? If we can't satisfy ourselves that they can, and there's somebody else with a similarly compelling talk and they're known, they get precedence."
The problem goes deeper than simply trying to avoid a popularity contest, however; most of the people who fall into the "known by the community" category are the people building OpenStack. Their opinions and perspectives are certainly invaluable, but so are those of users, application developers, and operators, who are often left out by this process. The community has been making a big push to include operators in planning, even holding a mid-cycle operators' meetup and special sessions at the Summit, but users and application developers are still fighting to be heard.
For this reason it's hard to use even a basic requirement — that of being an Active Technical Contributor — to limit submissions.  "Actually, this came up during the installation guide meeting," Matt Kassawara said, "because testing the installation guide doesn't qualify for ATC, but it requires familiarity with OpenStack, takes quite some time to complete, and provides valuable feedback. I think ATC needs some expansion. There are many ways to contribute without a commit."
"Keeping the Summit centralized isn't behavior you want to promote," Shamail Tahir added, "but you need to make sure that you're providing good sessions.  You want to bring in new faces, but balance it with known quantities who will produce premium content."
Stefano Maffulli says he works hard to solve this problem. "To help make a decision, I check them out on Google, or on LinkedIn, or I look at their Slideshare presentations.  If they didn't write a very compelling bio, I try to find other ways to find out about them.  If they've spoken in the past, I may have a way to find out about them. That's what I mean by 'known': some kind of proven record. I would assume other track chairs are doing all that work as well."
As it turns out, he may be assuming too much. "First time speakers and people they didn't otherwise know automatically went to the bottom of the list," Matt Kassawara said of his term as a track chair. "To a lesser extent, this also impacted people with 'boring' job titles or biographies. I think we need to give everyone a fair chance."

Everybody's got an opinion...

One pattern you might be seeing here is that there is no pattern.  "This whole process causes inconsistency," Dave Neary said. "Some people use the votes consistently, some ignore them, some use them as a guideline. There are a number of reasons that I think we should get rid of it."
Shamail Tahir disagrees. "I don't agree with getting rid of voting altogether. I'm very against that, actually. For me, I think we have to be transparent about how much voting matters. People 'in the know' know that it's one of the things factored in, but others don't understand."
His opinion mirrors Steve Gordon's. "I don't necessarily have a problem with the fact that we have the vote and that feeds into the track system," he said. "I have a problem with the lack of transparency around it. If you look at the confirmation email you get when submitting, there's just a note that 'We'll be in touch after the voting ends.'  So it's not surprising that every six months someone first finds out that there are track chairs at all, and that the vote may be ignored."
This perceived lack of transparency also bothers Mirantis Development Manager Dmitry Borodaenko.  "For a community that prides itself on having everything out in the open, I don't think we're doing a very good job in this case."

... and here's mine

Most of the people who talked to OpenStack:Now about the current process also had ideas for improving it.  Shamail Tahir suggested asking speakers to link to previous presentations to show their capabilities. "That's what LinuxCon does. People generally feel they do a good job, so researching their process would be a good starting point."  He also suggests ensuring diversity by creating track committees of one developer, one working group member, one operator, and one nominated person.
"I don't even want it to be that level of detail," Steve Gordon said. "I just want it to say somewhere that there ARE track chairs, and who they are so people know who to complain to, or who to volunteer to."
Several people mentioned the problems that arise when speakers submit the same or similar talks to different tracks, "just to multiply the possibility that the group will be sent to Tokyo," as Stefano Maffulli put it. "I understand corporate policy," he said, "but that's a lot more work for the track chairs. Unfortunately, limiting submissions can't be automated; any filter can either be gamed or can damage the chances of someone who's legitimately submitting multiple different talks."
Borodaenko suggests a more radical change. "I think we should go more in the direction used by the scientific community and by more mature open source communities such as the Linux kernel." The process, he explained, works like this:
  1. All submissions are made privately; they cannot be disclosed until after the selection process is over, so there's no campaigning, and no biasing of the judges.
  2. The Peer Review panel is made up of a much larger number of people, and it's known who they are, but not who reviewed what. So instead of 3 people reviewing all 300 submissions for a single track, you might have 20 people for each track, each of whom reviews a set of randomly selected submissions. In that case, if each submission were reviewed by 3 judges, that's only 45 reviews per person, rather than 300.
  3. Judges are randomly assigned proposals, which have all identifying information stripped out.  The system will know not to give a judge a proposal from his/her own company.
  4. Judges score each proposal on content (is it an interesting topic?), fit for the conference (should we cover this topic at the Summit?), and presentation (does it look like it's been well thought out and will be presented well?).  These scores are used to determine which presentations get in.
  5. Proposal authors get back the scores, and the explanations. In an ideal world, authors have a chance to appeal and resubmit with improvements based on the comments to be rescored as in steps 3 and 4, but even if there's not enough time for that, authors will have a better idea of why they did or didn't get in, and can use that feedback to create better submissions for next time.
  6. Scores determine which proposals get in, potentially with a final step where a set of publicly known individuals reviews the top scorers to make sure that we don't wind up with 100 sessions on the same topic, but still, the scores should be the final arbiters between whether one proposal or the other is accepted.
"So in the end," he explained, "it's about the content of the proposal, and not who you work for or who knows you or doesn't know you.  Scientific conferences have been doing it this way for many years, and it works very well."
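To make the mechanics of this proposal concrete, here is a minimal sketch in Python of how the conflict-aware random assignment (steps 2 and 3) and the score-based ranking (steps 4 and 6) might work. All names, data structures, and parameters here are hypothetical illustrations, not part of any existing Summit tooling:

```python
import random
from collections import defaultdict

def assign_reviews(proposals, reviewers, reviews_per_proposal=3, seed=42):
    """Randomly assign each proposal to reviewers, skipping anyone from
    the submitting company (the conflict-of-interest rule in step 3).

    proposals: list of dicts like {"id": "p1", "company": "Acme"}
    reviewers: list of dicts like {"name": "r1", "company": "Acme"}
    Returns a dict mapping proposal id -> list of reviewer names.
    """
    rng = random.Random(seed)
    load = defaultdict(int)  # reviews assigned per reviewer, to spread the work
    assignments = {}
    for prop in proposals:
        eligible = [r for r in reviewers if r["company"] != prop["company"]]
        if len(eligible) < reviews_per_proposal:
            raise ValueError(f"Not enough conflict-free reviewers for {prop['id']}")
        # prefer the least-loaded eligible reviewers, breaking ties randomly
        rng.shuffle(eligible)
        eligible.sort(key=lambda r: load[r["name"]])
        chosen = eligible[:reviews_per_proposal]
        for r in chosen:
            load[r["name"]] += 1
        assignments[prop["id"]] = [r["name"] for r in chosen]
    return assignments

def rank_by_score(scores):
    """Average each proposal's (content, fit, presentation) scores and
    rank highest first, as in steps 4 and 6.

    scores: dict of proposal id -> list of (content, fit, presentation) tuples
    """
    totals = {
        pid: sum(sum(triple) for triple in triples) / len(triples)
        for pid, triples in scores.items()
    }
    return sorted(totals, key=totals.get, reverse=True)
```

As the final step in Borodaenko's outline notes, a small publicly known group could still review the ranked output for topic balance, but the scores, not personal familiarity, would drive the decision.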
We put this suggestion to the others we spoke to.
"It's a different approach," Stefano Maffulli said.  "It might work. But I have to say, having been both approved and not approved at LinuxCon, it doesn't remove the pain or the feeling that it's the same people getting in again and again. I think it's a change. It might work and it might not work.  But it does remove that kind of 'party' effect that we have, which makes it different from the Linux Foundation."
"I like not having public proposals, and instead using a peer selected list of reviewers," Dave Neary said. "The main issue at this point is that we're getting 1500 talk proposals, and this seems a little heavier weight than can be doable by the community. I don't think we need to go through all the trouble of anonymizing, and so on.  You just want to pick people who are respected in the community, who are undisputed experts, and who are trusted, and trust them to do the right thing.  I don't think you need any more than that."
"I think that it is a great starting point, and it can be refined," said Maish Saidel-Keesing. "We just have to make sure that we understand the business logic behind why some of the current decisions have been made."

Moving forward

In the end, everyone we spoke to agreed on one thing: they want what's best for OpenStack, and for the OpenStack community. And it seems the Foundation is listening; as we went to press with this article, Lauren Sell told the openstack-community mailing list that the Tokyo selection process page on the OpenStack website would be updated with more details from an Etherpad that most hadn't known existed, but that contains information such as the identity of the track chairs.
So if you're going to Tokyo, have a great time.  If not, we hope to see you in Austin; regardless of whether the voting is a party, the Summit is always our favorite event of the year.
Nick Chase presented in both Atlanta and Vancouver.
Photo by Moyan Brenn on Flickr.
