Abstract Selection for the PASS Summit – the secrets revealed…

I’ve never had the opportunity to be on an abstract selection committee, so it was interesting to see the process in action. To be clear, I still wasn’t on one of the selection committees, but I am on the Program Committee, so I was involved in the process.
The abstract selection committees are chosen from the group of people who apply to volunteer for the Program Committee. We work to ensure that each team includes at least one person who has served on an abstract selection team in the past, in the hope that they can provide some additional guidance. We also provide at least one training session to go over the tools and answer any initial questions.
Prior to the call for speakers, the number of sessions is set; in total, they are allocated to fit the number of rooms we have available. That total is then split between the tracks (Application and Database Development; BI Architecture, Development and Administration; BI Client Reporting and Delivery Topics; Enterprise Database Administration and Deployment; and Professional Development) to help make certain that we provide a balanced Summit selection.
Once the call for speakers closed, we knew that the abstract review committees were going to be in for a lot of work. Here are the numbers we were looking at:
Total # of regular session abstracts submitted: 513
# of regular session community slots allocated: 72
Doing the math, that means only 14% (72 of 513) of the submitted abstracts were going to be selected. Within individual tracks, that percentage ranged from 11% to 18%.
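If you want to check the arithmetic yourself, it’s a one-liner (a quick Python sketch using the numbers above):

```python
# Rough math behind the overall acceptance rate (numbers from above).
submitted = 513  # regular session abstracts submitted
slots = 72       # regular session community slots allocated

print(f"Overall acceptance rate: {slots / submitted:.0%}")  # -> 14%
```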
During the review process, the individuals on each team go through the abstracts in their track and rate them in four different areas – Abstract, Topic, Speaker and Subjective. Each area is rated on a 1-10 scale, and there is space for comments. The Abstract area covers, among other things, whether the abstract is complete (were session goals identified?), clear (is it easy to understand what the session will be about?) and interesting. The Topic area covers the interest in and relevancy of the chosen topic. For the Speaker area, the review teams have access to a report of evaluation data for speakers from previous Summits, and they can also draw on personal knowledge or other information available to them. All of the individual scores are added together to produce a total team rating for each abstract.
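To make the scoring concrete, here is a minimal sketch of how the four 1-10 scores roll up into a team total. The data structure and the sample numbers are hypothetical; only the four rating areas and the summing come from the process described above.

```python
# Hypothetical sketch: each reviewer scores an abstract 1-10 in four areas,
# and the team's total rating is the sum of every individual score.
AREAS = ("Abstract", "Topic", "Speaker", "Subjective")

# One dict of area -> score per reviewer (illustrative numbers).
reviews = [
    {"Abstract": 8, "Topic": 9, "Speaker": 7, "Subjective": 8},
    {"Abstract": 7, "Topic": 8, "Speaker": 8, "Subjective": 7},
    {"Abstract": 9, "Topic": 9, "Speaker": 6, "Subjective": 8},
]

team_total = sum(scores[area] for scores in reviews for area in AREAS)
print(f"Team total rating: {team_total}")  # -> 94
```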
Once the individual team members finished their evaluations, they came together as a team to rank the sessions. Along with the total ratings, they also considered the topics covered, to make sure the selected sessions spanned a broad range. Once the abstracts were ranked, the teams updated each session’s status to Approved, Alternate or Considered (not accepted). If the status was Considered, the team provided a reason why the abstract was not selected.
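A simplified model of the ranking step might look like the sketch below. The straight sort-and-cut logic is illustrative only; as noted above, the teams also balanced topic coverage rather than just sorting by score, and the titles and slot counts here are made up.

```python
# Illustrative only: rank abstracts by team total, then bucket them into
# Approved / Alternate / Considered based on the track's slot allocation.
abstracts = [
    ("Indexing Deep Dive", 94),
    ("Intro to SSRS", 88),
    ("Query Tuning 101", 91),
    ("PowerShell for DBAs", 76),
]
approved_slots = 2   # hypothetical per-track allocation
alternate_slots = 1  # hypothetical number of alternates

ranked = sorted(abstracts, key=lambda a: a[1], reverse=True)
for i, (title, total) in enumerate(ranked):
    if i < approved_slots:
        status = "Approved"
    elif i < approved_slots + alternate_slots:
        status = "Alternate"
    else:
        status = "Considered"  # not accepted; a reason is recorded
    print(f"{status:10} {title} ({total})")
```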
At that point the list of sessions came back to the Program Committee managers. We made certain that the correct number of sessions was chosen for each track and that no speaker had more than two sessions. In the couple of cases where a speaker did have more than two, we went back to the teams for additional selections.
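The two-session cap is the kind of thing that’s easy to check mechanically. Here is a sketch of that sanity check (the speaker names and titles are made up):

```python
from collections import Counter

# Approved sessions as (speaker, title) pairs; all names are hypothetical.
approved = [
    ("Ann", "Indexing Deep Dive"),
    ("Ann", "Query Tuning 101"),
    ("Ann", "Statistics Explained"),
    ("Ben", "Intro to SSRS"),
]

MAX_SESSIONS = 2
counts = Counter(speaker for speaker, _ in approved)
over_cap = {s: n for s, n in counts.items() if n > MAX_SESSIONS}
print(over_cap)  # -> {'Ann': 3}: go back to the team for new selections
```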
That’s it. Well, I guess I mean, those are all of the steps – it’s a ton of work and I’m grateful to everyone involved. We recognize that there are probably ways to improve the process, and we’re setting up meetings with all of the teams to get their input. I hope this answers some of the questions that people might have about the abstract selection process.