Evaluation of Conference Success

In the first phase of the project, the stated purpose was to "Make a Difference" in 8 U.S. communities by challenging the legal process to more effectively prosecute sexual offenders. By that measure, the conference evaluations and other indicators clearly demonstrate that the San Diego conference for U.S. participant-communities was a tremendous success. A full report on the conference evaluation is available.

(1) Quantitative Conference Evaluations

As one evaluative measure of the success of the San Diego training conference, participants provided quantitative ratings on a number of different dimensions, for every single session from the opening banquet to the final group exercise. They rated everything from the quality of the information presented, to the faculty member's style of presentation, and the written materials that were provided. They also evaluated the actual success of each session in achieving a number of training objectives, such as increasing their understanding of specific issues and offering them strategies to use in their own professional work. All quantitative ratings were provided on a 5-point scale, where 1 = "not really" and 5 = "very much," so higher scores indicated a more favorable evaluation.

When examining the quantitative evaluations, there was an obvious discrepancy between the sessions focused on providing specific training material and those focused on data collection efforts. On the one hand, virtually all of the sessions providing actual training content received ratings above 4.0, reflecting an extremely positive evaluation. In contrast, sessions focused on the purpose and methodology for data collection were rated considerably lower, typically in the range of 3.0 to 4.0. These concerns regarding the data collection methodology were raised repeatedly by conference participants. Nevertheless, the overall evaluations of the conference were extremely high. As illustrated by a graph of quantitative conference evaluations, the overall ratings provided for the conference by each of the professional disciplines were at or above 4.0 on a 5.0 scale.

Thus, while participants expressed concern regarding the presentation of information pertaining to data collection, their evaluations were overall extremely positive. These quantitative ratings were then illuminated by the narrative responses participants provided to the open-ended prompts at the conclusion of the evaluation form.

(2) Open-Ended Conference Evaluations

Conference participants also responded to a number of open-ended prompts on the evaluation form. These questions asked about the aspects of the conference they “liked best,” “liked least,” and what they “would have liked to see done differently.” Not surprisingly, the most common response to the second question was that they liked material pertaining to data collection least. Therefore, more instructive for the present purposes are the responses to the questions about what they liked best and what they would have liked to see done differently.

When asked what they “liked best” about the conference, the most common responses were that the participants valued the training content and appreciated the opportunity to communicate and network with others – within their community, within their discipline, and in other communities and professions. In evaluation after evaluation, participants stated that this conference allowed them to increase their knowledge and skills by working cooperatively and learning from each other. This opportunity was unprecedented on a national level, and many of the participants cited how "monumental" this effort was, and how they felt "celebrated" and "honored" to be a part of it. Sample conference evaluations are provided in the conference evaluation report for more detailed review.

When asked what they would like to see done differently, the most common request was for more training on a variety of topics. Participants said they wanted more information on a wide range of specific topics, more practical exercises, and more time for facilitated discussion, both with members of their own community team and with members of their own discipline in other communities. Conference participants also indicated that they wanted to share written materials between the community teams, such as policies, procedures, protocols, and training manuals. Again, sample comments regarding what participants would have liked to see done differently are available for review in the conference evaluation report.

Taken together, the evaluations clearly suggest that the conference was extremely successful at bringing together the various stakeholders envisioned in the original proposal, forging meaningful bonds and networks of communication, and creating a forum for the ongoing technical assistance that will allow the participant-communities to truly “Make a Difference.” This conclusion is further supported by the overwhelmingly positive post-conference comments received in unsolicited communications from both conference participants and members of the EVAWI Board of Directors who were in attendance. These comments are available in the conference evaluation report.

(3) Preliminary Data Collection

In the original vision for the Making a Difference project, research was seen as central -- both for establishing a baseline for current performance and for documenting the progress of any resulting reforms. However, the capacity for data collection varied widely across both discipline and community. Some agencies were able to capture very detailed data on their sexual assault caseload by simply submitting queries to a well-established, existing database. Others could collect the information only by manually pulling the case files and recording the data values by hand. Still others were not able to collapse the data into the specific categories that were defined for this purpose, as their agency coded or indexed the case files using different categories.

Yet every single participating agency was required to submit (prior to the conference) a detailed breakdown of their entire sexual assault caseload from January of 2004 to June of 2004, using descriptive categories provided by the research team for the Making a Difference project. These categories differed by discipline, so that the data provided was different for law enforcement, prosecution, forensic medicine, advocacy, and other victim service organizations. The data included such information as the level of acquaintance of the perpetrator and victim (if any), age and gender of both parties, and any prior criminal activity of the perpetrator.

Clearly, a primary purpose of this pre-conference effort was to evaluate the capacity of each agency for collecting such data. However, it was also designed to motivate the conference participants to create this capacity where it did not previously exist. In other words, many agencies in the participant communities are now able to collect such detailed descriptive information on their sexual assault caseload -- where they were not able to before -- because they were required to do so for participation in the Making a Difference project. This alone is an important indicator of conference success, as meaningful data collection is critical to creating institutional reform and documenting any resulting improvements.

As a secondary purpose, the pre-conference data collection also allowed the research team to finalize the methodology and coding categories. For example, several hours of conference time were spent discussing the criteria for including case files within the data collection efforts, and refining the methodological details for capturing this information in an ongoing way. For many participants, this was both painful and valuable, as evidenced by their conference evaluations. Nevertheless, the process was essential for establishing shared criteria and methods for collecting comparative data from the 8 U.S. communities in the future.

The power and utility of this data simply cannot be overstated. While prior research has reported findings based on similar kinds of data from individual agencies or disciplines, never before has any research project attempted to collect this level of detailed information from the entire spectrum of agencies involved in sexual assault response -- including law enforcement, prosecution, forensic medicine, advocacy, and victim services. With such data in hand, the research team will be able to provide a comprehensive picture of sexual assault incidents in a community, as victims and perpetrators interact with each of the agencies involved. For example, analysis of this data will allow us to better understand how many sexual assault victims report their crime to law enforcement, and how the characteristics of their assaults influence whether their case will proceed through the entire system to prosecution and conviction. We will even be able to link this analysis with comparable detail on the number of sexual assault victims seeking help from advocacy and other victim service organizations, and whether the characteristics of their crimes are similar to or different from those reported to law enforcement and subsequently processed through the criminal justice system. Many participants stated in their conference evaluations that they appreciated the value of the data collection -- while they might not have prior to their involvement in the Making a Difference project. Several even signed up to join a sub-committee to work on refining and improving the data collection methodology for the project.

Finally, it is worth highlighting that the preliminary data collection also demonstrated the exceptional motivation of conference participants, and their skill in grasping difficult material and applying it within their unique professional context. The discrepancies in data collection methodologies were likely inevitable, given the wide variability across individuals, agencies, disciplines, and communities. Yet the success in clarifying these discrepancies and reaching a consensus for future efforts clearly laid the groundwork for unprecedented social action research in the second phase of the project.

(4) Post-Conference Reform Efforts

Perhaps most importantly, the success of the San Diego conference is seen in the reform efforts already underway in the 8 U.S. participant-communities. As one of the most obvious indicators, the lines of communication between conference participants are already well established: project staff and conference participants talk and email frequently to clarify issues raised at the conference, elicit feedback, coordinate activities, and work toward shared efforts. Beyond this, each of the 8 U.S. communities is engaged in significant reform efforts, which are summarized in detail for those interested in determining the real impact of the Making a Difference project.

(5) Other Indicators of Conference Success

There are also a variety of additional indicators that further confirm the sense of positive momentum created by the Making a Difference project. First is the astonishing number of communities that submitted applications to take part. Far surpassing even the expectations of optimistic project staff, a total of 88 U.S. communities applied to participate in the conference. This number is particularly noteworthy given the extensive information required to apply and the level of commitment needed for a community to participate. Further demonstrating their commitment to the vision of the Making a Difference project, several of these communities independently took action to host their own conferences -- in Florida, Wisconsin, and Maryland.

Second is the media interest generated by the conference, both in the host community of San Diego and elsewhere. Before the conference, project staff prepared a press release, distributed widely across the country, to describe the vision of the Making a Difference conference and convey its critical significance. This press release sparked considerable media interest and coverage, including three live television interviews conducted with Joanne Archambault while she hosted the conference in San Diego and an excellent article in the University of South Carolina newspaper. In addition, a follow-up press release was sent to each of the participant-communities after the conference to use in eliciting their own local media coverage. A sample press release and the South Carolina newspaper article are provided in the conference evaluation report.

© Copyright 2018 End Violence Against Women International