conference evaluation
Recently Published Documents


TOTAL DOCUMENTS: 28 (FIVE YEARS: 0)

H-INDEX: 5 (FIVE YEARS: 0)

2017 ◽  
Vol 60 ◽  
pp. 5-8
Author(s):  
Christine R. Schuler ◽  
Dawn N. Castillo ◽  
Cammie Chaumont Menéndez ◽  
Sergey Sinelnikov ◽  
Sydney Webb ◽  
...  

CJEM ◽  
2016 ◽  
Vol 18 (S1) ◽  
pp. S73-S73
Author(s):  
S.H. Yiu ◽  
S. Dewhirst ◽  
C. Lee ◽  
A. Jalaili ◽  
J.R. Frank

Introduction: Traditional post-conference speaker evaluations are inconsistently completed; meanwhile, real-time social media tools such as Twitter are increasingly used at conferences. We sought to determine whether a correlation exists between the traditional conference evaluation of a speaker and the number of real-time tweets the session generated, using data from a CAEP conference. Methods: This study used a retrospective design. The hashtag #CAEP14 was prospectively registered with Symplur, an online Twitter management tool, so that all tweets related to the 2014 CAEP conference were stored. A tweet was associated with a session if it mentioned the speaker's name, or if the tweet's content and timing closely matched a session in the schedule. A tweet classification system was developed to differentiate original tweets from retweets, and quotes from comments generating further discussion. Two authors coded the first 200 tweets together to ensure a uniform coding approach, then independently coded the remaining tweets; discrepancies were resolved by consensus. One author reviewed the post-conference speaker evaluations and abstracted the value corresponding to the question “The speaker was an effective communicator.” We present descriptive statistics and correlation analyses. Results: A total of 3,804 tweets were collected, of which 2,218 (58.3%) were associated with a session. Forty-eight percent of sessions (131 of 274) received at least one tweet, with a mean of 11.7 tweets per session (95% CI 0 to 57.5). In comparison, only 31% of sessions (85 of 274) received a formal post-conference speaker evaluation (p<0.005). For sessions that received at least one traditional post-conference evaluation, there was no significant correlation between the number of tweets and evaluation scores (R=0.087). This may be attributable to the minimal variation among evaluation scores (median = 3.6 out of 5, IQR 3.4 to 3.7).
Conclusion: There was no correlation between the number of real-time tweets and traditional post-conference speaker evaluations. However, many sessions that received no formal speaker evaluation nonetheless generated tweets, and the number of tweets varied widely between sessions. Thus, Twitter metrics may be useful for conference organizers as a supplement to formal speaker evaluations.
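The correlation analysis the abstract reports (tweet counts vs. evaluation scores across sessions) is a standard Pearson correlation. As a minimal sketch, with entirely hypothetical session data chosen only to mirror the abstract's pattern of near-uniform scores (median 3.6, IQR 3.4–3.7) against highly variable tweet counts:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))  # covariance numerator
    sxx = sum((a - mx) ** 2 for a in x)                   # sum of squares, x
    syy = sum((b - my) ** 2 for b in y)                   # sum of squares, y
    return sxy / sqrt(sxx * syy)

# Hypothetical per-session data (NOT from the study): tweet counts vary
# widely while evaluation scores barely vary, so r stays close to zero.
tweets = [0, 2, 5, 11, 40, 57]
scores = [3.5, 3.7, 3.4, 3.6, 3.6, 3.5]
r = pearson_r(tweets, scores)
```

When one variable has almost no spread, the denominator term for that variable shrinks faster than the covariance can compensate in any systematic direction, which is why restricted score variance alone can mask an association.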


2012 ◽  
Author(s):  
Carin R. Espenschied ◽  
Deborah J. MacDonald ◽  
Julie O. Culver ◽  
Sharon Sand ◽  
Karen Hurley ◽  
...  

2007 ◽  
Vol 5 (4) ◽  
pp. 261-270 ◽  
Author(s):  
Diane D Chapman ◽  
Colleen Aalsburg Wiessner ◽  
Julia Storberg-Walker ◽  
Tim Hatcher
