META-ANALYSES OF POSITIVE PSYCHOLOGY INTERVENTIONS ON WELL-BEING AND DEPRESSION: REANALYSES AND REPLICATION

by

Carmela Anna White
B.A. (Hons.), Mount Royal University, 2012

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF ARTS in THE COLLEGE OF GRADUATE STUDIES (Psychology)

THE UNIVERSITY OF BRITISH COLUMBIA (Okanagan)

August 2016

© Carmela Anna White, 2016

Examination Committee

The undersigned certify that they have read, and recommend to the College of Graduate Studies for acceptance, a thesis entitled "Meta-analyses of positive psychology interventions on well-being and depression: Reanalyses and replications," submitted by Carmela A. White in partial fulfillment of the requirements of the degree of Master of Arts.

Dr. Mark Holder, Psychology, Irving K. Barber School of Arts and Sciences (Supervisor, Professor)
Dr. Brian O'Connor, Psychology, Irving K. Barber School of Arts and Sciences (Supervisory Committee Member, Professor)
Dr. Lesley Lutes, Psychology, Irving K. Barber School of Arts and Sciences (Supervisory Committee Member, Professor)
Dr. John Tyler Binfet, Education (University Examiner, Professor)

August 9, 2016 (Date Submitted to Grad Studies)

Abstract

At least since the work of Fordyce (1977), researchers have been interested in the effectiveness of interventions designed to increase well-being. This interest has increased substantially since Seligman and Csikszentmihalyi (2000) coined the term 'positive psychology'. Interventions designed to increase well-being have become known as positive psychology interventions (PPIs). Two highly cited meta-analyses examined the effectiveness of PPIs on well-being and depression: Sin and Lyubomirsky (2009) and Bolier et al. (2013). Whereas Sin and Lyubomirsky (2009) reported relatively large effects of PPIs on well-being (r = .29) and depression (r = .31), Bolier et al. (2013) reported much smaller effects on subjective well-being (r = .17), psychological well-being (r = .10), and depression (r = .11). A detailed examination of the two meta-analyses reveals that the authors employed different approaches, used different inclusion and exclusion criteria, analyzed different sets of studies, described their methods with insufficient detail to clearly compare them, and failed to notice or properly account for significant small sample size bias. The first objective of the current study was to reanalyze the studies selected in each of the published meta-analyses, while taking into account small sample size bias. The second objective was to replicate each meta-analysis by extracting relevant effect sizes directly from the primary studies included in the meta-analyses. The third objective was to conduct a series of new meta-analyses using effect sizes extracted directly from all studies included in the previous meta-analyses. Three previous meta-analyses were identified, reanalyzed, and replicated.
The results of the present study revealed three key findings: (1) many of the primary studies used small sample sizes, (2) small sample size bias was pronounced in many of the analyses, and (3) when small sample size bias was taken into account, the effect of PPIs on well-being was small but significant (r = .10), whereas the effects of PPIs on decreasing depression were not statistically significant (r = .00). Future PPI research needs to focus on (1) increasing the sample sizes of primary studies and (2) assessing cumulative findings from comprehensive meta-analyses that address common issues such as small sample size bias.

Preface

This thesis is original, unpublished, and independent work by the author, Carmela A. White.

Table of Contents

Examination Committee
Abstract
Preface
Table of Contents
List of Tables
List of Figures
Acknowledgements
Dedication
Chapter 1: Introduction
  1.1 What is Well-Being?
  1.2 Interventions that Increase Well-Being
  1.3 Previous Reviews of the Effectiveness of Positive Psychology Interventions
    1.3.1 Sin and Lyubomirsky (2009) Meta-Analysis
    1.3.2 Bolier et al. (2013) Meta-Analysis
  1.4 Limitations of Previous Meta-Analyses
    1.4.1 Method Descriptions
    1.4.2 Small Sample Size Bias and Publication Bias
    1.4.3 Effect Size Calculation
  1.5 The Current Study
Chapter 2: Method
  2.1 Search for Previous Meta-Analyses of PPI
  2.2 Effect Size Calculations
  2.3 Coding of Primary Studies
  2.4 Calculation of Effect Sizes, Missing Data Imputation, and Other Irregularities
  2.5 Statistical Analysis
Chapter 3: Results
  3.1 Sin and Lyubomirsky (2009) Meta-Analysis
    3.1.1 Well-Being
      3.1.1.1 Reanalysis of Reported Data
      3.1.1.2 Complete Replication of Meta-Analysis
    3.1.2 Depression
      3.1.2.1 Reanalysis of Reported Data
      3.1.2.2 Complete Replication of Meta-Analysis
  3.2 Bolier et al. (2013) Meta-Analysis
    3.2.1 Subjective Well-Being
      3.2.1.1 Reanalysis of Reported Data
      3.2.1.2 Complete Replication of Meta-Analysis
    3.2.2 Psychological Well-Being
      3.2.2.1 Reanalysis of Reported Data
      3.2.2.2 Complete Replication of Meta-Analysis
    3.2.3 Depression
      3.2.3.1 Reanalysis of Reported Data
      3.2.3.2 Complete Replication of Meta-Analysis
  3.3 Weis and Speridakos (2011) Meta-Analysis
    3.3.1 Life Satisfaction
      3.3.1.1 Reanalysis of Reported Data
      3.3.1.2 Complete Replication of Meta-Analysis
  3.4 Meta-Analyses Using all Studies in the Previous Meta-Analyses
    3.4.1 Well-being
    3.4.2 Depression
    3.4.3 Satisfaction With Life Scale (SWLS; Diener, Emmons, Larsen, & Griffin, 1985)
    3.4.4 Effects of Intervention Administration Setting on Well-being
Chapter 4: Discussion
  4.1 Summary of Main Findings and Comparison with Previous Literature
  4.2 Implications
  4.3 Limitations
Chapter 5: Conclusions
  5.1 Future Directions
References

List of Tables

Table 1. Coded characteristics and coding descriptions
Table 2. Effect sizes determined by the current study, for each well-being measure and each study included in Sin and Lyubomirsky (2009) well-being meta-analysis
Table 3. Effect sizes determined by the current study, for each depression measure and each study included in Sin and Lyubomirsky (2009) depression meta-analysis
Table 4. Effect sizes determined by the current study, for each subjective well-being measure and each study included in Bolier et al. (2013) subjective well-being meta-analysis
Table 5. Effect sizes determined by the current study, for each psychological well-being measure and each study included in Bolier et al. (2013) psychological well-being meta-analysis
Table 6. Effect sizes determined by the current study, for each depression measure and each study included in Bolier et al. (2013) depression meta-analysis
Table 7. Effect sizes determined by the current study, for each life satisfaction measure and each study included in Weis and Speridakos (2011) life satisfaction meta-analysis
Table 8. Summary of reanalyses of the previous meta-analyses
Table 9. Summary of replications of the previous meta-analyses
Table 10. Summary of meta-analyses using all studies in the previous meta-analyses

List of Figures

Figure 1. Funnel plot of well-being effect sizes from Sin and Lyubomirsky (2009)
Figure 2. Trim-and-fill plot of well-being effect sizes from Sin and Lyubomirsky (2009)
Figure 3. Forest plot of well-being effect sizes from Sin and Lyubomirsky (2009)
Figure 4. Cumulative meta-analysis of well-being effect sizes from Sin and Lyubomirsky (2009)
Figure 5. Limit meta-analysis of well-being effect sizes from Sin and Lyubomirsky (2009)
Figure 6. Reanalysis of Sin and Lyubomirsky (2009) well-being effect sizes: Forest plot
Figure 7. Reanalysis of Sin and Lyubomirsky (2009) well-being effect sizes. Top left panel shows the distribution of study sizes. Top right panel shows the relationship between effect size and study size. Bottom left panel shows the funnel plot. Bottom right panel shows the results of the limit meta-analysis
Figure 8. Scatterplot of well-being effect sizes determined in the current replication vs. Sin and Lyubomirsky (2009) effect sizes
Figure 9. Replication of Sin and Lyubomirsky (2009) meta-analysis for well-being: Forest plot
Figure 10. Replication of Sin and Lyubomirsky (2009) meta-analysis for well-being. Top left panel shows the distribution of study sizes. Top right panel shows the relationship between effect size and study size. Bottom left panel shows the funnel plot. Bottom right panel shows the results of the limit meta-analysis
Figure 11. Reanalysis of Sin and Lyubomirsky (2009) depression effect sizes: Forest plot
Figure 12. Reanalysis of Sin and Lyubomirsky (2009) depression effect sizes. Top left panel shows the distribution of study sizes. Top right panel shows the relationship between effect size and study size. Bottom left panel shows the funnel plot. Bottom right panel shows the results of the limit meta-analysis
Figure 13. Scatterplot of depression effect sizes determined in the current replication vs. Sin and Lyubomirsky (2009) effect sizes
Figure 14. Replication of Sin and Lyubomirsky (2009) depression effect sizes: Forest plot
Figure 15. Replication of Sin and Lyubomirsky (2009) meta-analysis for depression. Top left panel shows the distribution of study sizes. Top right panel shows the relationship between effect size and study size. Bottom left panel shows the funnel plot. Bottom right panel shows the results of the limit meta-analysis
Figure 16. Reanalysis of Bolier et al. (2013) SWB effect sizes: Forest plot
Figure 17. Reanalysis of Bolier et al. (2013) SWB effect sizes. Top left panel shows the distribution of study sizes. Top right panel shows the relationship between effect size and study size. Bottom left panel shows the funnel plot. Bottom right panel shows the results of the limit meta-analysis
Figure 18. Scatterplot of SWB effect sizes determined in the current replication vs. Bolier et al. (2013) effect sizes
Figure 19. Replication of Bolier et al. (2013) meta-analysis for SWB: Forest plot
Figure 20. Replication of Bolier et al. (2013) meta-analysis for SWB. Top left panel shows the distribution of study sizes. Top right panel shows the relationship between effect size and study size. Bottom left panel shows the funnel plot. Bottom right panel shows the results of the limit meta-analysis
Figure 21. Reanalysis of Bolier et al. (2013) PWB effect sizes: Forest plot
Figure 22. Reanalysis of Bolier et al. (2013) PWB effect sizes. Top left panel shows the distribution of study sizes. Top right panel shows the relationship between effect size and study size. Bottom left panel shows the funnel plot. Bottom right panel shows the results of the limit meta-analysis
Figure 23. Scatterplot of PWB effect sizes determined in the current replication vs. Bolier et al. (2013) effect sizes
Figure 24. Replication of Bolier et al. (2013) meta-analysis for PWB: Forest plot
Figure 25. Replication of Bolier et al. (2013) meta-analysis for PWB. Top left panel shows the distribution of study sizes. Top right panel shows the relationship between effect size and study size. Bottom left panel shows the funnel plot. Bottom right panel shows the results of the limit meta-analysis
Figure 26. Reanalysis of Bolier et al. (2013) depression effect sizes: Forest plot
Figure 27. Reanalysis of Bolier et al. (2013) depression effect sizes. Top left panel shows the distribution of study sizes. Top right panel shows the relationship between effect size and study size. Bottom left panel shows the funnel plot. Bottom right panel shows the results of the limit meta-analysis
Figure 28. Scatterplot of depression effect sizes determined in the current replication vs. Bolier et al. (2013) effect sizes
Figure 29. Replication of Bolier et al. (2013) meta-analysis for depression: Forest plot
Figure 30. Replication of Bolier et al. (2013) meta-analysis for depression. Top left panel shows the distribution of study sizes. Top right panel shows the relationship between effect size and study size. Bottom left panel shows the funnel plot. Bottom right panel shows the results of the limit meta-analysis
Figure 31. Reanalysis of Weis and Speridakos (2011) life satisfaction effect sizes: Forest plot
Figure 32. Reanalysis of Weis and Speridakos (2011) life satisfaction effect sizes. Top left panel shows the distribution of study sizes. Top right panel shows the relationship between effect size and study size. Bottom left panel shows the funnel plot. Bottom right panel shows the results of the limit meta-analysis
Figure 33. Scatterplot of life satisfaction effect sizes determined in the current replication vs. Weis and Speridakos (2011) effect sizes
Figure 34. Replication of Weis and Speridakos (2011) meta-analysis for life satisfaction: Forest plot
Figure 35. Replication of Weis and Speridakos (2011) meta-analysis for life satisfaction. Top left panel shows the distribution of study sizes. Top right panel shows the relationship between effect size and study size. Bottom left panel shows the funnel plot. Bottom right panel shows the results of the limit meta-analysis
Figure 36. Replication of all previous meta-analyses for well-being, combined (Bolier et al., 2013; Sin & Lyubomirsky, 2009; Weis & Speridakos, 2011): Forest plot
Figure 37. Replication of all previous meta-analyses for well-being, combined (Bolier et al., 2013; Sin & Lyubomirsky, 2009; Weis & Speridakos, 2011). Top left panel shows the distribution of study sizes. Top right panel shows the relationship between effect size and study size. Bottom left panel shows the funnel plot. Bottom right panel shows the results of the limit meta-analysis
Figure 38. Replication of all previous meta-analyses for depression (Bolier et al., 2013; Sin & Lyubomirsky, 2009): Forest plot
Figure 39. Replication of all previous meta-analyses for depression (Bolier et al., 2013; Sin & Lyubomirsky, 2009). Top left panel shows the distribution of study sizes. Top right panel shows the relationship between effect size and study size. Bottom left panel shows the funnel plot. Bottom right panel shows the results of the limit meta-analysis
Figure 40. Well-being effect sizes that were calculated from SWLS only: Forest plot
Figure 41. Well-being effect sizes that were calculated from SWLS only. Top left panel shows the distribution of study sizes. Top right panel shows the relationship between effect size and study size. Bottom left panel shows the funnel plot. Bottom right panel shows the results of the limit meta-analysis

Acknowledgements

First and foremost, I owe special thanks to my supervisor and mentor Dr. Mark Holder for his consistent and continuous support, guidance, patience, and humour throughout my graduate training. I am truly grateful. I would also like to thank my committee members Drs. Lesley Lutes and Brian O'Connor for their encouragement, insightful comments, and unique perspectives. Together, this team allowed me to grow, be challenged, and thrive. Thank you.

In addition to this research team, I would also like to graciously thank the Social Sciences and Humanities Research Council (SSHRC) for financially supporting me throughout this research endeavour.

Next, I would like to thank my fellow students and clinical mentors. I could not imagine having gone through this process without such support, understanding, empathy, and laughter. You have all uniquely contributed to my well-being and success, be it personally, academically, and/or professionally. Thank you!

To my family and friends, near and far, long-time and new, who have supported me in any way they could (more than each of you are probably aware of) – I thank you! Although life gets busy and staying in touch is sometimes difficult, I really want each of you to know (you know who you are) that I genuinely cannot express enough gratitude for all that you have done and continue to do. Love you all.

Lastly, I would like to thank my close mentor and friend, Dr. Bob Uttl. You have believed in me since day one, encouraged me, challenged me, and have graciously continued to do so.
I am incredibly lucky to have had someone who saw an academic side of me that I never believed or thought existed, never mind would have pursued. You continue to remind me of the 'bigger picture'... for all of it, I thank you Bob!

Dedication

"The mind is the limit. As long as you can envision the fact that you can do something, you can do it, as long as you really believe 100%."
Arnold Schwarzenegger

My husband, my partner in life, my friend, and my biggest fan - Dave, this work is dedicated to you. Words truly cannot express the degree of gratitude and appreciation I feel for ALL that you have done. This incredible milestone would not have been achieved had it not been for your continuous support, love, dedication, and generosity. You have selflessly helped provide me such great opportunities in order for me to pursue this dream. You continue to believe in me and remind me of the end goal. For all of this and for so much more, I thank you. Love you.
~ LAL

Chapter 1: Introduction

Traditionally, disease and deficits have been the focus of research and intervention efforts in psychology. As a result, mental health has been conceptualized as the absence of negative symptomology (Seligman, Rashid, & Parks, 2006). Although this focus has been invaluable to psychology, the expanding field of positive psychology offers a complementary approach by focusing on what comprises well-being, including strengths, life satisfaction, happiness, and positive behaviours (Seligman & Csikszentmihalyi, 2000; Seligman, Steen, Park, & Peterson, 2005). These two sides of psychology provide a well-balanced view and understanding of humanity (Seligman et al., 2005) that is in line with the World Health Organization view that "Health is a state of complete physical, mental, and social well-being and not merely the absence of disease or infirmity" (World Health Organization [WHO], 2015). Accordingly, although decreasing or eliminating negative symptomology is necessary, it is not sufficient to achieve overall well-being. It is also critical to focus on prevention and intervention strategies that create, build upon, and foster well-being. Positive psychology interventions (PPIs) should be used as supplemental approaches to those that address poor health. Rather than focusing directly on decreasing negative symptomology, PPIs aim to increase positive affect, meaning in life, and engagement (Seligman et al., 2006).

1.1 What is Well-Being?

Seligman (2011) identified five essential factors of well-being: Positive emotions, Engagement, Relationships, Meaning, and Accomplishment (PERMA). Positive emotions can be any positive feelings including peace, satisfaction, hope, and pleasure. Engagement occurs when one is truly immersed in the present activity or situation and may enter a state of "flow" (see Csikszentmihalyi, 1990 for a definition of "flow"). Relationships are positive connections with others where one can share their personal accomplishments and setbacks. Meaning is derived when one dedicates their time to something greater than themselves (e.g., faith, community, volunteering, and family), which may lead to feeling a sense of belonging. Accomplishment is achieved when one attains mastery, success, and competency (Seligman, 2011).
Well-being, therefore, is a latent construct that incorporates both subjective and objective elements (Seligman, 2011). Subjective well-being (SWB) is the emotional and cognitive interpretation of the quality of one's life, and is often assessed by examining one's happiness, affect, and satisfaction with life (Diener, 2000). Alternatively, psychological well-being (PWB) includes positive relations, personal maturity, growth, and independence (Ryff, 1989). PWB reflects a broader, more multidimensional construct than SWB. Ryff developed a model of PWB with six dimensions: (1) Self-acceptance (viewing oneself positively); (2) Positive relations with others (the ability to be empathetic and connect with others in more than superficial ways); (3) Autonomy (self-motivation and independence); (4) Environmental mastery (the ability and maturity to control and choose environments that are most appropriate); (5) Purpose in life (a sense of belonging, significance, and chosen direction); and (6) Personal growth (continuously seeking growth and optimal functioning). In sum, well-being is a broad, multidimensional construct that includes one's affect, satisfaction with life, happiness, engagement with others, personal growth, and meaning in life.

A life of happiness and well-being is a common desire across cultures (Diener, 2000) and social status (Kim-Prieto, Diener, Tamir, Scollon, & Diener, 2005). There are a host of benefits associated with increased happiness and well-being: improved physical health (Diener & Chan, 2011; Howell, Kern, & Lyubomirsky, 2007; Veenhoven, 2007); decreased risk of mortality for those suffering with physical illness (Lamers, Bolier, Westerhof, Smit, & Bohlmeijer, 2012); disease prevention (Cohen, Doyle, Turner, Alper, & Skoner, 2003); reduction in biomarkers in cardiac patients (Nikrahan et al., 2016); prevention of mental illness (Keyes, Dhingra, & Simoes, 2010; Wood & Joseph, 2010); better coping and resiliency (Fredrickson, 2001; Tugade & Fredrickson, 2004); and greater workplace productivity and satisfaction (Boehm & Lyubomirsky, 2009; Keyes & Grywacz, 2005). Thus, interventions to increase happiness and well-being are needed given the long-term benefits noted across several life domains.

1.2 Interventions that Increase Well-Being

Since the inception of positive psychology in 2000 (Seligman & Csikszentmihalyi, 2000), extensive research has focused on developing and investigating the efficacy of interventions designed to increase well-being in healthy, subclinical, and clinical populations. The main focus of these interventions is to increase positive emotions and experiences and, thereby, decrease negative symptomology in subclinical and/or clinical populations. For healthy populations, the idea is to bring clients from a 'languishing' state of being to a 'flourishing' state of being (Keyes, 2005). For subclinical and clinical populations, the goals are to significantly reduce negative symptomology while also increasing well-being (Csikszentmihalyi, 2014). Interventions used to increase well-being are typically easy to follow, self-administered, and brief.

Fordyce (1977) developed the first documented well-being intervention to increase happiness, which comprised 14 techniques including spending more time with others, enhancing close relationships, thinking positively, admiring and appreciating happiness, and refraining from worrying.
More recent and common interventions developed and tested by Seligman, Steen, Park, and Peterson (2005) included: (1) Gratitude visits/letters, in which participants write and deliver a letter of gratitude to someone who has been particularly kind or helpful in the past, but who was never suitably thanked; (2) Three good things, in which, each night for one week, participants write down three good things that went well that day and identify the reasons these things went well; (3) You at your best, in which participants write down a story of when they were at their best, identify the personal strengths that were utilized in the story, and then read this story and review their personal strengths each day for one week; and (4) Using signature strengths, in which participants complete and receive feedback from the character strengths inventory (Peterson, Park, & Seligman, 2005), and then use one of their top five character strengths in a different way each day for one week. There are many other similar interventions, such as loving kindness meditation (Fredrickson, Cohn, Coffey, Pek, & Finkel, 2008), acts of kindness (Lyubomirsky, Sheldon, & Schkade, 2005), hope therapy (Cheavens, Feldman, Gum, Michael, & Snyder, 2006), optimism exercises (Sheldon & Lyubomirsky, 2006), mindfulness-based strength practices (Niemiec, Rashid, & Spinella, 2012), well-being therapy (Fava, Rafanelli, Cazzaro, Conti, & Grandi, 1998; Fava & Ruini, 2003), and positive psychotherapy (Seligman et al., 2006).

1.3 Previous Reviews of the Effectiveness of Positive Psychology Interventions

Two frequently cited meta-analyses have examined the overall effectiveness and moderating variables of positive psychology interventions: Sin and Lyubomirsky (2009) and Bolier et al. (2013).

1.3.1 Sin and Lyubomirsky (2009) Meta-Analysis

Sin and Lyubomirsky collected and analyzed 49 independent studies that used PPIs designed to increase well-being. Additionally, they analyzed 25 studies using PPIs that assessed possible decreases in depressive symptomology. They conducted unweighted, random, and fixed model meta-analyses but reported only unweighted average rs as estimated effect sizes for both well-being and depression outcomes. Their inclusion criteria included the following: (a) all PPI studies published in English between 1977 and 2008, (b) studies where the primary purpose of the intervention and/or activity was to increase well-being (e.g., increase happiness, and increase positive emotions, behaviours, and cognitions) rather than decrease negative symptomology, (c) studies that reported pre- and post-comparisons of measures of well-being or depression, (d) studies that compared findings to some form of a control group, and (e) studies that provided enough information to determine an effect size comparing treatment and control groups. Their exclusion criteria included all studies that were directed at improving one's physical health or altering mood states.

Sin and Lyubomirsky (2009) also examined the effects of six moderators (three 'participant' moderators and three 'methodological' moderators): (1) whether the participants were depressed or non-depressed, (2) the age of participants, (3) whether the participant chose to take part in the intervention or not, (4) whether the intervention was provided as individual therapy, group therapy, or self-administered, (5) the length of the intervention, and (6) the nature of the 'control' group (e.g., placebo group, or treatment as usual). They conducted both fixed and random effects models.
Lastly, in an attempt to address publication bias, they used the Fail-safe N method (Rosenthal, 1979, 1991; Rosenthal & Rosnow, 2008). Publication bias occurs when studies with no effects or small effects are not published, and therefore are not found and included in meta-analyses. This bias may lead to an overestimation of the overall effect size. To address this potential problem, the Fail-safe N method involves calculating the number of original studies with small or zero effect sizes that would need to be found and analyzed in the meta-analysis before the p value reaches non-significance (Rosenthal, 1979). The underlying rationale is that if only a small number of studies are needed to nullify the finding of significance, then it is very possible and realistic that the true effect size could be zero. However, if a large number of studies are required to nullify the effect, then it is less likely that the true effect size is zero.

Sin and Lyubomirsky (2009) noted the presence of publication bias in their data. However, based on the Fail-safe N method, they concluded that 2,519 studies investigating increasing well-being and 420 studies investigating decreasing depressive symptoms would be needed to nullify their findings. For well-being, the meta-analysis revealed a significant effect size of r = .29 (equivalent to d = .61) based on 49 studies. For decreasing depressive symptomology, a significant effect size of r = .31 (equivalent to d = .65) was found based on 25 studies.

The participant moderator analyses revealed that depression, age, and self-selection were statistically significant moderators using the fixed effects model, but only age was a statistically significant moderator using the random effects model. Similarly, the methodological moderator analyses revealed that intervention format, intervention duration, and comparison group type were statistically significant moderators using the fixed effects model, but only intervention format was a statistically significant moderator using the random effects model. In general, the moderator analyses were based on a very small number of primary studies within each moderator subgroup. For example, the age moderator analyses had only three primary studies in each of the two most extreme groups (i.e., the child/adolescent group and the older adult group).

Although this meta-analysis was the first meta-analytic review of PPIs and provided a beneficial foundation, Bolier et al. (2013) noted several limitations, including the inclusion of primary studies that used both randomized and quasi-experimental designs, the failure to assess study quality in moderator analyses, and the inclusion of interventions that were not developed directly within the positive psychology schema; therefore, the researchers decided to conduct another meta-analysis with similar goals that addressed the aforementioned limitations.
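The Fail-safe N logic described above can be made concrete with a short sketch. The Python code below is illustrative only (it is not the computation reported by Sin and Lyubomirsky, 2009, nor the analysis code used in this thesis); it assumes that per-study z statistics are available and that a one-tailed alpha of .05 (critical z = 1.645) is used, and the z values in the example are hypothetical.

```python
def rosenthal_failsafe_n(z_values, z_crit=1.645):
    """Rosenthal's (1979) fail-safe N: the number of unpublished studies with an
    average z of zero that would pull the Stouffer combined z below the one-tailed
    .05 critical value (1.645)."""
    k = len(z_values)
    z_sum = sum(z_values)
    # The combined z with n extra null studies is z_sum / sqrt(k + n);
    # solving z_sum / sqrt(k + n) = z_crit for n gives:
    n = (z_sum / z_crit) ** 2 - k
    return max(0.0, n)

# Hypothetical per-study z statistics, for illustration only.
print(round(rosenthal_failsafe_n([2.1, 1.8, 2.5, 0.4, 3.0]), 1))
```

As the derivation in the comments shows, the method depends entirely on statistical significance and on the assumption that the missing studies average a zero effect, which is exactly the weakness discussed in section 1.4.2 below.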
1.3.2 Bolier et al. (2013) Meta-Analysis

Bolier and colleagues (2013) conducted a weighted, random effects model meta-analysis examining the effects of PPIs in randomized controlled trials only. Their inclusion criteria for each study were as follows: (a) the study had to examine the effects of a PPI, defined as aiming to increase well-being (e.g., increased happiness as well as increased positive emotions, behaviours, and cognitions) rather than to decrease negative symptomology, and the intervention had to be developed within the conceptual framework of positive psychology; (b) the study had to use randomization and there had to be a comparative control group; (c) the publication had to be peer reviewed; (d) SWB, PWB, or depression had to be measured as an outcome variable; and (e) the publication had to provide enough information to obtain an effect size comparing treatment and control groups. Their exclusion criteria consisted of three components: (1) studies that used physical activity to increase well-being were excluded; (2) studies that employed interventions such as mindfulness, meditation, forgiveness therapy, and reminiscence interventions were excluded (Bolier et al. reasoned that "extensive meta-analyses have already been published for these types of interventions," p. 3); and (3) studies that sampled from "diseased populations" that are not founded within positive psychology theory were excluded. They examined six moderators: (1) whether the participant chose to take part in the intervention or not; (2) the length of the intervention (less than four weeks, four to eight weeks, more than eight weeks); (3) whether the intervention was provided as individual therapy, group therapy, or self-administered; (4) how the sample was recruited (community, internet, referral, university); (5) whether the sample had psychosocial problems; and (6) study quality rating (conducted as part of their meta-analysis).

A total of 39 studies were included in their final analyses (Bolier et al., 2013). They found effect sizes of d = .34 (equivalent to r = .17), d = .20 (equivalent to r = .10), and d = .23 (equivalent to r = .11) for SWB, PWB, and depression, respectively. After finding heterogeneity, they removed outlier effect sizes, which decreased all the effect sizes (from d = .34 to .26 (r = .13) for SWB, from d = .20 to .17 (r = .08) for PWB, and from d = .23 to .18 (r = .09) for depression). As expected, publication bias was found for all three of their outcome measures, but less so for SWB. The Trim and Fill method imputed three studies for PWB and five studies for depression, reducing their effects from d = .20 to .16 (r = .08) and from d = .23 to .16 (r = .08), respectively. It is not clear from the article whether they removed the outliers before or after conducting the Trim and Fill method.

Their moderator analyses revealed significantly higher effects on depression for five of their six moderators: (1) interventions of longer duration were more effective, (2) individual therapy was more effective than group therapy or self-administered formats, (3) participants referred by a health care practitioner benefited most, (4) samples selected from groups with psychosocial problems demonstrated larger effect sizes, and (5) lower quality studies revealed larger effect sizes. In contrast, there were no significant moderator effects for SWB and PWB. However, many moderator analyses were performed on a small number of primary studies, and many of them had only one or two primary studies in moderator subgroups. Therefore, these analyses are difficult to interpret.
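Throughout this section, effect sizes are quoted in both the r and d metrics. The standard conversions between them (Borenstein et al., 2009) are shown in the short sketch below; it is illustrative only and assumes approximately equal group sizes, and the printed values simply reproduce the equivalences quoted above.

```python
import math

def r_to_d(r):
    """Convert a correlation effect size r to Cohen's d (assuming equal group sizes)."""
    return 2.0 * r / math.sqrt(1.0 - r ** 2)

def d_to_r(d):
    """Convert Cohen's d back to r (assuming equal group sizes)."""
    return d / math.sqrt(d ** 2 + 4.0)

print(round(r_to_d(0.29), 2))   # 0.61, matching the Sin and Lyubomirsky (2009) well-being result
print(round(d_to_r(0.34), 2))   # 0.17, matching the Bolier et al. (2013) SWB result
```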
1.4 Limitations of Previous Meta-Analyses

Bolier and colleagues (2013) conducted an improved meta-analysis that addressed some of the limitations of Sin and Lyubomirsky's (2009) earlier meta-analysis. However, some limitations were still present and unaddressed. First, both previous meta-analyses described aspects of their methodology in so little detail that those aspects are difficult to replicate. Second, the impact of small sample size bias was largely overlooked. Third, effect size calculations were not clearly described, and the effect sizes extracted by Sin and Lyubomirsky (2009) and Bolier et al. (2013) were often substantially different for the same study. Each of these critical issues is described in more detail next.

1.4.1 Method Descriptions

It is well established that many published meta-analyses are so poorly described that a reader does not have enough information to understand what was done, what the reported results mean, and how to replicate these meta-analyses (Moher, Liberati, Tetzlaff, Altman, & PRISMA Group, 2009; Mulrow, 1987; Sacks, Barrier, Reitman, Ancona-Berk, & Chalmers, 1987; Sacks, Reitman, Pagano, & Kupelnick, 1996). This lack of methodological detail and clarity has led to a number of guidelines intended to improve transparency and methodological clarity when reporting meta-analyses. Task forces and guidelines have been established and recommended to ensure transparency in conducting research, particularly for meta-analyses and systematic reviews (Moher et al., 2009). Not only is clarity critical for the reader's conceptualization and understanding of the research design and methodology, it is also essential for replication purposes. Unfortunately, a number of methodological and conceptual factors were unclear in the positive psychology meta-analyses discussed above. A few of these issues are discussed next.

The literature search described in Sin and Lyubomirsky's (2009) meta-analysis is not replicable for two reasons: (1) the search parameters are not sufficiently described, and (2) the search strategy included searching whatever was available in Sin and Lyubomirsky's private libraries and gathering studies from their colleagues. Similarly, the literature search described in Bolier et al. (2013) was also not replicable. For example, although Bolier et al. (2013) listed numerous terms they used in conducting their searches, they did not specify how they combined them when conducting their searches. Moreover, Bolier et al.'s exclusion criteria are difficult to defend for two reasons. First, Bolier et al. excluded meditation, mindfulness, forgiveness, and life-review interventions because previous reviews and meta-analyses had already been conducted for these types of interventions. Excluding relevant studies, regardless of the reason, is problematic because it results in a meta-analysis of only some PPIs. Second, Bolier et al. also substantially truncated their search by considering only studies from 1998 onward ("the start of the positive psychology movement," p. 2). This eliminates interventions designed to increase well-being prior to 1998, including the seminal work of Fordyce in 1977 and 1983 (Schueller, Kashdan, & Parks, 2014). Schueller et al. (2014) presented a critical review of the issues and implications of limiting literature searches of PPIs in this way.
In addition, Sin and Lyubomirsky (2009) reported only unweighted averaged rs as effect sizes for well-being, r = .29 (d = .61), and for depression, r = .31 (d = .65), even though they provided p values for both fixed and random effects models. It is unclear from the article how these unweighted averaged rs were actually determined and why fixed and random effects model effect size estimates were not provided.

Finally, though Bolier et al. (2013) improved on Sin and Lyubomirsky's (2009) analyses by using the Trim and Fill method and by examining outliers, their reporting of these findings is limited. In their results section they reported smaller effect sizes after correcting for publication bias and outliers, but failed to mention these findings in the abstract or discussion sections of their article. Rather, they continued to report the effect sizes that were inflated by small sample size and publication bias. Readers are required to read the entire document, particularly the results section, in order to learn about these inflated effect sizes. Readers who skimmed the article, or only read the abstract or discussion, would not be aware that the effect sizes reported in those sections were incorrect.

1.4.2 Small Sample Size Bias and Publication Bias

Small sample size bias (also called small study bias) occurs when smaller studies (with less precise findings) report larger effects than larger studies (with more precise findings). Small sample size bias is frequently caused by publication bias. It is well established that journals are much more likely to publish studies with statistically significant findings than studies reporting null effects (Hedges, 1989). As a result, small studies typically report much larger effect sizes than larger studies. In turn, small sample size bias has become a significant problem in meta-analyses, and numerous methods have been developed for identifying and estimating effect sizes in the presence of small sample size bias (Borenstein, Hedges, Higgins, & Rothstein, 2009).

To identify whether small sample size bias is present, one of the first steps is to plot the data in various ways. A simple scatterplot of effect sizes against study sizes may be the first step. Similar to a simple scatterplot, a funnel plot is a scatterplot that graphically displays the relationship between estimated effect sizes on the x-axis and sample size or precision on the y-axis. Typically, larger studies appear at the top and cluster toward the mean effect size, whereas smaller studies usually appear at the bottom of the graph and are much more spread out due to the greater sampling error variability that exists in smaller samples (Borenstein et al., 2009). When the funnel plot appears symmetrical, it indicates an absence of sample size bias. However, if the plot appears to be asymmetrical, it indicates small sample size bias (frequently caused by publication bias). For example, if the plot is missing more studies near the bottom and on only one side (where nonsignificant studies would have been plotted had they been published), this indicates the presence of small sample size bias (Light & Pillemer, 1984).

Sin and Lyubomirsky (2009) noted asymmetry in a funnel plot of their data even though they did not include the funnel plot in their article. However, using the Fail-safe N, they argued that even though publication bias may be present it is "...not large enough to render the overall results nonsignificant" (Sin & Lyubomirsky, 2009, p. 477). Unfortunately, the Fail-safe N method is no longer considered useful in assessing the significance of small sample bias because it considers only statistical significance rather than substantive or practical significance, and it improperly assumes that the effect sizes in the unpublished studies are zero (Borenstein et al., 2009).

Figure 1 shows the funnel plot of well-being data taken from Sin and Lyubomirsky's (2009) Table 1. The funnel plot shows a clear asymmetry, indicating the presence of substantial small sample size bias. The regression test of funnel plot asymmetry confirmed the presence of asymmetry, t(47) = 4.46, p < .001. Thus, it is necessary to account for small sample size bias when estimating a well-being effect size.

One of the most widely used methods to estimate an effect size in the presence of small sample size bias is the Trim-and-Fill method (Duval & Tweedie, 2000). The Trim-and-Fill method is an iterative process that removes ('trims') the small sample studies that are causing the asymmetry (as seen in funnel plots) and re-computes the effect size each time, creating, in theory, an estimated unbiased effect size. This process continues until the funnel plot appears symmetrical. Next, it replaces ('fills') the studies that were removed along with their mirror images (to restore symmetry) and then estimates the unbiased effect size. Figure 2 shows the result of a Trim-and-Fill analysis of Sin and Lyubomirsky's well-being data. The solid dark circles are the estimated effect sizes for the studies in their meta-analysis, and the empty circles are imputed studies required to restore the symmetry. The dotted vertical line shows the estimated effect size when the small sample size bias is taken into account. The Trim-and-Fill method resulted in the imputation of 16 missing studies and an estimated random effects weighted r = .13, which was substantially lower than the unweighted mean of r = .29 reported by Sin and Lyubomirsky (2009). However, despite its widespread use, the Trim-and-Fill method has been criticized because it still inflates the estimated effect sizes (Peters, Sutton, Jones, Abrams, & Rushton, 2007; Terrin, Schmid, Lau, & Olkin, 2003).

Figure 3 shows that small sample size effects can be identified even in the forest plot of Sin and Lyubomirsky's well-being data. Forest plots are graphical representations of all calculated effect sizes from the meta-analysis. The squares represent the effect sizes and the horizontal lines represent the 95% confidence intervals. The size of each square indicates the size of the sample; larger squares indicate larger sample sizes and smaller squares indicate smaller samples. Figure 3 indicates that effect sizes from large sample studies were closer to zero, whereas small sample studies resulted in much larger effects.

Forest plots have also been adopted for depicting the results of cumulative meta-analyses designed to estimate effect sizes in the presence of small sample bias. Figure 4 is a cumulative meta-analysis forest plot for the well-being effect sizes reported by Sin and Lyubomirsky (2009). Briefly, this type of cumulative meta-analysis involves entering the study with the largest sample size first and plotting that effect size. Next, the study with the second largest sample size is entered and the meta-analysis of those two effect sizes is calculated and plotted. Then the study with the third largest sample size is entered and the meta-analysis of all three effect sizes is calculated and plotted. This process is repeated until no additional studies remain to be entered.
As shown in Figure 4, the meta-analyses of the largest studies resulted in an effect size of approximately r = .10 until studies with smaller samples (fewer than 100 participants) were added one by one to the meta-analysis. At that point, the estimated effect sizes began to drift towards the right side of the forest plot, indicating increasing effect sizes. The best estimate of an effect size in the presence of a small-study effect is the effect size based on the studies prior to the start of the drift, or about r = .10 (Borenstein et al., 2009). This is considerably smaller than the effect size reported by Sin and Lyubomirsky (2009).

Stanley and Doucouliagos (2014) argued that estimates based on only the top 10% of the most precise studies tend to perform best when estimating effect sizes in the presence of small sample size bias (the TOP10 estimate). Sin and Lyubomirsky's (2009) meta-analysis of well-being included 49 studies, and thus, the TOP10 estimate based on the 5 most precise studies is r = .12.

However, the most advanced methods for estimating effect sizes in the presence of small study size bias are computationally intensive and have been developed only recently. Most significantly, Rücker, Schwarzer, Carpenter, Binder, and Schumacher (2011) introduced a limit meta-analysis that first estimates shrunken study effects using empirical Bayes estimates of study effects after taking into account small-study effects, and then calculates adjusted effect size estimates and associated confidence intervals. Figure 5 shows the results of the limit meta-analysis applied to the well-being effect sizes as reported by Sin and Lyubomirsky (2009). The limit meta-analysis indicates that the effect size of PPIs on well-being was nearly zero.

In contrast to Sin and Lyubomirsky (2009), Bolier et al. (2013) addressed publication bias by computing Orwin's fail-safe number and by using the Trim and Fill method (see above; Duval & Tweedie, 2000). Orwin's fail-safe number (Orwin, 1983) addresses the problems with the Fail-safe N method by allowing the researcher to specify the mean effect size in the missing studies rather than assuming that value to be zero. Furthermore, the researcher can also specify the threshold effect size that may be substantively important when determining how many missing studies are required to cause concern, rather than looking at statistical significance alone (Borenstein et al., 2009). Though Orwin's fail-safe number and the Trim and Fill method are better than the Fail-safe N method for addressing publication bias, these approaches have their limitations and have been superseded by the methods described above (Borenstein et al., 2009; Sterne, Egger, & Smith, 2001). Thus, it is unclear whether a reanalysis of Bolier et al.'s (2013) data using more appropriate methods for taking small sample size effects into account would confirm their findings or result in smaller effect size estimates.
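Orwin's fail-safe number itself follows from a simple weighted-average argument, as the minimal sketch below shows. The input values (39 studies, an observed d of .34, a criterion of d = .10, and missing studies averaging d = 0) are illustrative choices only, not the values Bolier et al. (2013) actually used in their computation.

```python
def orwin_failsafe_n(k, d_observed, d_criterion, d_missing=0.0):
    """Orwin's (1983) fail-safe number: how many missing studies with mean effect
    d_missing would reduce the combined effect from d_observed down to the
    substantively trivial criterion d_criterion.

    Derivation: (k * d_observed + n * d_missing) / (k + n) = d_criterion
                => n = k * (d_observed - d_criterion) / (d_criterion - d_missing)
    """
    if d_criterion <= d_missing:
        raise ValueError("the criterion must exceed the assumed mean of the missing studies")
    return k * (d_observed - d_criterion) / (d_criterion - d_missing)

# Hypothetical inputs, for illustration only.
print(round(orwin_failsafe_n(39, 0.34, 0.10), 1))
```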
1.4.3 Effect Size Calculation

The critical review of the Sin and Lyubomirsky (2009) and Bolier et al. (2013) meta-analyses raises several issues regarding effect size calculations for individual primary studies. First, Sin and Lyubomirsky (2009) reported only averaged unweighted rs as effect size estimates for well-being and depression. However, these estimates give the same weight to all studies, regardless of sample size, and are considered inappropriate (Borenstein et al., 2009).

Second, the previous meta-analyses did not describe in sufficient detail how they calculated effect sizes for each primary study. For example, Sin and Lyubomirsky (2009) stated that effect sizes were "computed from Cohen's d, F, t, p, or descriptive statistics" (p. 469). Bolier et al. (2013) stated that they calculated Cohen's d from the post-intervention means and standard deviations and, in some instances, "on the basis of pre- post-change score", without giving any further details. This lack of clarity is especially important because the calculation of effect sizes differs depending on study design (e.g., whether the study is a between-subjects or within-subjects design; Morris, 2008). Thus, effect size calculations can produce different results depending on whether the study used a repeated measures design (Dunlap, Cortina, Vaslow, & Burke, 1996). In repeated measures designs, when effect sizes are calculated from test statistics such as Fs and ts using the usual formulae, the resulting effect sizes can be substantially inflated (Dunlap, Cortina, Vaslow, & Burke, 1996; Lakens, 2013).

Third, the examination of the Sin and Lyubomirsky (2009) and Bolier et al. (2013) meta-analyses showed that some of the articles they included overlapped. However, the correlations between the effect sizes extracted by Sin and Lyubomirsky (2009) and Bolier et al. (2013) were low, suggesting that the effect sizes were determined differently in the two meta-analyses.

1.5 The Current Study

At present, there is a growing body of literature focused on delineating what the facets of well-being are and on methods by which to increase overall well-being. As stated previously, Sin and Lyubomirsky (2009) and Bolier et al. (2013) are the two most commonly cited meta-analyses investigating the effectiveness of PPIs at increasing well-being and decreasing depressive symptoms. These studies have been cited 849 and 213 times, respectively, in Google Scholar as of May 2016. These meta-analyses provided a foundation for examining the effectiveness of PPIs. However, due to a number of critical limitations indicated above, the aim of the current study was to replicate these two previous meta-analyses, as well as any other published meta-analyses that examined the effectiveness of PPIs at increasing global well-being and decreasing depressive symptoms. The first objective of the current study was to reanalyze the reported data provided by each of the meta-analyses while taking into account small sample size bias, and to compare how similar or dissimilar the findings are to those of the relevant meta-analysis. The second objective was to replicate each meta-analysis starting from extracting relevant effect sizes directly from the primary studies rather than relying on the data published in the previous meta-analyses. The third objective was to conduct a series of new meta-analyses using effect sizes extracted from all studies included in the previous meta-analyses to determine the effect of PPIs on well-being, depression, and specific measures of well-being (e.g., the SWLS). The fourth objective was to examine, if possible, whether the effects of PPIs are moderated by variables such as type of therapy (individual, group, self) and therapy setting (clinic, home, online).
In conducting these meta-analyses, the data were analyzed using weighted random effects models while taking into account small sample size bias using selected methods discussed above.

2 Chapter 2: Method

2.1 Search for Previous Meta-Analyses of PPI

To locate relevant meta-analyses, in addition to those of Sin and Lyubomirsky (2009) and Bolier et al. (2013), PubMed, PsycARTICLES, and MEDLINE were searched using full-text search and the following combination of terms: ('intervention' OR 'therapy' OR 'treatment') AND ("positive psychology"). The retrieved abstracts were examined for articles that may have included a meta-analysis. Next, the articles with possible meta-analyses were hand searched for relevant meta-analyses.

To be included in the current 'to-be-replicated meta-analyses', a meta-analysis must have met the following criteria. First, the meta-analysis must have separately examined the effectiveness of an intervention at increasing well-being and/or decreasing depression symptoms in children, adolescents, or adults. Second, the meta-analysis must have identified each primary study included in the meta-analysis and provided relevant effect sizes for each of them. Third, the measures of well-being used in the studies included in the meta-analysis needed to be primarily global rather than narrow and specific (i.e., the measures could not have exclusively focused on one very specific aspect of well-being). Accordingly, the global measures of well-being include measures of SWB, life satisfaction, happiness, and positive affect. The narrow and specific measures of well-being include measures of gratitude, hope, and forgiveness. Fourth, the studies included in the meta-analysis needed to use measures of depression rather than other aspects of ill-being, for example, anxiety or stress. Fifth, the meta-analyses needed to be written in English. This search yielded three meta-analyses: Bolier et al. (2013), Sin and Lyubomirsky (2009), and Weis and Speridakos (2011). Although Davis et al.'s (2016) meta-analysis also examined effects of PPIs on well-being and depression, it was not included because these authors combined measures of well-being and depression into a single effect size for many of their primary studies.

Sin and Lyubomirsky (2009) referenced 49 primary studies for well-being and 25 primary studies for depression. Bolier et al. (2013) referenced 28 primary studies for SWB, 20 primary studies for PWB, and 14 primary studies for depression. Weis and Speridakos (2011) referenced 10 primary studies for life satisfaction.

2.2 Effect Size Calculations

Primary studies of the effectiveness of interventions on well-being and/or depression symptoms used a variety of study designs, including repeated measures pre-post designs and between-subjects post-only designs. Although it is relatively straightforward to calculate effect sizes (i.e., rs or Cohen's ds) for between-subjects post-only designs using means, standard deviations, Fs, ts, or ps, it is much more challenging to calculate effect sizes for repeated measures pre-post designs. Primary studies using repeated measures pre-post designs rarely report sufficient statistical detail (such as correlations between pre and post scores), and thus it is often necessary to impute estimated pre-post correlations using data obtained from other studies. Critically, it is not appropriate to use Fs, ts, or ps to calculate effect sizes using formulae designed for between-subjects designs, that is, formulae that do not take into account pre-post correlations. Accordingly, the initial approach was to calculate effect sizes for pre-post repeated measures designs using a formula recommended by Morris (2008), specifically dppc2, using means, standard deviations, and, whenever necessary, imputed pre-post correlations, and, in addition, to calculate effect sizes using only post means and standard deviations, effectively treating these repeated measures pre-post designs as between-subjects post-only designs. However, because primary studies did not report pre-post correlations for outcome measures, it was not possible to calculate dppc2 without imputing such correlations from elsewhere for each study.
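For reference, the Morris (2008) index just mentioned standardizes the difference between the treatment and control groups' pre-to-post changes by the pooled pretest standard deviation; in the notation used here, M denotes a group mean, SD_pre the pooled pretest standard deviation, and n_T and n_C the group sample sizes:

    d_{ppc2} = c_p \, \frac{(\bar{M}_{post,T} - \bar{M}_{pre,T}) - (\bar{M}_{post,C} - \bar{M}_{pre,C})}{SD_{pre}},
    \qquad c_p = 1 - \frac{3}{4(n_T + n_C - 2) - 1}.

The point estimate requires only means and pretest standard deviations, but its sampling variance depends on the pre-post correlation, which is why the unreported correlations would have had to be imputed before dppc2 could be weighted in a meta-analysis. The same correlation underlies the inflation noted in Section 1.4.3: for a paired design the standard conversion is d = t_C sqrt(2(1 - r)/n), so applying the independent-groups conversion d = t sqrt(2/n) to a correlated t statistic overstates d by a factor of 1/sqrt(1 - r) (Dunlap, Cortina, Vaslow, & Burke, 1996).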
Some primary studies used multiple outcome measures. To ensure that each study contributed only one effect size to each meta-analysis, effect sizes were first calculated for each outcome measure and then aggregated to yield a single effect size, taking into account the correlations among the within-study outcomes using methods described in Schmidt and Hunter (2014) and imputing the recommended default correlation of r = .50 between within-study effects (Wampold et al., 1997). The aggregation of within-study outcomes was done using the R package MAc (Re & Hoyt, 2012).
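The logic of this aggregation can be illustrated with a small hand-rolled R function implementing the composite approach described by Borenstein et al. (2009): the composite is a simple mean of the outcome-level effects, with a variance that accounts for their assumed intercorrelation. This is a sketch of the logic only, not the MAc call actually used, and the three effect sizes and variances in the example are hypothetical.

    # Composite effect size for m correlated outcomes within one study,
    # assuming a common correlation r among the outcomes
    aggregate_outcomes <- function(es, var, r = 0.50) {
      m <- length(es)
      comp_es  <- mean(es)                                      # composite: mean of outcome-level effects
      off_diag <- r * (sum(sqrt(outer(var, var))) - sum(var))   # covariance terms for i != j
      comp_var <- (sum(var) + off_diag) / m^2                   # variance of a mean of correlated estimates
      c(es = comp_es, var = comp_var)
    }
    # Hypothetical study reporting three well-being outcomes (effect sizes and their variances)
    aggregate_outcomes(es = c(0.30, 0.22, 0.41), var = c(0.04, 0.05, 0.04))

With r = .50 the composite variance is larger than it would be if the outcomes were treated as independent, which is the point of the adjustment.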
Similarly, some primary studies used multiple interventions. Moreover, only some of these interventions were designed within the positive psychology framework to improve well-being and/or decrease depression symptoms. Thus, effect sizes were calculated for each intervention designed to improve well-being and/or decrease depression symptoms within the positive psychology framework, and the resulting effect sizes were aggregated to yield a single effect size for each study. For example, Emmons and McCullough (2003) presented three experimental conditions: (a) participants wrote out a list of things they were grateful for in their life, (b) participants wrote out a list of hassles they encountered that day, and (c) participants wrote out events that happened that week that impacted their life. In this case, the first condition (gratitude listing) was classified as the intervention group and the last condition (event listing) as the control group. As another example, Lyubomirsky, Dickerhoof, Boehm, and Sheldon (2011) presented three experimental conditions: (a) expressing optimism, (b) expressing gratitude, and (c) listing activities from the previous week. In this case, the first two conditions (expressing optimism and gratitude) were classified as the intervention groups and the third condition was classified as the control group. Subsequently, the effect sizes obtained for the two interventions were aggregated into a single effect size for that particular study using the methods recommended by Schmidt and Hunter (2014), as described above.

Finally, some primary studies used multiple control or comparison groups, ranging from interventions that likely decreased well-being (e.g., asking participants to reflect on negative experiences), to neutral controls, to interventions that increased well-being. In these cases, the most neutral control was chosen when calculating effect sizes. However, in some cases the control group was not clearly identified. For example, Low et al. (2006) included three groups of female patients with breast cancer, who were asked to write about one of three possible options: (a) positive thoughts about their breast cancer experience, (b) deepest thoughts and feelings about their experience with breast cancer, and (c) facts about breast cancer and its treatment. The first condition (positive thoughts) was classified as the intervention, which fits within the positive psychology framework, and the last condition (facts about breast cancer and its treatment) was used as the control. Furthermore, this control group was classified as relevant to the PPI ('neutralwr', shown in Table 1). That is, the subject of the writing task these participants completed was relevant or similar to the PPI condition. Therefore, writing facts about breast cancer may create emotions and cognitions (both negative and positive) that may influence how participants respond on questionnaires and thus may defeat the required purpose or nature of a true control condition.

2.3 Coding of Primary Studies

Table 1 describes the characteristics coded for each primary study. Variables that described study design and method included study design (i.e., pre-post, post only, etc.), sample sizes of each condition, means, standard deviations or standard errors, test statistics (including p values), effect sizes, internal consistency of outcome measures, outcome measure test-retest correlations for each time point, duration between pre, post, and follow-up, random assignment of each individual participant, and type of control group.

Variables that described samples included sample origin (where the sample was originally recruited from), how the sample was recruited, age of participants (overall and per condition), a description of the sample (e.g., undergraduate students, participants with depression, participants with cancer, or children from a middle school class), whether the sample had a clinical status (psychological, physical, chronic, or neurological), whether the sample had depressive symptoms, whether the sample was using medication for their depression, and whether the sample was involved with any psychotherapy throughout the duration of the intervention.

Variables that described interventions included the name of the intervention, whether the intervention was solely based on positives (e.g., best possible selves) or on a mix of positive and more traditional aspects of interventions (e.g., discussing good and bad memories), whether the format of the intervention was a group or individual (one-on-one) setting, who administered the intervention, how many intervention sessions were used, the length of each session, the total duration of the intervention, and the location of the intervention setting (e.g., participants' home, laboratory setting, clinic, or mixed). When an article presented a variety of session lengths (e.g., the first two sessions were 45 minutes in length and the subsequent five sessions were 30 minutes in length), an average session length was calculated.
Variables related to the quality and comprehensiveness of the reported details of each study included whether the study reported the necessary information (e.g., means, standard deviations or standard errors, sample sizes, pre-post correlations, inferential test values, and p-values).

2.4 Calculation of Effect Sizes, Missing Data Imputation, and Other Irregularities

Effect sizes for primary study outcomes were calculated from the available data in the following order of preference: (1) the post-intervention means and standard deviations, (2) the post-intervention ANOVA F-values, (3) the post-intervention Cohen's ds, (4) the post-intervention p-values, and (5) the pre-post difference score means and standard deviations, taking the difference between the intervention and control effect sizes.

A number of primary studies included in the previous meta-analyses did not report sufficient data to calculate effect sizes. In the previous meta-analyses, the effect sizes for these studies were imputed to be zero (e.g., Goldstein, 2007; Lyubomirsky et al., 2005, Studies 1 and 2; Sheldon et al., 2002). In the current replication analyses, such studies were excluded unless missing data could be imputed from other relevant sources. For example, if standard deviations for an outcome measure were missing in one study/experiment but were reported elsewhere (e.g., for another study/experiment within the same article), the missing standard deviations were imputed from the available ones to allow calculation of effect sizes (e.g., Pretorious, 2008).

A number of primary studies reported only the overall sample size and did not report the sample size for each control and intervention group. In such cases, the sample sizes for control and intervention groups were estimated by dividing the overall sample size by the number of control and intervention groups. Lastly, Shapira and Mongrain (2010), Sergeant and Mongrain (2011), Mongrain and Anselmo-Matthews (2012), and Mongrain, Chin, and Shapira (2011) are four articles that report on four seemingly different studies but actually report on different conditions/interventions of the same study. Accordingly, these four articles were treated as a single study.

2.5 Statistical Analysis

Once all effect sizes were calculated, they were pooled to obtain a weighted effect size of PPIs using a random effects model. A random effects model was chosen because the true PPI effects are unlikely to be the same and are likely to vary across interventions, participants, and designs (Borenstein et al., 2009; Liberati et al., 2009). A fixed effect meta-analysis assumes that all primary study effects estimate one common underlying true effect size. In contrast, a random effects meta-analysis assumes that primary study effects may estimate different underlying true effect sizes (e.g., a true effect size may be different for younger adults than for older adults, as well as for shorter or longer duration interventions).
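To illustrate the first preference listed in Section 2.4 together with the pooling just described, the following R sketch converts hypothetical post-intervention means and standard deviations to Cohen's d, transforms d to r, and fits a random effects model with metafor (one of the R packages used for the analyses). All numbers are invented, and the d-to-r conversion shown is the standard one given by Borenstein et al. (2009); the sketch is not a record of the analyses reported below.

    library(metafor)
    # Hypothetical post-intervention summary statistics for several primary studies
    dat <- data.frame(
      m_t = c(3.9, 3.6, 3.8, 3.5), sd_t = c(0.9, 1.1, 1.0, 0.8), n_t = c(25, 40, 80, 150),
      m_c = c(3.4, 3.3, 3.7, 3.4), sd_c = c(1.0, 1.0, 1.1, 0.9), n_c = c(25, 40, 80, 150))
    # Preference (1): Cohen's d from post means and the pooled standard deviation
    sd_pool <- with(dat, sqrt(((n_t - 1) * sd_t^2 + (n_c - 1) * sd_c^2) / (n_t + n_c - 2)))
    d <- with(dat, (m_t - m_c) / sd_pool)
    # Convert d to r; 'a' corrects for unequal group sizes (a = 4 when n_t = n_c)
    a <- with(dat, (n_t + n_c)^2 / (n_t * n_c))
    r <- d / sqrt(d^2 + a)
    # Fisher-z transform the rs and pool them with a random effects model
    es  <- escalc(measure = "ZCOR", ri = r, ni = dat$n_t + dat$n_c)
    res <- rma(yi, vi, data = es, method = "REML")   # random effects model
    predict(res, transf = transf.ztor)               # pooled estimate back-transformed to r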
Heterogeneity, that is, variation or inconsistency among the effect sizes, is expected to arise due to chance as well as due to the array of interventions and samples used. Considerable heterogeneity indicates substantial differences between studies. To assess this, two common heterogeneity statistics were calculated: Cochran's Q (Cochran, 1954) and I² (Higgins & Thompson, 2002). The Q statistic employs a chi-square distribution with k (number of studies) – 1 degrees of freedom; it only indicates whether or not heterogeneity exists, it does not indicate how much heterogeneity exists, and it is dependent on sample size. In contrast, the I² statistic provides the percentage of total variability among the effect sizes that is attributable to between-study differences, where a result of I² = 0 means that the variability found among the estimated effect sizes is due solely to sampling error within studies (Huedo-Medina, Sánchez-Meca, Marín-Martínez, & Botella, 2006).

Small study effects were assessed by first examining scatter plots, forest plots, and funnel plots. Several methods were used to estimate effect sizes while taking into account small study effects. First, the Trim-and-Fill procedure (see Introduction) was used (Duval & Tweedie, 2000). Second, a cumulative meta-analysis was used to determine how much the addition of small studies would change the estimated effect size (see Introduction for more detail). Third, the effect sizes were estimated based on the top 10% (TOP10) of the most precise studies (Stanley & Doucouliagos, 2014). Stanley and Doucouliagos (2014) demonstrated that the TOP10, despite its simplicity, performs well in estimating effect sizes in the presence of small sample size bias. Finally, the effect sizes were estimated using limit meta-analysis (Rücker et al., 2011), which is the most sophisticated of the methods developed for estimating effect sizes in the presence of small sample size effects. Only the Trim-and-Fill results and the limit meta-analysis results are reported in this work. All analyses were conducted using R (R Core Team, 2015), including the packages compute.es (Re, 2013), MAc (Re & Hoyt, 2012), meta (Schwarzer, 2015), metafor (Viechtbauer, 2010), and metasens (Schwarzer, Carpenter, & Rücker, 2014).

Outliers were identified as effect sizes that were at least 1.5 times the interquartile range above the upper quartile or below the lower quartile of the distribution of effect sizes. When outliers were identified, a meta-analysis was re-run after removal of the outliers to assess the sensitivity of the findings to the presence of the outliers.

3 Chapter 3: Results

3.1 Sin and Lyubomirsky (2009) Meta-Analysis

Sin and Lyubomirsky (2009) reported only unweighted mean rs for the effects of PPIs on well-being and on depression. They provided the summary of their meta-analyses in their Table 4. They reported an unweighted mean r = .29 for well-being and r = .31 for depression.

3.1.1 Well-Being

3.1.1.1 Reanalysis of Reported Data

The reanalysis used data reported by Sin and Lyubomirsky (2009) in their Table 1. Figure 6 shows the forest plot of effect sizes (rs) as reported by Sin and Lyubomirsky, including the total sample size for each study in the column "Total". The forest plot indicates that study size and effect size were inversely related. A random effects model estimated an effect size of r = .24 [95% CI = (0.18, 0.30)] with substantial heterogeneity as measured by I² = 71.9%.

Figure 7, top left panel, shows the distribution of study sizes. Many studies employed small samples, and less than 30% of the studies included at least 100 participants. Figure 7, top right panel, is a scatterplot of effect sizes relative to study size. The scatterplot indicates that effect sizes were inversely related to sample size.
Figure 7, bottom left panel, presents the funnel plot, which shows substantial asymmetry, indicating small study size bias. The regression test of funnel plot symmetry confirmed that the plot was asymmetrical, t(47) = 4.46, p < .001. Accordingly, it was necessary to estimate the effect size in the presence of the small study size bias. The Trim-and-Fill method required that 16 studies be imputed, and the effect size with the imputed studies was r = .13 [95% CI = (0.06, 0.19)]. The limit meta-analysis (Figure 7, bottom right) estimated an effect size of r = .08 [95% CI = (0.00, 0.15)]. A test of small-study effects showed Q-Q'(1) = 50.83, p < .001; a test of residual heterogeneity indicated Q(47) = 120.24, p < .001. Thus, taking into account small study effects, the reanalysis resulted in a much smaller estimated effect size for well-being than that reported by Sin and Lyubomirsky (2009).

3.1.1.2 Complete Replication of Meta-Analysis

Table 2 reports well-being effect sizes determined as described above for each outcome measure and each intervention comparison. These effect sizes were then aggregated to yield a single effect size for each study, comparable to those reported in Sin and Lyubomirsky (2009), using the aggregation method described in the Method section. Figure 8 shows that the correlation between the effect sizes reported by Sin and Lyubomirsky (2009) and the effect sizes calculated through this replication was high, r = .78 [95% CI = (0.62, 0.88)].

Figure 9 shows the forest plot of the replication effect sizes and indicates that effect sizes and sample sizes are inversely related. A random effects model estimated an effect size of r = .23 [95% CI = (0.17, 0.30)] with moderate heterogeneity as measured by I² = 56.5%. Figure 10, top left panel, shows the distribution of study sizes. Many studies had small samples and only about 30% of the studies had over 100 participants. Figure 10, top right panel, shows the scatterplot of effect sizes by study size. The scatterplot illustrates that sample size and effect size are inversely related. Figure 10, bottom left panel, shows the funnel plot with substantial asymmetry, indicating small study size bias. The regression test of funnel plot symmetry confirmed that the plot was asymmetrical, t(38) = 3.19, p = .003. Accordingly, it was necessary to estimate the effect size in the presence of the small study size bias. The Trim-and-Fill method required that 12 studies be imputed, and the effect size with the imputed studies was r = .14 [95% CI = (0.07, 0.21)]. The limit meta-analysis (Figure 10, bottom right) estimated an effect size of r = .10 [95% CI = (-0.01, 0.20)]. A test of small-study effects showed Q-Q'(1) = 18.89, p < .001; a test of residual heterogeneity indicated that Q(38) = 70.68, p < .001. Thus, similar to the reanalysis of Sin and Lyubomirsky's (2009) own data, the replication resulted in a much smaller effect size estimate than that originally reported by Sin and Lyubomirsky.

3.1.2 Depression

3.1.2.1 Reanalysis of Reported Data

The reanalysis used data reported by Sin and Lyubomirsky (2009) in their Table 2. Figure 11 shows the forest plot of effect sizes. Again, the forest plot indicates that effect sizes were inversely related to sample size. A random effects model estimated an effect size of r = .25 [95% CI = (0.14, 0.34)] with substantial heterogeneity as measured by I² = 74%.

Figure 12, top left panel, shows the distribution of study sizes. Many studies employed small samples and only about 20% of the studies had over 100 participants.
Figure 12, top right panel, shows the scatterplot of effect sizes by study size. The scatterplot indicates that sample size and effect size were inversely related. Figure 12, bottom left panel, shows the funnel plot, which displays substantial asymmetry, indicating small study size bias. The regression test of funnel plot symmetry confirmed that the plot was asymmetrical, t(23) = 3.20, p = .004. Accordingly, it was necessary to estimate the effect size in the presence of the small study size bias. The Trim-and-Fill method required that 8 studies be imputed, and the effect size with the imputed studies was r = .10 [95% CI = (-0.02, 0.21)]. The limit meta-analysis (Figure 12, bottom right) estimated an effect size of r = .04 [95% CI = (-0.05, 0.13)]. A test of small-study effects showed Q-Q'(1) = 28.40, p < .001; a test of residual heterogeneity indicated Q(23) = 63.79, p < .001. Thus, similar to the reanalysis of the well-being effect sizes, taking into account small study effects, the reanalysis resulted in a much smaller, non-significant estimated effect size of PPIs on depression.

3.1.2.2 Complete Replication of Meta-Analysis

Table 3 reports depression effect sizes determined as described above for each outcome measure and each intervention comparison. These effect sizes were then aggregated to yield a single effect size for each study, comparable to those reported in Sin and Lyubomirsky (2009), using the aggregation method described in the Method section. Figure 13 shows a scatter plot of the effect sizes reported by Sin and Lyubomirsky (2009) and the effect sizes calculated through this replication. The correlation between the two sets of effect sizes was high, r = .78 [95% CI = (0.22, 0.91)].

Figure 14 shows the forest plot of the replication effect sizes. Again, the forest plot indicates an inverse relation between effect sizes and sample size. A random effects model estimated an effect size of r = .26 [95% CI = (0.14, 0.38)] with substantial heterogeneity as measured by I² = 70.1%. Figure 15, top left panel, shows the distribution of study sizes. Many studies had small samples and only about 20% of the studies had over 100 participants. Figure 15, top right panel, shows the scatterplot of effect sizes by study size. The scatterplot indicates that sample size and effect size are negatively correlated. Figure 15, bottom left panel, displays the funnel plot, which shows substantial asymmetry, indicating small study size bias. The regression test of funnel plot symmetry confirmed that the plot was asymmetrical, t(19) = 5.33, p < .001. Accordingly, it was necessary to estimate the effect size in the presence of the small study size bias. The Trim-and-Fill method required that 10 studies be imputed, and the effect size with the imputed studies was r = .06 [95% CI = (-0.08, 0.19)]. The limit meta-analysis (Figure 15, bottom right) estimated an effect size of r = .03 [95% CI = (-0.17, 0.11)]. A test of small-study effects showed Q-Q'(1) = 40.06, p < .001; a test of residual heterogeneity showed Q(19) = 26.82, p = .109. Thus, similar to the reanalysis of the depression effect sizes, taking into account small study effects, the replication resulted in a much smaller, non-significant estimated effect of PPIs on depression.

3.2 Bolier et al. (2013) Meta-Analysis

Bolier et al. (2013) reported effects of PPIs on SWB, PWB, and depression. They reported Cohen's d = .34 (r = .17) for SWB, Cohen's d = .20 (r = .10) for PWB, and Cohen's d = .23 (r = .11) for depression.
Although not reported in their abstract or discussion section, they also found small study size bias and, using the Trim-and-Fill method, they estimated the corrected Cohen's d to be .16 (r = .08) for PWB and .16 (r = .08) for depression.

3.2.1 Subjective Well-Being

3.2.1.1 Reanalysis of Reported Data

The reanalysis used data reported by Bolier et al. (2013) in their Table 2 and Figure 2. Figure 16 shows the forest plot of effect sizes reported by Bolier et al. (2013). The forest plot reveals no obvious relationship between effect sizes and study sizes. A random effects model estimated an effect size of r = .17 [95% CI = (0.11, 0.22)] with moderate heterogeneity as measured by I² = 47.1%.

Figure 17, top left panel, shows the distribution of study sizes. Less than 40% of the studies had over 100 participants. Figure 17, top right panel, shows the scatterplot of effect sizes as a function of study size. The scatterplot indicates that most of the effect sizes were positive and above zero, and reveals only minimal small study size effects. The bottom left panel of Figure 17 shows the funnel plot, which indicates modest asymmetry. Consistent with this observation, a regression test of funnel plot symmetry confirmed no statistically significant asymmetry, t(26) = 1.06, p = .299. Furthermore, the limit meta-analysis (shown in Figure 17, bottom right) estimated an effect size of r = .13 [95% CI = (0.02, 0.24)], comparable to a random effects model without any adjustments. A test of small-study effects showed Q-Q'(1) = 2.12, p = .145; a test of residual heterogeneity indicated Q(26) = 48.96, p = .004. The reanalysis of Bolier et al.'s (2013) SWB data confirmed their results.

3.2.1.2 Complete Replication of Meta-Analysis

Table 4 reports effect sizes determined as described above for each outcome measure and each intervention comparison. These effect sizes were then aggregated to yield a single effect size for each study, comparable to those reported in Bolier et al. (2013). Figure 18 is a scatter plot that displays the relationship between the effect sizes reported by Bolier et al. (2013) and the effect sizes calculated through this replication; the correlation was r = .85 [95% CI = (0.68, 0.94)]. Figure 19 shows the forest plot of effect sizes, with no obvious signs of small study effects. A random effects model estimated an effect size of r = .19 [95% CI = (0.11, 0.26)] with moderate heterogeneity as measured by I² = 63.8%.

Figure 20, top left panel, shows the distribution of study sizes. Many studies had small samples, and less than 40% of the studies had over 100 participants. Figure 20, top right panel, shows the scatterplot of effect sizes by study size and indicates only a small relationship between them. The bottom left panel of Figure 20 shows the funnel plot and indicates only a modest degree of asymmetry. Consistent with this observation, a regression test of funnel plot symmetry confirmed no statistically significant asymmetry, t(22) = 1.43, p = .166. Furthermore, the limit meta-analysis (shown in Figure 20, bottom right) estimated an effect size of r = .12 [95% CI = (-0.01, 0.25)]. A test of small-study effects showed Q-Q'(1) = 5.42, p = .020; a test of residual heterogeneity indicated Q(22) = 58.10, p < .001. These results are similar to those reported by Bolier et al. (2013) and to those obtained by the reanalysis of Bolier et al.'s data.

3.2.2 Psychological Well-Being

3.2.2.1 Reanalysis of Reported Data

The reanalysis used data reported by Bolier et al. (2013) in their Table 2 and Figure 3.
Figure 21 shows the forest plot of effect sizes and indicates that effect sizes and sample size are inversely related. A random effects model estimated an effect size of r = .09 [95% CI = (0.04, 0.14)] with heterogeneity, as measured by I², of 35.2%.

Figure 22, top panel, shows the distribution of study sizes. Many studies had small samples; 35% of them employed more than 100 participants. Figure 22, bottom panel, shows the scatterplot of effect sizes by study size and indicates that sample size and effect size were negatively correlated. Figure 22, bottom left, shows the funnel plot, which displays substantial asymmetry, indicating small study size bias. The regression test of funnel plot symmetry confirmed that the plot was asymmetrical, t(18) = 2.68, p = .015. Accordingly, it was necessary to estimate the effect size in the presence of the small study size bias. The Trim-and-Fill method required that 5 studies be imputed, and the effect size with the imputed studies was r = .06 [95% CI = (-0.001, 0.12)]. The limit meta-analysis (Figure 22, bottom right) estimated an effect size of r = .02 [95% CI = (-0.04, 0.08)]. A test of small-study effects showed Q-Q'(1) = 8.36, p = .004; a test of residual heterogeneity indicated Q(18) = 20.97, p = .281.

The analysis was recalculated after removing the outliers. A random effects model estimated an effect size of r = .06 [95% CI = (0.03, 0.10)] with no heterogeneity found, as measured by I² = 0%. The regression test of funnel plot symmetry revealed no significant asymmetry, t(17) = 2.13, p = .048. The limit meta-analysis estimated an effect size of r = .01 [95% CI = (-0.05, 0.08)]. A test of small-study effects showed Q-Q'(1) = 3.68, p = .06; a test of residual heterogeneity indicated Q(17) = 13.81, p = .681. Thus, the reanalysis of Bolier et al.'s (2013) PWB data revealed somewhat smaller effect sizes than reported by Bolier et al.

3.2.2.2 Complete Replication of Meta-Analysis

Table 5 reports effect sizes determined as described above for each outcome measure and each intervention comparison. These effect sizes were then aggregated to yield a single effect size for each study, comparable to those reported in Bolier et al. (2013). The scatterplot shown in Figure 23 shows the correlation between the effect sizes reported by Bolier et al. and the effect sizes calculated through this replication; the correlation was high, r = .88 [95% CI = (0.68, 0.96)]. Figure 24 shows the forest plot of replication effect sizes. Again, the forest plot indicates that effect sizes and sample size were inversely related. A random effects model estimated an effect size of r = .16 [95% CI = (0.08, 0.24)] with moderate heterogeneity as measured by I² = 41.6%.

Figure 25, top left panel, shows the distribution of study sizes. Many studies had small samples and only about 35% of the studies had over 100 participants. Figure 25, top right panel, shows the scatterplot of effect sizes by study size and indicates that sample size and effect size are negatively correlated. The bottom left panel of Figure 25 shows the funnel plot, which displays substantial asymmetry, indicating small study size bias. The regression test of funnel plot symmetry confirmed that the plot was asymmetrical, t(14) = 2.46, p = .028. Accordingly, it was necessary to estimate the effect size in the presence of the small study size bias. The Trim-and-Fill method required that 5 studies be imputed, and the effect size with the imputed studies was r = .10 [95% CI = (0.01, 0.18)].
The limit meta-analysis (Figure 25, bottom right) estimated an effect size of r = .03 [95% CI = (-0.08, 0.15)]. A test of small-study effects showed Q-Q'(1) = 7.73, p = .005; a test of residual heterogeneity indicated Q(14) = 17.97, p = .208.

3.2.3 Depression

3.2.3.1 Reanalysis of Reported Data

The reanalysis used data reported by Bolier et al. (2013) in their Table 2 and Figure 4. Figure 26 shows the forest plot of effect sizes. A random effects model estimated an effect size of r = .10 [95% CI = (0.03, 0.16)] with moderate heterogeneity as measured by I² = 51.4%.

Figure 27, top panel, shows the distribution of study sizes. Many studies had small samples and only 43% of the studies had over 100 participants. Figure 27, bottom panel, shows the scatterplot of effect sizes by study size and depicts a negative correlation between sample size and effect size. Figure 27 also shows the funnel plot, which displays substantial asymmetry, indicating small study size bias. The regression test of funnel plot symmetry confirmed that the plot was asymmetrical, t(12) = 2.71, p = .019. Accordingly, it was necessary to estimate the effect size in the presence of the small study size bias. The Trim-and-Fill method required that 5 studies be imputed, and the effect size with the imputed studies was r = .07 [95% CI = (-0.01, 0.14)]. The limit meta-analysis (shown in Figure 27) estimated an effect size of r = .02 [95% CI = (-0.04, 0.07)]. A test of small-study effects showed Q-Q'(1) = 10.14, p = .002; a test of residual heterogeneity indicated Q(12) = 16.60, p = .165.

The analysis was recalculated after removing the outliers. A random effects model estimated an effect size of r = .07 [95% CI = (0.02, 0.12)] with some heterogeneity as measured by I² = 27.7%. The regression test of funnel plot symmetry revealed no significant funnel plot asymmetry, t(10) = 1.55, p = .152. The limit meta-analysis estimated an effect size of r = .03 [95% CI = (-0.03, 0.09)]. A test of small-study effects showed Q-Q'(1) = 2.95, p = .086; a test of residual heterogeneity indicated Q(10) = 12.27, p = .268. Thus, the reanalysis of Bolier et al.'s data revealed a smaller, non-significant effect for depression; however, this finding was sensitive to the removal of outliers.

3.2.3.2 Complete Replication of Meta-Analysis

Table 6 reports effect sizes determined as described above for each outcome measure and each intervention comparison. These effect sizes were then aggregated to yield a single effect size for each study, comparable to those reported in Bolier et al. (2013). Figure 28 displays the correlation between the effect sizes reported by Bolier et al. (2013) and the effect sizes calculated through this replication; the correlation was relatively high, r = .81 [95% CI = (0.48, 0.94)]. Figure 29 shows the forest plot of effect sizes and displays no apparent small study size effects. A random effects model estimated an effect size of r = .14 [95% CI = (0.07, 0.22)] with moderate heterogeneity as measured by I² = 29.4%.

Figure 30, top left panel, shows the distribution of study sizes. Many studies had small samples, and less than 45% of the studies had over 100 participants. Figure 30, top right panel, shows the scatterplot of effect sizes by study size and depicts only minimal small study size effects. The funnel plot in the bottom left panel of Figure 30 shows minor asymmetry. Consistent with this observation, a regression test of funnel plot symmetry confirmed no statistically significant asymmetry, t(11) = 0.51, p = .623.
The Trim-and-Fill method resulted in the imputation of 2 studies, and the effect size with the imputed studies was r = .12 [95% CI = (0.04, 0.21)]. The limit meta-analysis (bottom right panel of Figure 30) estimated an effect size of r = .09 [95% CI = (-0.01, 0.18)]. A test of small-study effects showed Q-Q'(1) = 0.39, p = .534; a test of residual heterogeneity indicated Q(11) = 16.62, p = .120.

The effect size estimates were recalculated after removal of outliers. A random effects model estimated an effect size of r = .17 [95% CI = (0.12, 0.23)] with no heterogeneity as measured by I² = 0%. The limit meta-analysis estimated an effect size of r = .14 [95% CI = (0.04, 0.23)]. A test of small-study effects showed Q-Q'(1) = 0.95, p = .330; a test of residual heterogeneity indicated Q(7) = 0.82, p = .997. The replication analyses indicated somewhat higher effects for depression than those reported by Bolier et al. (2013).

3.3 Weis and Speridakos (2011) Meta-Analysis

Weis and Speridakos (2011) reported mean Cohen's ds for the effects of hope-enhancing PPIs on life satisfaction or SWB from only 10 studies. They reported a random effect weighted mean d = .16 (r = .08). Because of the very small number of studies, the interpretation of these results, as well as of the reanalyses and replications, including funnel plots, tests of funnel plot asymmetry, Trim-and-Fill analyses, and limit meta-analyses, is limited.

3.3.1 Life Satisfaction

3.3.1.1 Reanalysis of Reported Data

The reanalysis used data reported by Weis and Speridakos (2011) in their Table 1 for life satisfaction. Figure 31 shows the forest plot of effect sizes. A random effects model estimated an effect size of r = .07 [95% CI = (-0.01, 0.16)] with no heterogeneity as measured by I² = 0%.

Figure 32, top left panel, shows the distribution of study sizes. Only one out of 10 studies had more than 100 participants. The top right panel shows the scatterplot of effect sizes and study sizes. The bottom left panel shows the funnel plot and the bottom right panel shows the results of the limit meta-analysis. A regression test of funnel plot symmetry confirmed no statistically significant asymmetry, t(8) = 1.59, p = .151. The Trim-and-Fill method resulted in the imputation of 2 studies, and the effect size with the imputed studies was r = .08 [95% CI = (0.002, 0.16)]. The limit meta-analysis (bottom right panel of Figure 32) estimated an effect size of r = .17 [95% CI = (-0.07, 0.39)]. A test of small-study effects showed Q-Q'(1) = 0.71, p = .400; a test of residual heterogeneity indicated Q(8) = 2.25, p = .973.

3.3.1.2 Complete Replication of Meta-Analysis

Table 7 reports effect sizes determined as described above for each outcome measure and each intervention comparison. These effect sizes were then aggregated to yield a single effect size for each study, comparable to those reported in Weis and Speridakos (2011). Figure 33 shows the correlation between the effect sizes reported by Weis and Speridakos and the effect sizes calculated through this replication; the correlation was high, r = .87 [95% CI = (0.42, 0.98)].

Figure 34 shows the forest plot of effect sizes. A random effects model estimated an effect size of r = .13 [95% CI = (0.02, 0.23)] with no heterogeneity as measured by I² = 0%. Figure 35 shows the distribution of sample sizes (top left), the scatterplot of sample size against effect size (top right), the funnel plot (bottom left), and the limit meta-analysis (bottom right). The limit meta-analysis estimated an effect size of r = .31 [95% CI = (-0.12, 0.64)].
A test of small-study effects showed Q-Q'(1) = 0.77, p = .380; a test of residual heterogeneity indicated Q(6) = 5.35, p = .500.

3.4 Meta-Analyses Using All Studies in the Previous Meta-Analyses

Several meta-analyses were conducted on the replication effect sizes, using effect sizes extracted from all studies included in the previous meta-analyses, to determine the effect of PPIs on well-being, depression, specific measures of well-being (e.g., the SWLS), and administration setting.

3.4.1 Well-being

SWB and PWB were combined for this overall analysis. Sin and Lyubomirsky (2009) included measures of both domains in their well-being analyses, and thus Bolier et al.'s (2013) SWB and PWB measures were also combined.

Figure 36 shows the forest plot of effect sizes. As is evident, study size and effect size were inversely related. A random effects model estimated an effect size of r = .19 [95% CI = (0.15, 0.24)] with moderate heterogeneity as measured by I² = 52.5%.

Figure 37, top left panel, shows the distribution of study sizes. Many studies had small samples, and less than 25% of the studies had over 100 participants. Figure 37, top right panel, shows the scatterplot of effect sizes by study size and indicates that sample size and effect size are negatively correlated. The bottom left panel of Figure 37 shows the funnel plot, which displays substantial asymmetry, indicating small study size bias. Consistent with this observation, a regression test of funnel plot symmetry confirmed significant asymmetry, t(60) = 3.11, p < .005. Accordingly, it was necessary to estimate the effect size in the presence of the small study size bias. The Trim-and-Fill method required that 14 studies be imputed, and the effect size with the imputed studies was r = .13 [95% CI = (0.08, 0.18)]. The limit meta-analysis (Figure 37, bottom right) estimated an effect size of r = .10 [95% CI = (0.03, 0.17)]. A test of small-study effects showed Q-Q'(1) = 17.83, p < .001; a test of residual heterogeneity indicated Q(60) = 110.64, p < .001. Thus, the replication meta-analysis of all well-being effect sizes revealed a small but statistically significant effect of r = .10.

3.4.2 Depression

Figure 38 shows the forest plot of replication effect sizes. Again, this plot indicates that sample size and effect size are inversely related. A random effects model estimated an effect size of r = .19 [95% CI = (0.10, 0.27)] with moderate heterogeneity as measured by I² = 66%.

Figure 39, top left panel, shows the distribution of study sizes. Many studies had small samples, and less than 25% of the studies had over 100 participants. Figure 39, top right panel, shows the scatterplot of effect sizes by study size and indicates that sample size and effect size were negatively correlated. The bottom left panel of Figure 39 shows the funnel plot, which displays substantial asymmetry, indicating small study size bias. Consistent with this observation, a regression test of funnel plot symmetry confirmed significant asymmetry, t(25) = 2.30, p = .030. Thus, it was necessary to estimate the effect size in the presence of the small study size bias. The Trim-and-Fill method required that 8 studies be imputed, and the effect size with the imputed studies was r = .09 [95% CI = (0.00, 0.19)]. The limit meta-analysis (Figure 39, bottom right) estimated an effect size of r = -.03 [95% CI = (-0.11, 0.05)].
A test of small-study effects showed Q-Q'(1) = 13.37, p < .001; a test of residual heterogeneity indicated Q(25) = 63.19, p < .001.

After removing the four identified outliers, a random effects model estimated an effect size of r = .11 [95% CI = (0.04, 0.18)] with moderate heterogeneity as measured by I² = 47.6%. A regression test of funnel plot symmetry revealed no significant asymmetry, t(21) = 0.94, p = .357. The limit meta-analysis estimated an effect size of r = .00 [95% CI = (-0.08, 0.08)]. A test of small-study effects showed Q-Q'(1) = 1.70, p = .193; a test of residual heterogeneity indicated Q(21) = 40.25, p = .007. Thus, the replication meta-analysis of depression effect sizes revealed no statistically significant effect.

3.4.3 Satisfaction With Life Scale (SWLS; Diener, Emmons, Larsen, & Griffin, 1985)

One of the objectives was to examine the effects of PPIs on specific measures of well-being and/or depression. However, because of the wide variety of measures used and the relatively small number of studies, only the SWLS was used in more than ten different studies, and thus only the SWLS data could be analyzed.

Figure 40 shows the forest plot of effect sizes. As is evident, an inverse relationship between sample size and effect size exists. A random effects model estimated an effect size of r = .10 [95% CI = (-0.01, 0.22)] with moderate heterogeneity as measured by I² = 57%.

Figure 41, top left panel, shows the distribution of study sizes. Many studies had small samples, and less than 20% of the studies had over 100 participants. Figure 41, top right panel, shows the scatterplot of effect sizes by study size and indicates no substantial small study effects. The funnel plot in the bottom left panel of Figure 41 shows no apparent asymmetry. Consistent with this observation, a regression test of funnel plot symmetry revealed no significant asymmetry, t(11) = 0.32, p = .759. The limit meta-analysis (Figure 41, bottom right) estimated an effect size of r = .09 [95% CI = (-0.12, 0.28)]. A test of small-study effects showed Q-Q'(1) = 0.25, p = .006; a test of residual heterogeneity indicated Q(11) = 27.67, p < .005. Although the effects of PPIs on the SWLS were not statistically significant, likely due to small effects and a small number of studies, the effect size was consistent with the effect of PPIs on well-being in general.

3.4.4 Effects of Intervention Administration Setting on Well-being

Well-being data were analyzed separately by type of intervention administration setting: controlled (e.g., clinics, workplace, labs) versus uncontrolled (home, online). For more controlled settings, a random effects model estimated an effect size of r = .23 [95% CI = (0.16, 0.31)] with moderate heterogeneity as measured by I² = 54.2%. A regression test of funnel plot symmetry confirmed no significant asymmetry, t(27) = 1.55, p = .133. The limit meta-analysis estimated an effect size of r = .13 [95% CI = (-0.00, 0.23)]. A test of small-study effects showed Q-Q'(1) = 4.99, p = .026; a test of residual heterogeneity indicated Q(27) = 56.17, p < .001. For uncontrolled settings, a random effects model estimated an effect size of r = .17 [95% CI = (0.09, 0.24)] with moderate heterogeneity as measured by I² = 58.9%. A regression test of funnel plot symmetry confirmed no significant asymmetry, t(13) = 1.35, p = .207. The limit meta-analysis estimated an effect size of r = .11 [95% CI = (-0.02, 0.24)].
A test of small-study effects showed Q-Q'(1) = 4.18, p = .041; a test of residual heterogeneity indicated Q(13) = 29.90, p < .005.

4 Chapter 4: Discussion

4.1 Summary of Main Findings and Comparison with Previous Literature

Developing and assessing the efficacy of PPIs has become an important focus of researchers since the inception of positive psychology, and consequently, conducting a meta-analysis was an essential step in moving the field forward. Sin and Lyubomirsky (2009) provided researchers, clinicians, and readers with findings suggesting that PPIs are effective at increasing well-being and decreasing depressive symptoms. Their article provided a general description of what comprises PPIs and of how many of these interventions had been developed and tested. This meta-analysis laid a critical foundation for further examination of the PPI literature. Accordingly, shortly thereafter, Bolier and colleagues (2013) also conducted a meta-analysis, but they were interested in assessing randomized controlled trials only and in the quality of the primary studies. They also found that PPIs are effective in increasing well-being and decreasing depression symptoms; however, the effects were much smaller than those of Sin and Lyubomirsky (2009). Bolier et al. (2013) attributed these differences to the selection criteria. In order to better understand the discrepancy between these two meta-analyses, the current study explored the earlier work with the central goal of advancing our understanding of positive psychology.

The present study had four primary objectives. First, it reanalyzed the effect sizes reported in previous meta-analyses of PPIs (Bolier et al., 2013; Sin & Lyubomirsky, 2009; Weis & Speridakos, 2011) using appropriate methods and accounting for small sample bias. Second, it replicated the previous meta-analyses by recalculating effect sizes directly from the primary studies. Third, it conducted a series of new meta-analyses using effect sizes extracted from all studies included in the previous meta-analyses to determine the effect of PPIs on well-being and depression. Fourth, when possible, it examined the potential moderating effects of variables such as type of therapy (individual, group, self) and therapy setting (clinic, home, online) on PPIs.

Reanalysis of the effect sizes reported by Sin and Lyubomirsky (2009) revealed smaller effect size estimates for both well-being (r = .08) and depression (r = .04) than the original authors reported (r = .29 and r = .31, respectively). There were two major reasons for the inflated estimates reported by Sin and Lyubomirsky. First, Sin and Lyubomirsky reported effect size estimates as simple unweighted averages of study-level effect sizes (i.e., they averaged rs across the studies included in their meta-analysis). This approach has limitations because it describes a particular sample of studies, resulting in limited generalizability, and because it gives equal weight to small and large studies (Borenstein et al., 2009). Second, Sin and Lyubomirsky noted that their effect sizes resulted in asymmetric funnel plots, but they used the Fail-safe N to conclude that small-study effects did not significantly inflate their findings. However, the Fail-safe N is no longer considered useful in assessing small-study effects (Borenstein et al., 2009).
The present study's reanalysis confirmed that the funnel plots were asymmetric for both well-being and depression, and the random effects limit meta-analysis estimates were much smaller (and not statistically significant for depression) due to small-study effects.

Replication of the Sin and Lyubomirsky (2009) meta-analyses revealed relatively high correlations between the effect sizes determined by the current study and those reported in the previous study, for both well-being and depression. Consistent with the similar effect sizes extracted from the primary studies, the replication analyses and estimated effect sizes for well-being and for depression were very similar to those obtained by the reanalyses of the effect sizes reported by Sin and Lyubomirsky. The replication analyses resulted in nearly the same findings as the reanalyses, despite the fact that several studies that did not report the data essential for calculating effect sizes were excluded from the replications.

Reanalysis of the effect sizes reported by Bolier et al. (2013) revealed the same estimated effect size for SWB (r = .17). However, the estimated effect sizes for PWB (r = .02) and for depression (r = .02) were smaller (and not statistically significant) than originally reported by Bolier et al. (2013) (r = .09 and r = .11, respectively). When outliers were removed, the estimated effect sizes were r = .01 for PWB and r = .07 for depression. The latter result is partially attributable to the test of funnel plot asymmetry no longer being statistically significant, in part due to the smaller number of effect sizes. However, the limit meta-analysis estimated the effect size for depression after the removal of outliers as r = .03.

Replication of the Bolier et al. (2013) meta-analyses revealed relatively high correlations between the effect sizes determined by the current study and those reported in their meta-analysis for SWB, PWB, and depression. Despite the removal of several original studies (due to insufficient data to calculate effect sizes), the replication analyses of SWB and PWB were very similar to those obtained by the reanalyses. The replication results for depression indicated larger estimated effect sizes of r(13) = .14 without removal of outliers and r(9) = .17 with removal of outliers. However, these results need to be viewed with caution as they are based on a small number of studies. Moreover, even though the small-study effects were not statistically significant, the scatterplots of effect sizes versus study sample sizes show that large studies resulted in substantially larger effects than small studies.
One potential explanation for this small difference is that one study was excluded from the replication because it did not report the data needed to calculate effect sizes (Weis and Speridakos imputed r = 0 as the effect size for this study). Another reason is that one study included in Weis and Speridakos was not available. Finally, variability in estimated effect sizes is expected given the small number of studies included.

In summary, the reanalyses and replications of the Sin and Lyubomirsky (2009), Bolier et al. (2013), and Weis and Speridakos (2011) meta-analyses indicate that there is a small effect of approximately r = .10 of PPIs on well-being, including life satisfaction. In contrast, the effect of PPIs on depression was nearly zero when based on the studies included in Sin and Lyubomirsky (2009), and highly variable and sensitive to outliers when based on the studies included in Bolier et al. (2013). Notably, Sin and Lyubomirsky (2009) included nearly twice as many studies as Bolier et al. (2013) in their meta-analysis of the effects of PPIs on depression.

The present study's reexamination of all the studies included in the three previous meta-analyses provided further insight into these findings. The first meta-analysis examined the effects of PPIs on well-being measures. When Sin and Lyubomirsky's (2009) studies of well-being, which included both SWB and PWB, and Bolier et al.'s (2013) studies of SWB and PWB were combined, there was a total of 62 studies. After accounting for small-sample bias, the overall effect was r = .10 with 95% CI = (.03, .17). The second meta-analysis examined the effects of PPIs on depression. After accounting for small-sample bias, an overall effect of r = .03 was calculated with a 95% CI = (-.11, .05). This finding was not very sensitive to the removal of four outliers; the estimated effect size after removal of the 4 outliers was r(23) = .11 with 95% CI = (.04, .18), and the small-study effect was no longer statistically significant, possibly due to the loss of statistical power. Consistent with this, the limit meta-analysis produced an estimated effect size of r(23) = .00 with 95% CI = (-.08, .08), which is nearly identical to the estimate with the outliers.

Evidence of heterogeneity in a meta-analysis is to be expected, given that the included studies are often diverse (i.e., differences in methodological designs, samples, interventions, intervention durations and dosages, outcome measures, etc.). What is important is the extent to which the degree of heterogeneity (inconsistency between studies) influences the interpretation of the findings of the meta-analysis (Higgins, Thompson, Deeks, & Altman, 2003). As can be seen from Table 10, heterogeneity is still evident within the data after accounting for small sample size bias. That is, there remains a moderate degree of inconsistency between studies that could be explained by other factors (moderators). Unfortunately, because of the lack of methodological detail in the primary studies, it was impossible to conduct meaningful moderator analyses.

The initial idea to analyze the effect of PPIs on specific measures of well-being and depression was thwarted by the small number of studies and the large variability in measures used. The most frequent measure of well-being (i.e., the SWLS) was used in only 13 studies and the most frequent measure of depression was used in only 5 studies.
Although the results of these measure-specific meta-analyses were consistent with the findings above, they are impossible to interpret due to the small number of studies and low statistical power.

Similarly, the initial goal of analyzing the effects of possible moderators was impossible to implement because of the small number of studies, substantial small-study effects (the largest moderator), and unclear descriptions of methods in the primary studies that made it impossible to determine the status of each potential moderator. Nevertheless, the meta-analyses of the effects on well-being of PPIs conducted in controlled settings (e.g., clinics, workplaces) vs. less controlled settings (e.g., home, online) revealed similar effect sizes: r(29) = .23 for controlled settings and r(15) = .17 for home and online settings. When adjusted for small-study effects, these estimates were reduced to r(29) = .13 and r(15) = .11 for controlled and for home and online settings, respectively. Thus, these data suggest that there is no appreciable difference between the effectiveness of PPIs conducted in controlled vs. home and online settings.

4.2 Implications

The current study has a number of implications. First, the effects of PPIs on increasing well-being are small, and their effects on decreasing depression are not statistically significant. As noted above, the major reason for the larger effects reported in previous meta-analyses was that these studies did not appropriately take into account prevalent small-study effects. Small-study effects are a frequent problem in meta-analyses in many fields, and a number of methods (e.g., cumulative meta-analysis, TOP10, limit meta-analysis) have been developed to estimate effect sizes in the presence of small-study effects. Unfortunately, these methods were not employed in the previous meta-analyses addressed by the current study. Given the presence of small-study effects, future meta-analyses of PPIs must take them into account using appropriate estimation methods.

Second, these findings are tentative because the previous meta-analyses did not include all available studies. To illustrate, Bolier et al.'s (2013) inclusion criteria are restrictive because, for example, they excluded (a) all relevant studies published prior to the coining of the term "Positive Psychology", (b) all studies of the effects of mindfulness and meditation on well-being, and (c) all studies that did not explicitly mention "positive psychology". As pointed out by Schueller et al. (2014), Bolier et al.'s inclusion criteria are too narrow and exclude numerous studies that use the same interventions and the same outcome measures. If a substantial number of relevant studies are not included, the findings based on only a small sample of relevant studies may not reflect the cumulative findings across the population of previous studies. In turn, a failure to conduct a comprehensive search for primary studies also reduces meta-analysts' ability to conduct meaningful moderator analyses (Schueller et al., 2014).

Third, the failure to include all available studies in the previous meta-analyses suggests the need for a comprehensive meta-analysis of the effects of PPIs on well-being, starting with a comprehensive search for relevant studies.
A preliminary search of PsycInfo for studies of PPIs, using only the most obvious search strategy (searching for all studies mentioning both "positive psychology" and at least one of the terms "intervention" or "therapy"), yielded over 200 relevant studies in March of 2016, more than tripling the number of studies included in the three previous meta-analyses.

Fourth, the review of the primary studies revealed persistent problems with the method and results sections, echoing concerns about the quality of primary studies raised by Bolier et al. (2013). In general, primary studies with pre-post designs did not report the pre-post correlations for outcome measures that are necessary to calculate the most appropriate effect sizes (Morris, 2008). Though some of the authors of the primary studies were contacted by email, they did not provide these correlations. As a result, the current study relied primarily on the post data only, following the approach adopted by Bolier et al. (2013). Accordingly, these findings suggest that researchers need to be more aware of the need to report all necessary statistical information to facilitate future meta-analyses. Although numerous guidelines have been provided for reporting the results of studies, such as JARS (APA Publications and Communications Board Working Group on Journal Article Reporting Standards, 2008), researchers appear to be slow to adopt them, and the present findings suggest the need to push for the adoption of such guidelines by researchers in the PPI field.

Fifth, it is evident from the diverse inclusion and exclusion criteria of the previous meta-analyses that there is no agreement as to what constitutes a PPI. Bolier et al. (2013) excluded interventions that others consider PPIs (e.g., mindfulness and meditation). Bolier et al. even speculated that different inclusion criteria were the reason for the discrepancies between their findings and those of Sin and Lyubomirsky (2009). The current reanalysis casts doubt on this explanation, as the findings were comparable when small-study effects were taken into account. However, the definition of a PPI is critical for determining which studies to include in future meta-analyses. Schueller et al. (2014) argued that including only studies that mention "positive psychology" would miss many 'positive intervention' studies. Similarly, Parks and Biswas-Diener (2013) acknowledged that it can be rather arduous to define interventions that are aimed at increasing the 'positives'. Clearly, this is one of the tasks that needs to be addressed in the near future.

4.3 Limitations

The limitations of the present study stem from the choices of the methods adopted. Most importantly, both the reanalyses and the replications rely on the selection of primary studies included in the previous meta-analyses. Because there are numerous studies that were not included in the previous meta-analyses, the findings may not be representative of the literature on PPIs.

The primary studies did not always comprehensively report details of their methods and results. Although more appropriate effect size indices exist for pre-post designs (Dunlap, Cortina, Vaslow, & Burke, 1996; Morris, 2008), they could not be used, and inferior, post-only effect size indices were used instead, because the primary studies did not report pre-post correlations for the outcome measures. A number of studies failed to report even the standard deviations and other statistics needed for effect size calculations.
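To illustrate why these unreported statistics matter, the sketch below uses hypothetical numbers (not computations actually carried out in this thesis) to show two standard pre-post effect size calculations: the Dunlap, Cortina, Vaslow, and Burke (1996) conversion of a paired t statistic to Cohen's d, and a Morris (2008)-style pretest-posttest-control point estimate. The first cannot be computed at all without the pre-post correlation, and the sampling variance needed to weight the second in a meta-analysis also depends on it.

    # Hypothetical values; variable names are assumptions made for illustration.
    # (1) Dunlap et al. (1996): d from a paired-samples t requires the pre-post
    #     correlation of the outcome measure, which primary studies rarely report.
    t_paired  <- 3.2    # paired t reported in a primary study (made up)
    n_pairs   <- 25     # number of participants
    r_prepost <- 0.6    # pre-post correlation (the missing ingredient)
    d_within  <- t_paired * sqrt(2 * (1 - r_prepost) / n_pairs)
    # (2) Morris (2008) pretest-posttest-control estimate (d_ppc): the pre-to-post
    #     change in the treatment group minus that in the control group, divided by
    #     the pooled pretest SD, with a Hedges-type small-sample correction.
    m_pre_t <- 20.1; m_post_t <- 24.6; sd_pre_t <- 5.2; n_t <- 30
    m_pre_c <- 19.8; m_post_c <- 20.4; sd_pre_c <- 5.0; n_c <- 28
    sd_pre  <- sqrt(((n_t - 1) * sd_pre_t^2 + (n_c - 1) * sd_pre_c^2) / (n_t + n_c - 2))
    cp      <- 1 - 3 / (4 * (n_t + n_c - 2) - 1)
    d_ppc   <- cp * ((m_post_t - m_pre_t) - (m_post_c - m_pre_c)) / sd_pre
    # The sampling variance of d_ppc (needed for meta-analytic weighting) also
    # depends on the pre-post correlation, so this index could not be used here.
    c(d_within = d_within, d_ppc = d_ppc)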
Similarly, the primary studies often failed to report features of their methods that would allow a meta-analyst to determine, for example, whether the intervention was delivered individually or in groups, or in a clinic/controlled setting versus at home or online. This, together with the small number of studies and the presence of widespread small-study effects, made it impossible to conduct meaningful moderator analyses.

As noted above, the previous meta-analyses did not adopt a common PPI definition. As a result, their inclusion and exclusion criteria were inconsistent, and different samples of PPI studies were included in each study.

In addition to the inconsistency between definitions of PPIs, there is also inconsistency in selecting the relevant outcome measures. The majority of studies included in the previous PPI meta-analyses used outcome measures such as life satisfaction, happiness, positive affect, and engaging relations. However, many other studies included measures of depression, anxiety, and stress. This raises the question of whether well-being and ill-being lie on a single continuum or are better conceptualized as correlated but separable, more orthogonal dimensions. Unfortunately, at this time, there is little consensus with respect to this issue. For example, as noted previously, Davis et al. (2016) adopted a combined measure of well-being and depression for about half of the studies in their meta-analysis, assuming that well-being and ill-being are merely at the opposite ends of a single continuum. Others, including Keyes (2005), argue that well-being and ill-being are poorly correlated, orthogonal constructs.

Chapter 5: Conclusion

In conclusion, the objectives of the current study were to reanalyze the data reported in previous meta-analyses examining the effectiveness of PPIs at increasing well-being and decreasing depression, to conduct a complete replication (extracting data from the original sources) of the previous meta-analyses, and to combine data across meta-analyses to calculate overall estimated effect sizes. The reanalysis of previously reported data showed that although the correlations between the current effect sizes and those of the previous meta-analyses were fairly high (suggesting that the same data were extracted), the effect sizes of the current study were lower and often nonsignificant. The major contributing factor to this discrepancy was that the present study accounted for the strong presence of small sample size bias. When small-study effects were taken into account, the findings from the current reanalyses and replication meta-analyses suggest that PPIs have a small effect on increasing well-being (r = .10 for 62 studies), but no significant effect on decreasing depressive symptoms (27 studies).

Although the effect of PPIs on well-being was shown to be small (r = .10), the magnitude of this effect is similar to that of the relationship between asbestos and laryngeal cancer (r = .10; Bushman & Anderson, 2001) and larger than the relationship between aspirin and heart attacks (r = .03; Rosenthal, 1995). Despite similarly 'small' effects, support for research aimed at increasing well-being, particularly mental well-being, is generally low and nowhere near the level directed at decreasing ill-being. The current findings indicate that implementing PPIs increases well-being.
That is, increased well-being has been linked to reduced illness and related mortality, greater career and workplace productivity, and more effective coping and resilience in times of stress and trepidation.

Research within the PPI literature requires more external support from funding and government agencies in order to further examine, empirically validate, and advance these interventions. A significant increase in external support would benefit and advance scientific knowledge in two ways: (1) primary studies would be better able to increase sample sizes, conduct more powerful studies, and run randomized controlled trials, and (2) the impact of highly accessible PPIs would have a boomerang effect, whereby positive outcomes would be advantageous in terms of physical health (Diener & Chan, 2011; Howell, Kern, & Lyubomirsky, 2007; Veenhoven, 2007), disease and illness prevention (Cohen, Doyle, Turner, Alper, & Skoner, 2003), and greater workplace productivity and satisfaction (Boehm & Lyubomirsky, 2009; Keyes & Grzywacz, 2005).

It is also important to contextualize the non-significant findings for the effects of PPIs on depressive symptoms. In addition to there potentially not being enough power to detect significant effects (the entire sample consisted of only 27 studies), perhaps PPIs are less effective when used as the primary treatment for depressive symptoms. That is, PPIs may not be helpful, or may even be contraindicated, as a first-line treatment for patients with depressive symptoms. It may be necessary to first address depressive symptoms through empirically validated interventions (e.g., cognitive behavioural therapy; Beck, 1964) to alleviate the symptoms, and then introduce PPIs as a supplemental treatment. This could be an important next step for future research examining the relationship between depressive symptoms and PPIs.

5.1 Future Directions

In addition to developing a consensus on the definition of what constitutes a positive intervention, it is necessary to empirically determine the extent to which well-being and ill-being measures assess the same or different constructs. In turn, this will enlighten the debate on whether well-being and ill-being are on opposite ends of the same spectrum or are better conceptualized as constructs on orthogonal dimensions. To date, however, no large comprehensive study has examined the extent to which various measures of well-being and ill-being correlate.

Second, given that the current meta-analyses found relatively small effect sizes for PPIs, future research should consider approaches that might increase the efficacy of newly developed interventions. For example, one suggestion is exposing participants to longer-term interventions. Howell, Passmore, and Holder (in press) suggested that an implicit-theory-of-well-being manipulation, which can be viewed as an intervention, might be more effective if delivered as a longer informational seminar rather than as a brief single session. Similarly, McMahan and Estes (2015) noted in their meta-analysis of studies of affective change following exposure to nature that the majority of experimental studies have employed only short-duration, single experiences with natural environments. Along with Capaldi, Passmore, Nisbet, Zelenski, and Dopko (2015), they supported the development of longer-term interventions, such as that utilized by Passmore and Howell (2014).
Not only may the duration of an intervention influence the overall effect, but so may its dosage (e.g., a one-time 30-minute task such as writing a gratitude letter versus three 10-minute tasks in one week). Additionally, rather than relying on a single type of intervention, developing and testing PPIs that are multi-modal and multi-construct (e.g., best possible self, gratitude letters, character strengths) may produce larger effects than single-element interventions (Lutes et al., 2016).

Lastly, a comprehensive meta-analysis of all relevant studies would be influential in advancing the field at large. Such a meta-analysis is likely to allow for meaningful moderator analyses answering questions such as: Is group administration more effective than individual administration? Are longer interventions more effective than shorter interventions? Are some types of interventions more effective than other types? Importantly, a comprehensive meta-analysis is likely to provide a more definitive determination of how effective PPIs are at increasing well-being.

References

Abbott, J.-A., Klein, B., Hamilton, C., & Rosenthal, A. J. (2009). The impact of online resilience training for sales managers on wellbeing and performance. E-Journal of Applied Psychology, 5, 89–95. http://doi.org/10.7790/ejap.v5i1.145
APA Publications and Communications Board Working Group on Journal Article Reporting Standards. (2008). Reporting standards for research in psychology: Why do we need them? What might they be? The American Psychologist, 63, 839–851. http://doi.org/10.1037/0003-066X.63.9.839
Beck, A. T. (1964). Thinking and depression. II. Theory and therapy. Archives of General Psychiatry, 10, 561–571.
Bedard, M., Felteau, M., Mazmanian, D., Fedyk, K., Klein, R., Richardson, J., … Minthorn-Biggs, M.-B. (2003). Pilot evaluation of a mindfulness-based intervention to improve quality of life among individuals who sustained traumatic brain injuries. Disability and Rehabilitation, 25, 722–731. http://doi.org/10.1080/0963828031000090489
Boehm, J., & Lyubomirsky, S. (2009). The promise of sustainable happiness. Psychology Faculty Books and Book Chapters. Retrieved from http://digitalcommons.chapman.edu/psychology_books/10
Boehm, J. K., Lyubomirsky, S., & Sheldon, K. M. (2011). A longitudinal experimental study comparing the effectiveness of happiness-enhancing strategies in Anglo Americans and Asian Americans. Cognition & Emotion, 25, 1263–1272. http://doi.org/10.1080/02699931.2010.541227
Bolier, L., Haverman, M., Westerhof, G. J., Riper, H., Smit, F., & Bohlmeijer, E. (2013). Positive psychology interventions: A meta-analysis of randomized controlled studies. BMC Public Health, 13, 119. http://doi.org/10.1186/1471-2458-13-119
Borenstein, M., Hedges, L. V., Higgins, J. P. T., & Rothstein, H. R. (2009). Introduction to meta-analysis. West Sussex, UK: Wiley.
Buchanan, C. L. (2007). Making hope happen for students receiving special education services (Order No. 3303999). Available from ProQuest Dissertations & Theses Global. (304860800). Retrieved from http://ezproxy.library.ubc.ca/login?url=http://search.proquest.com/docview/304860800?accountid=14656
Buchanan, K. E., & Bardi, A. (2010). Acts of kindness and acts of novelty affect life satisfaction. The Journal of Social Psychology, 150, 235–237. http://doi.org/10.1080/00224540903365554
Burton, C. M., & King, L. A. (2004). The health benefits of writing about intensely positive experiences. Journal of Research in Personality, 38, 150–163.
http://doi.org/10.1016/S0092-6566(03)00058-8Bushman, B. J., & Anderson, C. A. (2001). Media violence and the American public: Scientific facts versus media misinformation. American Psychologist, 56, 477–489. http://doi.org/10.1037/0003-066X.56.6-7.477Capaldi, C. A., Passmore, H.-A., Nisbet, E. K., Zelenski, J. M., & Dopko, R. L. (2015). Flourishing in nature: A review of the well-being benefits of connecting with nature and its application as a positive psychology intervention. International Journal of Wellbeing, 5, 1-16.Cheavens, J. S., Feldman, D. B., Gum, A., Michael, S. T., & Snyder, C. R. (2006). Hope therapy in a community sample: A pilot investigation. Social Indicators Research, 77, 61–78. http://doi.org/10.1007/s11205-005-5553-0Cochran, W. G. (1954). The combination of estimates from different experiments. Biometrics, 10, 101-129.42Cohen, S., Doyle, W. J., Turner, R. B., Alper, C. M., & Skoner, D. P. (2003). Emotional style and susceptibility to the common cold. Psychosomatic Medicine, 65, 652–657. http://doi.org/10.1097/01.PSY.0000077508.57784.DACook, E. A. (1998). Effects of reminiscence on life satisfaction of elderly female nursing home residents. Health Care for Women International, 19, 109–118. http://doi.org/10.1080/073993398246449Csikszentmihalyi, M. (1990). Flow: The psychology of optimal experience. New York,  NY: Harper Collins.Csikszentmihalyi, M. (2014). Flow and the foundations of positive psychology: The collected works of Mihaly Csikszentmihalyi. New York, NY: Springer.Curry, L. A., & Snyder, C. R. (2000). Hope takes the field: Mind matters in athletic performances. In Handbook of hope: Theory, measures, and applications (pp. 243–259). San Diego, CA: Academic Press.Davis, D. E., Choe, E., Meyers, J., Wade, N., Varjas, K., Gifford, A., … Worthington, E. L. (2016). Thankful for the little things: A meta-analysis of gratitude interventions. Journal of Counseling Psychology, 63, 20–31. http://doi.org/10.1037/Davis, M. C. (2004). Life review therapy as an intervention to manage depression and enhance life satisfaction in individuals with right hemisphere cerebral vascular accidents. Issues inMental Health Nursing, 25, 503–515. http://doi.org/10.1080/01612840490443455Della Porta, M. D., Sin, N. L., & Lyubomirsky, S. (2009).Searching for the placebo effect in happiness-enhancing interventions: An experimental longitudinal study with depressed participants. Paper presented at the Annual Meeting of the Society for Personality and Social Psychology, Tampa, FL.43Diener, E. (2000). Subjective well-being: The science of happiness and a proposal for a national index. American Psychologist, 55, 34–43.Diener, E., & Chan, M. Y. (2011). Happy people live longer: Subjective well-being contributes tohealth and longevity. Applied Psychology: Health and Well-Being, 3, 1–43. http://doi.org/10.1111/j.1758-0854.2010.01045.xDiener, E., Emmons, R. A., Larsen, R. J., & Griffin, S. (1985). The Satisfaction with Life Scale. Journal of Personality Assessment, 49, 71–75. http://doi.org/10.1207/s15327752jpa4901_13Duggleby, W. D., Degner, L., Williams, A., Wright, K., Cooper, D., Popkin, D., & Holtslander, L.(2007). Living with hope: Initial evaluation of a psychosocial hope intervention for older palliative home care patients. Journal of Pain and Symptom Management, 33, 247–257. http://doi.org/10.1016/j.jpainsymman.2006.09.013Dunlap, W. P., Cortina, J. M., Vaslow, J. B., & Burke, M. J. (1996). Meta-analysis of experimentswith matched groups or repeated measures designs. 
Psychological Methods, 1, 170–177. http://doi.org/10.1037/1082-Duval, S., & Tweedie, R. (2000). Trim and fill: A simple funnel-plot-based method of testing andadjusting for publication bias in meta-analysis. Biometrics, 56, 455–463.Emmons, R. A., & McCullough, M. E. (2003). Counting blessings versus burdens: An experimental investigation of gratitude and subjective well-being in daily life. Journal of Personality and Social Psychology, 84, 377–389. http://doi.org/10.1037/0022-3514.84.2.377Fava, G. A., Rafanelli, C., Cazzaro, M., Conti, S., & Grandi, S. (1998). Well-being therapy. A novel psychotherapeutic approach for residual symptoms of affective disorders. Psychological Medicine, 28, 475–480.44Fava, G. A., & Ruini, C. (2003). Development and characteristics of a well-being enhancing psychotherapeutic strategy: well-being therapy. Journal of Behavior Therapy and Experimental Psychiatry, 34, 45–63.Fava, G. A., Ruini, C., Rafanelli, C., Finos, L., Salmaso, L., Mangelli, L., & Sirigatti, S. (2005). Well-being therapy of generalized anxiety disorder. Psychotherapy and Psychosomatics, 74, 26–30. http://doi.org/10.1159/000082023Feldman, D. B., & Dreher, D. E. (2012). Can hope be changed in 90 Minutes? Testing the efficacy of a single-session goal-pursuit intervention for college students. Journal of Happiness Studies, 13, 745–759. http://doi.org/10.1007/s10902-011-9292-4Fordyce, M. W. (1977). Development of a program to increase personal happiness. Journal of Counseling Psychology, 24, 511–521. http://doi.org/10.1037/0022-0167.24.6.511Fordyce, M. W. (1983). A program to increase happiness: Further studies. Journal of Counseling Psychology, 30, 483–498. http://doi.org/10.1037/0022-0167.30.4.483Fredrickson, B. L. (2001). The role of positive emotions in positive psychology. The American Psychologist, 56, 218–226.Fredrickson, B. L., Cohn, M. A., Coffey, K. A., Pek, J., & Finkel, S. M. (2008). Open hearts build lives: Positive emotions, induced through loving-kindness meditation, build consequential personal resources. Journal of Personality and Social Psychology, 95, 1045-1062Freedman, S. R., & Enright, R. D. (1996). Forgiveness as an intervention goal with incest survivors. Journal of Consulting and Clinical Psychology, 64, 983–992. http://doi.org/10.1037/0022-006X.64.5.983Frieswijk, N., Steverink, N., Buunk, B. P., & Slaets, J. P. J. (2006). The effectiveness of a bibliotherapy in increasing the self-management ability of slightly to moderately frail 45older people. Patient Education and Counseling, 61, 219–227. http://doi.org/10.1016/j.pec.2005.03.011Froh, J. J., Sefick, W. J., & Emmons, R. A. (2008). Counting blessings in early adolescents: An experimental study of gratitude and subjective well-being. Journal of School Psychology,46, 213–233. http://doi.org/10.1016/j.jsp.2007.03.005Gander, F., Proyer, R. T., Ruch, W., & Wyss, T. (2012). Strength-based positive interventions: Further evidence for their potential in enhancing well-being and alleviating depression. Journal of Happiness Studies, 14, 1241–1259. http://doi.org/10.1007/s10902-012-9380-0Goldstein, E. D. (2007). Sacred moments: Implications on well-being and stress. Journal of Clinical Psychology, 63, 1001–1019. http://doi.org/10.1002/jclp.20402Grant, A. M. (2012). Making positive change: A randomized study comparing solution-focused vs. problem-focused coaching questions. Journal of Systemic Therapies, 31, 21–35. http://doi.org/10.1521/jsyt.2012.31.2.21Grant, A. M., Curtayne, L., & Burton, G. (2009). 
Executive coaching enhances goal attainment, resilience and workplace well-being: A randomised controlled study. The Journal of Positive Psychology, 4, 396–407. http://doi.org/10.1080/17439760902992456Green, L. S., Oades, L. G., & Grant, A. M. (2006). Cognitive-behavioral, solution-focused life coaching: Enhancing goal striving, well-being, and hope. The Journal of Positive Psychology, 1, 142–149. http://doi.org/10.1080/17439760600619849Grossman, P., Tiefenthaler-Gilmer, U., Raysz, A., & Kesper, U. (2007). Mindfulness training as an intervention for fibromyalgia: Evidence of postintervention and 3-year follow-up benefits in well-being. Psychotherapy and Psychosomatics, 76, 226–233. http://doi.org/10.1159/00010150146Hedges, L. V. (1989). Estimating the Normal Mean and Variance Under A Publication Selection Model. In L. J. Gleser, M. D. Perlman, S. J. Press, & A. R. Sampson (Eds.), Contributions to Probability and Statistics (pp. 447–458). Springer New York. Retrieved from http://link.springer.com/chapter/10.1007/978-1-4612-3678-8_31Higgins, J. P. T., & Thompson, S. G. (2002). Quantifying heterogeneity in a meta-analysis. Statistics in Medicine, 21, 1539–1558. http://doi.org/10.1002/sim.1186Higgins, J. P. T., Thompson, S. G., Deeks, J. J., & Altman, D. G. (2003). Measuring inconsistency in meta-analyses. BMJ, 327(7414), 557–560. http://doi.org/10.1136/bmj.Howell, R. T., Kern, M. L., & Lyubomirsky, S. (2007). Health benefits: Meta-analytically determining the impact of well-being on objective health outcomes. Health Psychology Review, 1, 83–136. http://doi.org/10.1080/17437190701492486Howell, A. J.,Passmore, H.-A., & Holder, M. D. (in press). Implicit theories of well-being predictwell-being and the endorsement of therapeutic lifestyle changes. Journal of Happiness Studies. doi:10.1007/s10902-015-9697-6Huedo-Medina, T. B., Sánchez-Meca, J., Marín-Martínez, F., & Botella, J. (2006). Assessing heterogeneity in meta-analysis: Q statistic or I2 index? Psychological Methods, 11, 193–206. http://doi.org/10.1037/1082-989X.11.2.193Hurley, D. B., & Kwon, P. (2012). Results of a study to increase savoring the moment: Differential impact on positive and negative outcomes. Journal of Happiness Studies, 13, 579–588. http://doi.org/10.1007/s10902-011-9280-8Irving, L. M., R, C., Cheavens, J., Gravel, L., Hanke, J., Hilberg, P., & Nelson, N. (2004). The relationships between hope and outcomes at the pretreatment, beginning, and later phasesof psychotherapy. Journal of Psychotherapy Integration, 14, 419–443. http://doi.org/10.1037/1053-0479.14.4.41947Keyes, C. L. M. (2005). Mental illness and/or mental health? Investigating axioms of the complete state model of health. Journal of Consulting and Clinical Psychology, 73, 539–548. http://doi.org/10.1037/0022-006X.73.3.539Keyes, C. L. M., Dhingra, S. S., & Simoes, E. J. (2010). Change in level of positive mental health as a predictor of future risk of mental illness. American Journal of Public Health, 100, 2366–2371. http://doi.org/10.2105/AJPH.2010.192245Keyes, C. L. M., & Grzywacz, J. G. (2005). Health as a complete state: The added value in work performance and healthcare costs. Journal of Occupational and Environmental Medicine / American College of Occupational and Environmental Medicine, 47, 523–532.Kim-Prieto, C., Diener, E., Tamir, M., Scollon, C., & Diener, M. (2005). Integrating the diverse definitions of happiness: A time-sequential framework of subjective well-being. Journal of Happiness Studies, 6, 261–300. 
http://doi.org/10.1007/s10902-005-7226-8King, L. A. (2001). The health benefits of writing about life goals. Personality and Social Psychology Bulletin, 27, 798–807. http://doi.org/10.1177/0146167201277003King, L. A., & Miner, K. N. (2000). Writing about the perceived benefits of traumatic events: Implications for physical health. Personality and Social Psychology Bulletin, 26, 220–230. http://doi.org/10.1177/0146167200264008Kremers, I., Steverink, N., Albersnagel, F., & Slaets, J. (2006). Improved self-management ability and well-being in older women after a short group intervention. Aging & Mental Health, 10, 476–484.Lakens, D. (2013). Calculating and reporting effect sizes to facilitate cumulative science: a practical primer for t-tests and ANOVAs. Frontiers in Psychology, 4, 863. http://doi.org/10.3389/fpsyg.48Lamers, S. M. A., Bolier, L., Westerhof, G. J., Smit, F., & Bohlmeijer, E. T. (2011). The impact of emotional well-being on long-term recovery and survival in physical illness: A meta-analysis. Journal of Behavioral Medicine, 35, 538–547. http://doi.org/10.1007/s10865-011-9379-8Layous, K., Nelson, S. K., & Lyubomirsky, S. (2013). What is the optimal way to deliver a positive activity intervention? The case of writing about one’s best possible selves. Journal of Happiness Studies, 14, 635–654. http://doi.org/10.1007/s10902-012-9346-2Liberati, A., Altman, D. G., Tetzlaff, J., Mulrow, C., Gøtzsche, P. C., Ioannidis, J. P. A., … Moher, D. (2009). The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: Explanation and elaboration. BMJ, 6(7), 1-28. http://doi.org/10.1136/bmj.b2700Lichter, S., Haye, K., & Kammann, R. (1980). Increasing happiness through cognitive retraining.New Zealand Psychologist, 9, 57–64.Light, R.J., & Pillemer, D.B. (1984). Summing up: The science of reviewing research. Cambridge, MA: Harvard University Press.Lin, W.-F., Mack, D., Enright, R. D., Krahn, D., & Baskin, T. W. (2004). Effects of forgiveness therapy on anger, mood, and vulnerability to substance use among inpatient substance-dependent clients. Journal of Consulting and Clinical Psychology, 72, 1114–1121. http://doi.org/10.1037/0022-006X.72.6.1114Low, C. A., Stanton, A. L., & Danoff-Burg, S. (2006). Expressive disclosure and benefit finding among breast cancer patients: Mechanisms for positive health effects. Health Psychology,25, 181–189. http://doi.org/10.1037/0278-6133.25.2.181Lutes, L. D., Wirtz, D. R., Chrusch, C., Kanippayoor J. M., Leitner, D., Heintzelman, S., … & Diener, E. (2016, May). ENHANCE: Enduring Happiness and Continued Self-49Enhancement. Poster session presented at the International Behavioral Trials Network Conference, Montreal, Quebec, Canada.Luthans, F., Avey, J. B., Avolio, B. J., & Peterson, S. J. (2010). The development and resulting performance impact of positive psychological capital. Human Resource Development Quarterly, 21, 41–67. http://doi.org/10.1002/hrdq.20034Luthans, F., Avey, J. B., & Patera, J. L. (2008). Experimental analysis of a web-based training intervention to develop positive psychological capital. Academy of Management Learning & Education, 7, 209–221. http://doi.org/10.5465/AMLE.2008.32712618Lyubomirsky, S., Dickerhoof, R., Boehm, J. K., & Sheldon, K. M. (2011). Becoming happier takes both a will and a proper way: an experimental longitudinal intervention to boost well-being. Emotion, 1, 391–402. http://doi.org/10.1037/a0022575Lyubomirsky, S., Sheldon, K. M., & Schkade, D. (2005). 
Pursuing happiness: The architecture ofsustainable change. Review of General Psychology, 9, 111–131. Lyubomirsky, S., Sousa, L., & Dickerhoof, R. (2006). The costs and benefits of writing, talking, and thinking about life’s triumphs and defeats. Journal of Personality and Social Psychology, 90, 692–708. http://doi.org/10.1037/0022-3514.90.4.692Lyubomirsky, S., Tkach, C., & Sheldon, K.M. (2004). [Pursuing sustained happiness through random acts of kindness and counting one’s blessings: Tests of two six-week interventions].Unpublished raw data. Retrieved from: Lyubomirsky, S., Sheldon, K. M., & Schkade, D. (2005). Pursuing happiness: The architecture of sustainable change. Review of General Psychology, 9, 111–131.Macleod, A. K., Coates, E., & Hetherton, J. (2008). Increasing well-being through teaching goal-setting and planning skills: results of a brief intervention. Journal of Happiness Studies, 9, 185–196.50Martínez-Martí, M. L., Avia, M. D., & Hernández-Lloreda, M. J. (2010). The effects of counting blessings on subjective well-being: a gratitude intervention in a Spanish sample. The Spanish Journal of Psychology, 13, 886–896.McMahan, E. A., & Estes, D. (2015). The effect of contact with natural environments on positiveand negative affect: A meta-analysis. The Journal of Positive Psychology, 10, 507–519.Mitchell, J., Stanimirovic, R., Klein, B., & Vella-Brodrick, D. (2009). A randomised controlled trial of a self-guided internet intervention promoting well-being. Computers in Human Behavior, 25, 749–760.Moher, D., Liberati, A., Tetzlaff, J., Altman, D. G., & Group, T. P. (2009). Preferred reporting items for systematic reviews and meta-analyses: The PRISMA Statement. PLOS Med, 6, e1000097. http://doi.org/10.1371/journal.pmed.1000097Mongrain, M., & Anselmo-Matthews, T. (2012). Do positive psychology exercises work? A replication of Seligman et al. (2005). Journal of Clinical Psychology, 68, 382-389. http://doi.org/10.1002/jclp.21839Mongrain, M., Chin, J. M., & Shapira, L. B. (2011). Practicing compassion increases happiness and self-esteem. Journal of Happiness Studies, 12, 963–981. http://doi.org/10.1007/s10902-010-9239-1Morris, S. B. (2008). Estimating Effect Sizes From the Pretest-Posttest-Control Group Designs. Organizational Research Methods. http://doi.org/10.1177/Mulrow, C. D. (1987). The medical review article: state of the science. Annals of Internal Medicine, 106, 485–488.Niemiec, R., Rashid, T., & Spinella, M. (2012). Strong mindfulness: Integrating mindfulness andcharacter strengths. Journal of Mental Health Counseling, 34, 240–253.51Nikrahan, G. R., Laferton, J. A. C., Asgari, K., Kalantari, M., Abedi, M. R., Etesampour, A., … Huffman, J. C. (2016). Effects of Positive Psychology Interventions on Risk Biomarkers in Coronary Patients: A Randomized, Wait-List Controlled Pilot Trial. Psychosomatics, 57, 359–368. http://doi.org/10.1016/j.psym.Orwin, R. G. (1983). A fail-safe N for effect size in meta-analysis. Journal of Educational and Behavioral Statistics, 8, 157–159. http://doi.org/10.3102/10769986008002157Otake, K., Shimai, S., Tanaka-matsumi, J., Otsui, K., & Fredrickson, B. L. (2006). Happy people become happier through kindness: A counting kindnesses intervention. Journal of Happiness Studies, 7, 361–375. http://dx.doi.org/10.1007/s10902-005-3650-zPage, K. M., & Vella-Brodrick, D. A. (2013). The working for wellness program: RCT of an employee well-being intervention. Journal of Happiness Studies, 14, 1007–1031. http://doi.org/10.1007/s10902-012-9366-yParks, A. 
C., & Biswas-Diener, R. (2013). Positive interventions: Past, present, and future. In T. Kashdan & J. Ciarrochi (Eds.), Mindfulness, Acceptance, and Positive Psychology: The Seven Foundations of Well-Being, (pp. 140-165). Oakland, CA: Context PressPassmore, H.-A., & Howell, A. J. (2014). Nature involvement increases hedonic and eudaimonic well-being: A two-week experimental study. Ecopsychology, 6, 148-154Peters, M. L., Flink, I. K., Boersma, K., & Linton, S. J. (2010). Manipulating optimism: Can imagining a best possible self be used to increase positive future expectancies? The Journal of Positive Psychology, 5, 204–211. http://doi.org/10.1080/17439761003790963Peters, J. L., Sutton, A. J., Jones, D. R., Abrams, K. R., & Rushton, L. (2007). Performance of the trim and fill method in the presence of publication bias and between-study heterogeneity. Statistics in Medicine, 26, 4544–4562. http://doi.org/10.1002/sim.52Peterson, C., Park, N., & Seligman, M. E. P. (2005). Assessment of character strengths. In G. P. Koocher, J. C. Norcross, & S. S. Hill III (Eds.), Psychologists’ desk reference (2nd ed., pp. 93–98). New York, NY: Oxford University Press.Pretorius, C., Venter, C., Temane, M., & Wissing, M. (2008). The design and evaluation of a hope enhancement programme for adults. Journal of Psychology in Africa, 18, 301–308. http://doi.org/10.1080/14330237.2008.10820202Quoidbach, J., Wood, A. M., & Hansenne, M. (2009). Back to the future: The effect of daily practice of mental time travel into the future on happiness and anxiety. The Journal of Positive Psychology, 4, 349–355. http://doi.org/10.1080/17439760902992365R Core Team (2015). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. URL https://www.R-project.org/.Radloff, L. S. (1977). The CES-D Scale: A self-report depression scale for research in the generalpopulation. Applied Psychological Measurement, 1, 385–401. http://doi.org/10.1177/014662167700100306Rashid, T., Anjum, A., & Lennox, C. (2006). [Positive psychotherapy for middle school children.] Unpublished manuscript. Retrieved from: J. R. Z. Abela & B. L. Hankin (Eds.),Handbook of Depression in children and adolescents: Causes, treatment and prevention (pp. 250–287). New York, NY: Guilford Press.Re, A. D. (2013). compute.es: Compute Effect Sizes. R package version 0.2-2. URL http://cran.r-project.org/web/packages/compute.esRe, A. D., & Hoyt, W. T. (2012). MAc: Meta-Analysis with Correlations. R package version 1.1. http://CRAN.R-project.org/package=MAc53Reed, G. L., & Enright, R. D. (2006). The effects of forgiveness therapy on depression, anxiety, and posttraumatic stress for women after spousal emotional abuse. Journal of Consulting and Clinical Psychology, 74, 920–929. http://doi.org/10.1037/0022-006X.74.5.920Ripley, J. S., & Worthington, E. L. (2002). Hope-focused and forgiveness-based group interventions to promote marital enrichment. Journal of Counseling and Development , 80, 452–463.Rosenthal, R. (1979). The ‘‘file drawer problem’’ and tolerance for null results. Psychological Bulletin, 86, 638–641.Rosenthal, R. (1991). Meta-analysis: A review. Psychosomatic Medicine, 53, 247–271.Rosenthal, R. (1995). Progress in Clinical Psychology: Is There Any? Clinical Psychology: Scence and Practice, 2, 133–150. http://doi.org/10.1111/j.1468-2850.1995.tb00035.xRosenthal, R., & Rosnow, R.L. (2008). Essentials of behavioral research: Methods and data analysis (3rd ed.). 
New York, NY: McGraw-Hill.Rücker, G., Schwarzer, G., Carpenter, J. R., Binder, H., & Schumacher, M. (2011). Treatment-effect estimates adjusted for small-study effects via a limit meta-analysis. Biostatistics, 12, 122–142.Ruini, C., Belaise, C., Brombin, C., Caffo, E., & Fava, G. A. (2006). Well-being therapy in school settings: a pilot study. Psychotherapy and Psychosomatics, 75, 331–336. http://doi.org/10.1159/000095438Rustøen, T., Wiklund, I., Hanestad, B. R., & Moum, T. (1998). Nursing intervention to increase hope and quality of life in newly diagnosed cancer patients. Cancer Nursing, 21, 235–245.Ryff, C. D. (1989). Happiness is everything, or is it? Explorations on the meaning of psychological well-being. Journal of Personality and Social Psychology, 57, 1069-1081.54Sacks, H. S., Berrier, J., Reitman, D., Ancona-Berk, V. A., & Chalmers, T. C. (1987). Meta-analyses of randomized controlled trials. The New England Journal of Medicine, 316, 450–455. http://doi.org/10.1056/Sacks, H. S., Reitman, D., Pagano, D., & Kupelnick, B. (1996). Meta-analysis: an update. The Mount Sinai Journal of Medicine, New York, 63, 216–224.Schueller, S., Kashdan, T., & Parks, A. (2014). Synthesizing positive psychological interventions: Suggestions for conducting and interpreting meta-analyses. International Journal of Wellbeing, 4, 91-98. Retrieved from http://www.internationaljournalofwellbeing.org/index.php/ijow/article/view/310Schueller, S. M., & Parks, A. C. (2012). Disseminating self-help: positive psychology exercises in an online trial. Journal Of Medical Internet Research, 14, e63–e63. http://doi.org/10.2196/jmir.1850Schmidt, F. L., & Hunter, J. E. (2014). Methods of Meta-Analysis: Correcting Error and Bias in Research Findings. SAGE Publications.Schwarzer, G., Carpenter, J., & Rücker, G. (2014). Metasens: Advanced statistical methods to model and adjust for bias in meta-analysis (Version 0.3-0). Retrieved from https://cran.r-project.org/Schwarzer, G. (2015). Meta: General package for meta-analysis (Version 4.4-0). Retrieved from https://cran.r-project.org/Seligman, M. E. P. (2004, September). [Positive interventions: More evidence of effectiveness]. Unpublished data. Retrieved from: Retrieved from: J. R. Z. Abela & B. L. Hankin (Eds.), Handbook of Depression in children and adolescents: Causes, treatment and prevention (pp. 250–287). New York, NY: Guilford Press.55Seligman, M. E. P. (2011). Flourish: A Visionary New Understanding of Happiness and Well-being. New York, NY: Simon and Schuster.Seligman, M. E. P., & Csikszentmihalyi, M. (Eds.). (2000). Positive psychology. American Psychologist, 55, 5-14. Seligman, M. E. P., Rashid, T., & Parks, A. C. (2006). Positive psychotherapy. American Psychologist, 61, 774–788. http://doi.org/10.1037/0003-066X.61.8.774Seligman, M. E. P., Steen, T. A., Park, N., & Peterson, C. (2005). Positive psychology progress: empirical validation of interventions. American Psychologist, 60, 410–421. http://doi.org/10.1037/0003-066X.60.5.410Sergeant, S., & Mongrain, M. (2011). Are positive psychology exercises helpful for people with depressive personality styles? The Journal of Positive Psychology, 6, 260–272. http://doi.org/10.1080/17439760.2011.577089Shapira, L. B., & Mongrain, M. (2010). The benefits of self-compassion and optimism exercises for individuals vulnerable to depression. The Journal of Positive Psychology, 5, 377–389.http://doi.org/10.1080/17439760.2010.516763Sheldon, K. M., Kasser, T., Smith, K., & Share, T. (2002). 
Personal goals and psychological growth: Testing an intervention to enhance goal attainment and personality integration. Journal of Personality, 70, 5–31. http://doi.org/10.1111/1467-6494.00176Sheldon, K. M., & Lyubomirsky, S. (2006). How to increase and sustain positive emotion: The effects of expressing gratitude and visualizing best possible selves. The Journal of Positive Psychology, 1, 73–82. http://doi.org/10.1080/17439760500510676Sin, N. L., & Lyubomirsky, S. (2009). Enhancing well-being and alleviating depressive symptoms with positive psychology interventions: a practice-friendly meta-analysis. Journal of Clinical Psychology, 65, 467–487. http://doi.org/10.1002/jclp.2059356Smith, W. P., Compton, W. C., & West, W. B. (1995). Meditation as an adjunct to a happiness enhancement program. Journal of Clinical Psychology, 51, 269–273.Spence, G. B., & Grant, A. M. (2007). Professional and peer life coaching and the enhancement of goal striving and well-being: An exploratory study. The Journal of Positive Psychology, 2, 185–194. http://doi.org/10.1080/17439760701228896Stanley, T. D., & Doucouliagos, H. (2014). Meta-regression approximations to reduce publication selection bias. Research Synthesis Methods, 5, 60–78. http://doi.org/10.1002/jrsm.Sterne, J.A., Egger, M. & Smith, G. D. (2001). Systematic reviews in healthcare: investigating and dealing with publication and other biases in meta-analysis. British Medical Journal, 323, 101–105.Surawy, C., Roberts, J., & Silver, A. (2005). The effect of mindfulness training on mood and measures of fatigue, activity, and quality of life in patients with chronic fatigue syndromeon a hospital waiting list: A series of exploratory studies. Behavioural and Cognitive Psychotherapy, 33, 103–109. http://doi.org/10.1017/S135246580400181XTerrin, N., Schmid, C. H., Lau, J., & Olkin, I. (2003). Adjusting for publication bias in the presence of heterogeneity. Statistics in Medicine, 22, 2113–2126. http://doi.org/10.1002/sim.Tkach, C. T. (2006). Unlocking the treasury of human kindness: Enduring improvements in mood, happiness, and self -evaluations (Order No. 3204031). Available from ProQuest Dissertations & Theses Global. (305002749). Retrieved from http://ezproxy.library.ubc.ca/login?url=http://search.proquest.com/docview/305002749?accountid=1465657Trump, M. R. M. (1997). The impact of hopeful narratives on state hope, state self-esteem, and state positive and negative affect for adult female survivors of incest. Available from ProQuest Dissertations & Theses Global. Retrieved fromhttp://search.proquest.com.ezproxy.library.ubc.ca/docview/304361445/abstract/FF39780DB2264ED2PQ/1Tugade, M. M., & Fredrickson, B. L. (2004). Resilient individuals use positive emotions to bounce back from negative emotional experiences. Journal of Personality and Social Psychology, 86, 320–333. http://doi.org/10.1037/0022-3514.86.2.320Veenhoven, R. (2007). Healthy happiness: Effects of happiness on physical health and the consequences for preventive health care. Journal of Happiness Studies, 9, 449–469. http://doi.org/10.1007/s10902-006-9042-1Viechtbauer, W. (2010). Conducting meta-analyses in R with the metaphor package. Journal of Statistical Software, 36, 1-48. http://www.jstatsoft.org/v36/i03/Watkins, P. C., Woodward, K., Stone, T., & Kolts, R. (2003). Gratitude and happiness: Development of a measure of gratitude and relationships with subjective well-being. Social Behavior and Personality, 31, 431–452.http://doi.org/10.2224/sbp..Wampold, B. E., Mondin, G. 
W., Moody, M., Stich, E, Benson, K., & Hyun-nie, A. (1997). A meta-analysis of outcome studies comparing bona fide psychotherapies: Empirically, "all must have prizes." Psychological Bulletin, 122, 203-215.Weis, R., & Speridakos, E. C. (2011). A meta-analysis of hope enhancement strategies in clinical and community settings. Psychology of Well-Being: Theory, Research and Practice, 1, 1–16. http://doi.org/10.1186/2211-1522-1-5WHO | World Health Organization. (2015). About WHO. Retrieved from http://www.who.int/about/en/58Wing, J. F., Schutte, N. S., & Byrne, B. (2006). The effect of positive writing on emotional intelligence and life satisfaction. Journal of Clinical Psychology, 62, 1291–1302. http://doi.org/10.1002/jclp.20292Wood, A. M., & Joseph, S. (2010). The absence of positive psychological (eudemonic) well-being as a risk factor for depression: A ten year cohort study. Journal of Affective Disorders, 122, 213–217. http://doi.org/10.1016/j.jad.2009.06.032Zautra, A. J., Davis, M. C., Reich, J. W., Nicassario, P., Tennen, H., Finan, P., … Irwin, M. R. (2008). Comparison of cognitive behavioral and mindfulness meditation interventions on adaptation to rheumatoid arthritis for patients with and without history of recurrent depression. Journal of Consulting and Clinical Psychology, 76, 408–421. http://doi.org/10.1037/0022-006X.76.3.408Ziv, N., Chaim, A. B., & Itamar, O. (2011). The effect of positive music and dispositional hope on state hope and affect. Psychology of Music, 39, 3–17. http://doi.org/10.1177/030573560935192059TablesTable 1. Coded characteristics and coding descriptions.Characteristics Coding descriptionsStudy ID Id assigned to each studyName of 1st author First author last nameYear of publication Year of publicationExperiment number 1 = either the first study of that article or the only study of that article, 2 = second study of that article, 3 = third study, etc. Title letters First letters of the first four words in title (e.g., “Positive psychology intervention effects on …” = ppi) Design 2g-pre-post2g-post3g-pre-post3g-post, etc. Number of control groups nNumber of intervention groups nSame control group If no, leave bank. If yes, code each of those row’s with the same number but a different number than other articles. Intervention description Brief name of the interventionIntervention focus 1=all negative, 2=mostly negative, 3=half/half,4=mostly positive, 5=all positive Intervention format Indi = individualgrp =groupmix = mixIntervention administration psycho = psychologistpsychi = psychiatristra = research assistantgstud = grad studenttrinstr = trained instructorphd = someone with a phd but that’s all you knowschteachr = a teacher/professor/lecturer of a classother  = other 60Table 1. Coded characteristics and coding descriptions.Characteristics Coding descriptionsNumber of intervention sessions nIntervention session duration minutesTotal intervention duration weeks (in weeks, if less than week divide by 7)Intervention setting lab = laboratory settingclini = clinical (practitioners office, hospital, clinic), online  = purely onlinehome = home (sent home with packages) Time from pre to post in weeks Control group type nothing = no treatment/task waitlistneutral taskcomp = comparable treatment (cbt, dbt, mindfulness)alt = alternative treatmentplacebo = given some instruction but no intervention is implementedneg = negative taskBest control group (i.e., a neutral task, vs. a task that is ‘negative’, vs. 
a waitlist/do nothing')  1=yes 0.8=equal to another 0.5=not the best Best control group comments Brief comment about the chosen control group Random assignment to groups 0=no, 1=yes Measures of well-being List using acronyms, separate with ; Measures of depression List using acronyms, separate with ; Measures other (outcome only) List using acronyms, separate with ; Follow up 0=no, 1=yes Maximum follow up Duration (in weeks) of the maximum follow up time from the post-measures (i.e., if a study has 1 follow up 2 months after post-measures, then it would be coded as 8. If another study had 2 follow ups, 2 months and 3 months after post measures, then it would be coded as 12) Sample origin (where did the sample come from) ustud = university students grstud = grade school students com = community clinic = clinic, hospital, etc. other = other Table 1. 
Coded characteristics and coding descriptions.Characteristics Coding descriptionsSample recruitment self = self-referred, voluntary sign upreferred = referred by others (clinic)creq = class requirement (as an actual requirement of course materials, NOT as optional credits, such as SONA)mixed = mix of aboveother = other ns = not specifiedSample size for experimental group nSample size for control group nMean age for experimental group meanMean age for control group meanSample description Verbal brief descriptionMean age for all groups Mean age for the total sample:1=<18, 2=18-29, 3=30-39, 4=40-49, 5=50-59, 6=60-69, 7=70-79, 8=80+Sample depression status 0    = not depressed0.5 = some depressed 1    = all depressedOR 0 to 1 actual proportion depressedSample on depression medication (%) %Sample on psychotherapy (%) %Sample clinical status normal, psydiag, physdiag,chrodiag, braindiagDoes the article report relevant means 0=no, 1=yesDoes the article report relevant mean differences 0=no, 1=yesDoes the article report relevant SD’s and/or SE’s0=no, 1=yesDoes the article report correlations of outcome variables at each time point 0=no, 1=yesDoes the article report relevant F statistics 0=no, 1=yesDoes the article report relevant F p values 0=no, 1=yesDoes the article report relevant t statistics 0=no, 1=yes63Table 1. Coded characteristics and coding descriptions.Characteristics Coding descriptionsDoes the article report relevant t p values 0=no, 1=yesDoes the article report relevant effect sizes 0=no, 1=yesDoes the article report test-retest data on any of the relevant outcome measures0=no, 1=yesDoes the article report internal consistency data on any of the relevant outcome measures0=no, 1=yes64Table 2. Effect sizes determined by the current study, for each well-being measure and each study included in Sin and Lyubomirsky (2009) well-being meta-analysis.ID Study Available  Data Measureª Nt Nc N.total r1001 Bedard.2003.1 prepost-msds SF-36-MH 10 3 13 0.691002 Burton.2004.1 post-msds PA-NS 48 42 90 0.541003 Cheavens.2006.1 prepost-msds TSHS 16 16 32 0.171003 Cheavens.2006.1 prepost-msds PIL 16 16 32 0.011004 Cook.1998.1 prepost-ancovaF LSI-A 18 18 36 0.351005 Davis.2004.1 post-msds LSI-Z 7 7 14 0.41008 Emmons.2003.1 post-msds PA-NS  65 67 132 0.11009 Emmons.2003.3 post-anovaF PA-NS 33 32 65 0.271009 Emmons.2003.3 post-anovaF global life appraisals 33 32 65 0.421009 Emmons.2003.3 post-anovaF connection with others 33 32 65 0.391009 Emmons.2003.3 post-tpvalue PANAS-P-observer 26 26 52 0.261009 Emmons.2003.3 post-tpvalue SWLS-observer 26 26 52 0.321010 Fava.1998.1 prepost-msds PWB-AU 10 10 20 0.121010 Fava.1998.1 prepost-msds PWB-EM 10 10 20 0.21010 Fava.1998.1 prepost-msds PWB-PG 10 10 20 0.221010 Fava.1998.1 prepost-msds PWB-PR 10 10 20 0.221010 Fava.1998.1 prepost-msds PWB-PL 10 10 20 0.011010 Fava.1998.1 prepost-msds PWB-SA 10 10 20 0.181010 Fava.1998.1 prepost-msds SQ-RLX 10 10 20 0.241010 Fava.1998.1 prepost-msds SQ-CON 10 10 20 0.171010 Fava.1998.1 prepost-msds SQ-PHS 10 10 20 -0.171010 Fava.1998.1 prepost-msds SQ-FRN 10 10 20 0.541011 Fava.2005.1 prepost-msds PWB-AU 8 8 16 0.511011 Fava.2005.1 prepost-msds PWB-EM 8 8 16 0.5465Table 2. 
Effect sizes determined by the current study, for each well-being measure and each study included in Sin and Lyubomirsky (2009) well-being meta-analysis.ID Study Available  Data Measureª Nt Nc N.total r1011 Fava.2005.1 prepost-msds PWB-PG 8 8 16 0.631011 Fava.2005.1 prepost-msds PWB-PR 8 8 16 0.41011 Fava.2005.1 prepost-msds PWB-PL 8 8 16 0.621011 Fava.2005.1 prepost-msds PWB-SA 8 8 16 0.581011 Fava.2005.1 prepost-msds SQ-RLX 8 8 16 -0.331011 Fava.2005.1 prepost-msds SQ-CON 8 8 16 -0.231011 Fava.2005.1 prepost-msds SQ-PHS 8 8 16 -0.121011 Fava.2005.1 prepost-msds SQ-FRN 8 8 16 -0.21012 Fordyce.1977.1 post-msds HM - scale 48 60 108 0.21012 Fordyce.1977.1 post-msds HM - scale 44 60 104 0.31012 Fordyce.1977.1 post-msds HM - scale 50 60 110 0.341013 Fordyce.1977.2 post-msds HM - scale 39 29 68 0.431013 Fordyce.1977.2 post-msds HM - scale 39 29 68 0.371015 Fordyce.1983.4 post-msds SDL-AH 64 39 103 0.181015 Fordyce.1983.4 post-msds SDL-P 64 39 103 0.181015 Fordyce.1983.4 post-msds SDL-AV 64 39 103 0.191015 Fordyce.1983.4 post-msds SDL-LS 64 39 103 0.161015 Fordyce.1983.4 post-msds SDL-TS 64 39 103 0.231015 Fordyce.1983.4 post-msds HM - scale 64 39 103 0.151016 Fordyce.1983.6 prepost-msds HM - scale 14 13 27 0.021016 Fordyce.1983.6 prepost-msds HM - scale 10 13 23 0.041016 Fordyce.1983.6 prepost-msds HM - scale 12 13 25 0.081016 Fordyce.1983.6 prepost-msds HM - scale 8 13 21 0.151017 Freedman.1996.1 prepost-msds HS 6 6 12 0.7266Table 2. Effect sizes determined by the current study, for each well-being measure and each study included in Sin and Lyubomirsky (2009) well-being meta-analysis.ID Study Available  Data Measureª Nt Nc N.total r1018 Froh.2008.1 post-msds GS (lately) 76 65 141 -0.081018 Froh.2008.1 post-msds GS (next week) 76 65 141 0.081018 Froh.2008.1 post-msds BMSLSS 76 65 141 0.131018 Froh.2008.1 post-msds BMSLSS 76 65 141 0.061020 Green.2006.1 prepost-msds SWLS 23 25 48 0.451020 Green.2006.1 prepost-msds PANAS-P 25 25 50 0.391020 Green.2006.1 prepost-msds HTS-C 25 24 49 0.181020 Green.2006.1 prepost-msds PWB-PG 25 25 50 0.131020 Green.2006.1 prepost-msds PWB-EM 25 25 50 0.341020 Green.2006.1 prepost-msds PWB-AU 25 25 50 0.031020 Green.2006.1 prepost-msds PWB-PR 25 25 50 0.351020 Green.2006.1 prepost-msds PWB-PL 25 25 50 0.51020 Green.2006.1 prepost-msds PWB-SA 25 25 50 0.381021 Grossman.2007.1 prepost-msds QOL-PA 39 13 52 0.331022 King.2000.1 post-msds D&E-P 32 23 55 0.061023 King.2001.1 post-msds D&E-NP 19 16 35 -0.041023 King.2001.1 post-msds D&E-NP 22 16 38 0.251024 Kremers.2006.1 prepost-msds SPFILS 46 73 119 0.131025 Lichter.1980.1 prepost-msds PHAHB 10 13 23 0.381025 Lichter.1980.1 prepost-msds HAP-AFFECT 10 13 23 0.221025 Lichter.1980.1 prepost-msds DS-S 10 13 23 0.41026 Lichter.1980.2 prepost-msds HAP-AFFECT 25 23 48 0.191026 Lichter.1980.2 prepost-msds DS-S 25 23 48 0.291027 Low.2006.1 post-msds PMS-P 20 16 36 0.0967Table 2. 
1030  Lyubomirsky.2011.1  prepost-difmsds  UPL+PL+SWLS+SHS  107  101  208  0.08
1030  Lyubomirsky.2011.1  prepost-difmsds  UPL+PL+SWLS+SHS  111  101  212  0.03
1031  MacLeod.2008.1  prepost-msds  PANAS-P  29  35  64  0.27
1031  MacLeod.2008.1  prepost-msds  SWLS  29  35  64  0.14
1032  MacLeod.2008.2  prepost-msds  PANAS-P  9  11  20  0.42
1032  MacLeod.2008.2  prepost-msds  SWLS  9  11  20  0.03
1033  Otake.2006.2  prepost-difmsds  JSHS  71  48  119  0.25
1034  Rashid.2006.1  post-cohend  PPTI-C  11  11  22  0.41
1035  Reed.2006.1  prepost-msds  PWB-EM  10  10  20  0.66
1036  Ruini.2006.1  prepost-msds  PWB-AU  57  54  111  -0.07
1036  Ruini.2006.1  prepost-msds  PWB-EM  57  54  111  0.04
1036  Ruini.2006.1  prepost-msds  PWB-PG  57  54  111  -0.13
1036  Ruini.2006.1  prepost-msds  PWB-PR  57  54  111  -0.12
1036  Ruini.2006.1  prepost-msds  PWB-PL  57  54  111  -0.21
1036  Ruini.2006.1  prepost-msds  PWB-SA  57  54  111  -0.17
1036  Ruini.2006.1  prepost-msds  SQ-RLX  57  54  111  0.19
1036  Ruini.2006.1  prepost-msds  SQ-CON  57  54  111  0.07
1036  Ruini.2006.1  prepost-msds  SQ-PHS  57  54  111  0.15
1036  Ruini.2006.1  prepost-msds  SQ-FRN  57  54  111  -0.05
1037  Seligman.2004.1  post-cohend  SWLS  102  83  185  0.16
1038  Seligman.2005.1  prepost-msds  SHI  80  70  150  NA
1038  Seligman.2005.1  prepost-msds  SHI  59  70  129  NA
1038  Seligman.2005.1  prepost-msds  SHI  68  70  138  NA
1038  Seligman.2005.1  prepost-msds  SHI  66  70  136  NA
1038  Seligman.2005.1  prepost-msds  SHI  68  70  138  NA
1039  Seligman.2006.1  prepost-msds  SWLS  14  20  34  -0.01
1040  Seligman.2006.2  prepost-msds  SWLS  11  9  20  0.23
1040  Seligman.2006.2  prepost-msds  PPTI  11  9  20  0.4
1042  Sheldon.2006.1  prepost-msds  PANAS-P  21  23  44  -0.08
1042  Sheldon.2006.1  prepost-msds  PANAS-P  23  23  46  0.3
1043  Smith.1995.1  prepost-difmsds  HM  17  12  29  0.38
1043  Smith.1995.1  prepost-difmsds  PHI  17  12  29  0.55
1043  Smith.1995.1  prepost-difmsds  HM  7  12  19  0.48
1043  Smith.1995.1  prepost-difmsds  PHI  7  12  19  0.58
1044  Spence.2007.1  prepost-msds  SWLS  20  17  37  0.38
1044  Spence.2007.1  prepost-msds  B-PA  20  17  37  0.16
1044  Spence.2007.1  prepost-msds  PWB-AU  20  17  37  0.4
1044  Spence.2007.1  prepost-msds  PWB-EM  20  17  37  0.13
1044  Spence.2007.1  prepost-msds  PWB-PR  20  17  37  0.07
1044  Spence.2007.1  prepost-msds  PWB-PL  20  17  37  0.35
1044  Spence.2007.1  prepost-msds  PWB-PG  20  17  37  0.35
1044  Spence.2007.1  prepost-msds  PWB-SA  20  17  37  0.28
1044  Spence.2007.1  prepost-msds  SWLS  20  17  37  0.38
1044  Spence.2007.1  prepost-msds  B-PA  20  17  37  0.25
1044  Spence.2007.1  prepost-msds  PWB-AU  20  17  37  0.28
1044  Spence.2007.1  prepost-msds  PWB-EM  20  17  37  0.14
1044  Spence.2007.1  prepost-msds  PWB-PR  20  17  37  0.12
1044  Spence.2007.1  prepost-msds  PWB-PL  20  17  37  0.43
1044  Spence.2007.1  prepost-msds  PWB-PG  20  17  37  0.3
1044  Spence.2007.1  prepost-msds  PWB-SA  20  17  37  0.33
1045  Tkach.2005.1  prepost-msds  SHS  10  47  57  -0.13
1045  Tkach.2005.1  prepost-msds  FBR-PA  10  47  57  -0.31
1045  Tkach.2005.1  prepost-msds  SWLS  10  47  57  -0.35
1045  Tkach.2005.1  prepost-msds  PWB-SA  10  47  57  -0.19
1045  Tkach.2005.1  prepost-msds  PWB-PR  10  47  57  -0.25
1045  Tkach.2005.1  prepost-msds  SHS  13  47  60  -0.11
1045  Tkach.2005.1  prepost-msds  FBR-PA  13  47  60  0.04
1045  Tkach.2005.1  prepost-msds  SWLS  13  47  60  0.02
1045  Tkach.2005.1  prepost-msds  PWB-SA  13  47  60  0.15
1045  Tkach.2005.1  prepost-msds  PWB-PR  13  47  60  0
1045  Tkach.2005.1  prepost-msds  SHS  36  47  83  0.15
1045  Tkach.2005.1  prepost-msds  FBR-PA  36  47  83  0.05
1045  Tkach.2005.1  prepost-msds  SWLS  36  47  83  -0.05
1045  Tkach.2005.1  prepost-msds  PWB-SA  36  47  83  0.03
1045  Tkach.2005.1  prepost-msds  PWB-PR  36  47  83  -0.03
1045  Tkach.2005.1  prepost-msds  SHS  34  47  81  0.03
1045  Tkach.2005.1  prepost-msds  FBR-PA  34  47  81  0.07
1045  Tkach.2005.1  prepost-msds  SWLS  34  47  81  0.09
1045  Tkach.2005.1  prepost-msds  PWB-SA  34  47  81  0.17
1045  Tkach.2005.1  prepost-msds  PWB-PR  34  47  81  0.14
1045  Tkach.2005.1  prepost-msds  SHS  48  47  95  -0.04
1045  Tkach.2005.1  prepost-msds  FBR-PA  48  47  95  0.05
1045  Tkach.2005.1  prepost-msds  SWLS  48  47  95  -0.14
1045  Tkach.2005.1  prepost-msds  PWB-SA  48  47  95  -0.13
1045  Tkach.2005.1  prepost-msds  PWB-PR  48  47  95  -0.04
1045  Tkach.2005.1  prepost-msds  SHS  50  47  97  0.1
1045  Tkach.2005.1  prepost-msds  FBR-PA  50  47  97  0.07
1045  Tkach.2005.1  prepost-msds  SWLS  50  47  97  0.03
1045  Tkach.2005.1  prepost-msds  PWB-SA  50  47  97  0.14
1045  Tkach.2005.1  prepost-msds  PWB-PR  50  47  97  0.16
1047  Wing.2006.1  prepost-msds  SWLS  58  55  113  -0.11
1047  Wing.2006.1  prepost-msds  SWLS  62  55  117  -0.05
1048  Zautra.2008.1a  prepost-msds  PANAS-P  41  30  71  0.15
1049  Zautra.2008.1b  prepost-msds  PANAS-P  6  14  20  0.09
Note. Nt = treatment sample size; Nc = control sample size; prepost-msds = pre and post means and standard deviations; prepost-ancovaF = ANCOVA F statistic from pre and post data; prepost-difmsds = pre and post mean differences and standard deviations; post-msds = means and standard deviations from post data only; post-anovaF = ANOVA F statistic from post data only; post-tpvalue = t statistic and p value from post data only; post-cohend = Cohen's d from post data only.
ªB-PA = Bradburn - Positive Affect; BMSLSS = Brief Multidimensional Student Life Satisfaction Scale - school experience; DS-S = Domain Satisfactions (Sum); D&E-P = Diener & Emmons Positive Affect; D&E-NP = Diener & Emmons Net Positive Mood; FBR-PA = Feldman-Garret & Russells - Positive Affect; GS (lately) = Global Satisfaction - 'past few weeks'; GS (next week) = Global Satisfaction - 'next week'; HAP-AFFECT = Happiness - Affectometer 1; HM = Happiness Measure - 'in general' scale; HS = Hope Scale; HTS-C = Hope Trait Scale composite; JSHS = Japanese Subjective Happiness Scale; LSI-A = Life Satisfaction Index A; LSI-Z = Life Satisfaction Index Z; PANAS-P = Positive and Negative Affect Schedule – Positive; PANAS-P-observer = Positive and Negative Affect Schedule - Positive – Observer; PA-NS = Positive Affect, not specified; PHAHB = Pro-Happy and Anti-Happy Beliefs; PHI = Psychap Inventory; SHS = Subjective Happiness Scale; PIL = Purpose In Life; UPL+PL+SWLS+SHS = unpleasant affect, pleasant affect, SWLS, and SHS combined; PMS-P = Profile of Mood States - positive mood; PPTI = Positive Psychotherapy Inventory; PPTI-C = Positive Psychotherapy Inventory - Children's Version; PWB = Ryff's Psychological Well Being; PWB-AU = Ryff's Psychological Well Being – Autonomy; PWB-EM = Ryff's Scale of Psychological Well-Being - Environmental mastery; PWB-PG = Ryff's Scale of Psychological Well-Being - Personal growth; PWB-PR = Ryff's Scale of Psychological Well-Being - Positive relations; PWB-PL = Ryff's Scale of Psychological Well-Being - Purpose in life; PWB-SA = Ryff's Scale of Psychological Well-Being – Self-acceptance; QOL-PA = Quality of Life - positive affect; SDL-AH = Self Description Inventory - achieved happiness; SDL-AV = Self Description Inventory - attitudes and values; SDL-LS = Self Description Inventory - life style; SDL-P = Self Description Inventory – personality; SDL-TS = Self Description Inventory - total score; SF-36-MH = Health Survey (SF-36) Mental Health; SHI = Steen Happiness Index; SPFILS = Social Production Function Index Level Scale; SQ-CON = Kellner's Symptom Questionnaire – Contentment; SQ-FRN = Kellner's Symptom Questionnaire – Friendliness; SQ-PHS = Kellner's Symptom Questionnaire - Physical well-being; SQ-RLX = Kellner's Symptom Questionnaire – Relaxation; SWLS = Satisfaction with Life Scale; SWLS-observer = Satisfaction with Life Scale – observer; TSHS = The State Hope Scale.
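
The "Available Data" codes above indicate which reported statistics each r in Tables 2 through 7 was derived from. As a rough illustration of what such conversions involve, the sketch below applies standard textbook formulas for the post-test-only cases (post-msds, post-anovaF, post-tpvalue, post-cohend). The function names and the example numbers are hypothetical, and the thesis's own calculations, particularly for pre-post designs, may have differed.

# Illustrative sketch only (not the thesis's own code): generic formulas for
# converting the statistics named in the "Available Data" column into r.
import math
from scipy import stats

def r_from_d(d, nt, nc):
    # post-cohend: Cohen's d to r, allowing unequal group sizes
    a = (nt + nc) ** 2 / (nt * nc)
    return d / math.sqrt(d ** 2 + a)

def r_from_post_means(mt, sdt, nt, mc, sdc, nc):
    # post-msds: post-test means and SDs to Cohen's d, then to r
    sd_pooled = math.sqrt(((nt - 1) * sdt ** 2 + (nc - 1) * sdc ** 2) / (nt + nc - 2))
    return r_from_d((mt - mc) / sd_pooled, nt, nc)

def r_from_anova_F(F, df_error):
    # post-anovaF: one-way ANOVA F with 1 numerator df to r
    return math.sqrt(F / (F + df_error))

def r_from_t_p(p_two_tailed, df):
    # post-tpvalue: recover t from a two-tailed p value, then convert to r
    t = stats.t.ppf(1 - p_two_tailed / 2, df)
    return math.sqrt(t ** 2 / (t ** 2 + df))

# Hypothetical example: treatment M = 3.6 (SD = 1.0, n = 20) vs.
# control M = 3.2 (SD = 1.1, n = 17)
print(round(r_from_post_means(3.6, 1.0, 20, 3.2, 1.1, 17), 2))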
Table 3. Effect sizes determined by the current study, for each depression measure and each study included in the Sin and Lyubomirsky (2009) depression meta-analysis.

ID  Study  Available Data  Measureª  Nt  Nc  N.total  r
1001  Bedard.2003.1  prepost-msds  BDI-II  10  3  13  0.24
1003  Cheavens.2006.1  prepost-msds  CES-D  16  16  32  0.23
1005  Davis.2004.1  post-msds  SZD  7  7  14  0.81
1010  Fava.1998.1  prepost-msds  CID-DEP  10  10  20  0.53
1010  Fava.1998.1  prepost-msds  SQ-DEP  10  10  20  0.04
1011  Fava.2005.1  prepost-msds  CID-DEP  8  8  16  0.28
1011  Fava.2005.1  prepost-msds  SQ-DEP  8  8  16  0.22
1015  Fordyce.1983.4  post-msds  DAC  64  39  103  0.14
1016  Fordyce.1983.6  prepost-msds  DAC  14  13  27  0.05
1016  Fordyce.1983.6  prepost-msds  DAC  10  13  23  0
1016  Fordyce.1983.6  prepost-msds  DAC  12  13  25  0.18
1016  Fordyce.1983.6  prepost-msds  DAC  8  13  21  0.26
1017  Freedman.1996.1  prepost-msds  BDI  6  6  12  0.52
1021  Grossman.2007.1  prepost-msds  HADS-D  39  13  52  0.21
1026  Lichter.1980.2  prepost-msds  BDI  25  23  48  0.2
1050  Lin.2004.1  prepost-msds  BDI-II  14  14  28  0.66
1035  Reed.2006.1  prepost-msds  BDI-II  10  10  20  0.61
1036  Ruini.2006.1  prepost-msds  SQ-DEP  57  54  111  -0.12
1037  Seligman.2004.1  post-cohend  CES-D  102  83  185  -0.15
1038  Seligman.2005.1  prepost-msds  CES-D  80  70  150  0.16
1038  Seligman.2005.1  prepost-msds  CES-D  59  70  129  0.1
1038  Seligman.2005.1  prepost-msds  CES-D  68  70  138  0.1
1038  Seligman.2005.1  prepost-msds  CES-D  66  70  136  0.07
1038  Seligman.2005.1  prepost-msds  CES-D  68  70  138  0.03
1039  Seligman.2006.1  prepost-msds  BDI-II  14  20  34  0.22
1040  Seligman.2006.2  prepost-msds  ZSRS  11  9  20  0.47
1040  Seligman.2006.2  post-msds  HRSD  11  9  20  0.59
1043  Smith.1995.1  prepost-difmsds  BDI  17  12  29  0.39
1043  Smith.1995.1  prepost-difmsds  BDI  7  12  19  0.6
1051  Surawy.2005.1  prepost-msds  HAD-D  9  8  17  0.19
1048  Zautra.2008.1a  prepost-msds  DEPS-NS  41  30  71  -0.03
1049  Zautra.2008.1b  prepost-msds  DEPS-NS  6  14  20  0.31
Note. Nt = treatment sample size; Nc = control sample size; prepost-msds = pre and post means and standard deviations; prepost-difmsds = pre and post mean differences and standard deviations; post-msds = means and standard deviations from post data only; post-cohend = Cohen's d from post data only.
ªBDI = Beck Depression Inventory; CES-D = Center for Epidemiologic Studies Depression Scale; CID-DEP = Clinical Interview for Depression; DAC = Depression Adjective Checklist; DEPS-NS = Depressive Symptoms - not specified; HADS-D = Hospital Anxiety & Depression Scale – Depression; HRSD = Hamilton Rating Scale for Depression; SQ-DEP = Kellner's Symptom Questionnaire – Depression; SZD = Zung Scale for Depression; ZSRS = Zung Self-Rating Scale for Depression.
Table 4. Effect sizes determined by the current study, for each subjective well-being measure and each study included in the Bolier et al. (2013) subjective well-being meta-analysis.

ID  Study  Available Data  Measureª  Nt  Nc  N.total  r
2003  Buchanan.2010.1  post-msds  SWLS  28  28  56  0.34
2004  Burton.2004.1  post-msds  PA-NS  48  42  90  0.54
2006  Emmons.2003.1  post-msds  PA-NS  65  67  132  0.1
2007  Emmons.2003.3  post-anovaF  PA-NS  33  32  65  0.27
2007  Emmons.2003.3  post-anovaF  global life appraisals  33  32  65  0.42
2007  Emmons.2003.3  post-anovaF  expectations - upcoming week  33  32  65  0.28
2007  Emmons.2003.3  post-tpvalue  PANAS-P-observer  26  26  52  0.26
2011  Friesqijk.2006.1  prepost-msds  SPF-IL  79  86  165  0.13
2014  Grant.2009.1  prepost-msds  WWBI  20  21  41  0.16
2015  Grant.2012.1  prepost-msds  PANAS-P  117  108  225  0.19
2016  Green.2006.1  prepost-msds  SWLS  23  25  48  0.45
2016  Green.2006.1  prepost-msds  PANAS-P  25  25  50  0.39
2017  Hurley.2012.1  prepost-msds  PANAS-X-P  94  99  193  0.07
2018  King.2001.1  post-msds  D&E-NP  19  16  35  -0.04
2018  King.2001.1  post-msds  D&E-NP  22  16  38  0.25
2019  Kremers.2006.1  prepost-msds  SPFILS  46  73  119  0.13
2020  Layous.2013.1  prepost-difmsds  AAS-P  80  37  117  0.13
2021  Lichter.1980.2  prepost-msds  HAP-AFFECT  25  23  48  0.19
2024  Lyubomirsky.2006.2  post-msds  SWLS  24  36  60  -0.14
2024  Lyubomirsky.2006.2  post-msds  PANAS-P  24  36  60  -0.04
2024  Lyubomirsky.2006.2  post-msds  SWLS  25  36  61  -0.32
2024  Lyubomirsky.2006.2  post-msds  PANAS-P  25  36  61  -0.03
2024  Lyubomirsky.2006.2  post-msds  SWLS  26  36  62  0.12
2024  Lyubomirsky.2006.2  post-msds  PANAS-P  26  36  62  -0.1
2025  Lyubomirsky.2011.1  prepost-difmsds  UPL+PL+SWLS+SHS  107  101  208  0.08
2025  Lyubomirsky.2011.1  prepost-difmsds  UPL+PL+SWLS+SHS  111  101  212  0.03
2026  Martinez-Marti.2010.1  prepost-msds  PA-NS  41  34  75  0.15
2027  Mitchell.2009.1  prepost-msds  PWI-A  17  23  40  0.09
2027  Mitchell.2009.1  prepost-msds  SWLS  17  23  40  -0.06
2027  Mitchell.2009.1  prepost-msds  PANAS-P  17  23  40  0.05
2028  Page.2013.1  prepost-msds  SWLS + PANAS-P – PANAS-N  23  14  37  0.16
2028  Page.2013.1  prepost-msds  AWB  23  14  37  0.57
2029  Peters.2010.1  prepost-msds  PANAS-Short-P  44  38  82  0.49
2033  Seligman.2006.1  prepost-msds  SWLS  14  20  34  -0.01
2034  Seligman.2006.2  prepost-msds  SWLS  11  9  20  0.23
2035  Shapira.2010.1  prepost-msds  SHI  63  70  133  0.01
2035  Shapira.2010.1  prepost-msds  SHI  55  70  125  0.11
2040  Sheldon.2006.1  prepost-msds  PANAS-P  21  23  44  -0.08
2040  Sheldon.2006.1  prepost-msds  PANAS-P  23  23  46  0.3
2041  Spence.2007.1  prepost-msds  SWLS  20  17  37  0.38
2041  Spence.2007.1  prepost-msds  B-PA  20  17  37  0.16
2041  Spence.2007.1  prepost-msds  SWLS  20  17  37  0.38
2041  Spence.2007.1  prepost-msds  B-PA  20  17  37  0.25
2042  Wing.2006.1  prepost-msds  SWLS  58  55  113  -0.11
2042  Wing.2006.1  prepost-msds  SWLS  62  55  117  -0.05
Note. Nt = treatment sample size; Nc = control sample size; prepost-msds = pre and post means and standard deviations; prepost-difmsds = pre and post mean differences and standard deviations; post-msds = means and standard deviations from post data only; post-anovaF = ANOVA F statistic from post data only; post-tpvalue = t statistic and p value from post data only.
ªPANAS-X-P = Positive and Negative Affect Schedule - Expanded - Positive Affect; AWB = The Affective Well-Being; B-PA = Bradburn - Positive Affect; D&E-NP = Diener & Emmons Net Positive Mood; Global SWB = Global Appraisals of Subjective Well-Being; Global SWB-O = Global Appraisals of Subjective Well-Being by observer; HAP-AFFECT = Happiness - Affectometer 1; PANAS-P = Positive and Negative Affect Schedule – Positive; PANAS-P-observer = Positive and Negative Affect Schedule - Positive – Observer; PANAS-Short-P = Positive and Negative Affect Schedule - Short - Positive Affect; PWI-A = Personal Well-Being Index; SHI = Steen Happiness Index; SPFILS = Social Production Function Index Level Scale; AAS-P = Affect-Adjective Scale - positive affect; SPF-IL = Subjective Well-being; SWLS = Satisfaction with Life Scale; PA-NS = Positive Affect, not specified; UPL+PL+SWLS+SHS = unpleasant affect, pleasant affect, SWLS, and SHS combined; WWBI = Workplace Well-being Index.

Table 5. Effect sizes determined by the current study, for each psychological well-being measure and each study included in the Bolier et al. (2013) psychological well-being meta-analysis.

ID  Study  Available Data  Measureª  Nt  Nc  N.total  r
2001  Abbott.2009.1  prepost-msds  AHI  26  27  53  -0.02
2005  Cheavens.2006.1  prepost-msds  TSHS  16  16  32  0.17
2007  Emmons.2003.3  post-anovaF  connection with others  33  32  65  0.39
2008  Fava.1998.1  prepost-msds  PWB-AU  10  10  20  0.12
2008  Fava.1998.1  prepost-msds  PWB-EM  10  10  20  0.20
2008  Fava.1998.1  prepost-msds  PWB-PG  10  10  20  0.22
2008  Fava.1998.1  prepost-msds  PWB-PR  10  10  20  0.22
2008  Fava.1998.1  prepost-msds  PWB-PL  10  10  20  0.01
2008  Fava.1998.1  prepost-msds  PWB-SA  10  10  20  0.18
2009  Fava.2005.1  prepost-msds  PWB-AU  8  8  16  0.51
2009  Fava.2005.1  prepost-msds  PWB-EM  8  8  16  0.54
2009  Fava.2005.1  prepost-msds  PWB-PG  8  8  16  0.63
2009  Fava.2005.1  prepost-msds  PWB-PR  8  8  16  0.40
2009  Fava.2005.1  prepost-msds  PWB-PL  8  8  16  0.62
2009  Fava.2005.1  prepost-msds  PWB-SA  8  8  16  0.58
2010  Feldman.2012.1  prepost-msds  GSHS-A  32  32  64  -0.01
2010  Feldman.2012.1  prepost-msds  GSHS-P  32  32  64  0.13
2010  Feldman.2012.1  prepost-msds  PIL  32  32  64  0.04
2011  Friesqijk.2006.1  prepost-msds  MAS  79  86  165  0.06
2012  Gander.2013.1  prepost-msds  AHI  61  63  124  0.05
2012  Gander.2013.1  prepost-msds  AHI  87  63  150  0.03
2012  Gander.2013.1  prepost-msds  AHI  73  63  136  0.05
2012  Gander.2013.1  prepost-msds  AHI  64  63  127  0.12
2012  Gander.2013.1  prepost-msds  AHI  60  63  123  0.15
2012  Gander.2013.1  prepost-msds  AHI  55  63  118  -0.01
2012  Gander.2013.1  prepost-msds  AHI  62  63  125  0.06
2012  Gander.2013.1  prepost-msds  AHI  55  63  118  -0.01
2012  Gander.2013.1  prepost-msds  AHI  42  63  105  0.03
2016  Green.2006.1  prepost-msds  HTS-C  25  24  49  0.18
2016  Green.2006.1  prepost-msds  PWB-PG  25  25  50  0.13
2016  Green.2006.1  prepost-msds  PWB-EM  25  25  50  0.34
2016  Green.2006.1  prepost-msds  PWB-AU  25  25  50  0.03
2016  Green.2006.1  prepost-msds  PWB-PR  25  25  50  0.35
2016  Green.2006.1  prepost-msds  PWB-PL  25  25  50  0.50
2020  Layous.2013.1  prepost-difmsds  NS-NS  81  38  119  0.06
2022  Luthans.2008.1  prepost-msds  PCQ  187  177  364  0.05
2023  Luthans.2010.1  prepost-ancovaF  PCQ  153  89  242  0.21
2027  Mitchell.2009.1  prepost-msds  OTH-P  14  23  37  0.26
2027  Mitchell.2009.1  prepost-msds  OTH-E  17  23  40  0.18
2027  Mitchell.2009.1  prepost-msds  OTH-M  17  23  40  -0.02
2035  Mongrain.2011.1  prepost-msds  SHI  237  237  474  0.02
2035  Mongrain.2012.1  prepost-msds  SHI  87  81  168  -0.01
2035  Mongrain.2012.1  prepost-msds  SHI  102  81  183  0.02
2035  Mongrain.2012.1  prepost-msds  SHI  74  81  155  0.06
2028  Page.2013.1  prepost-msds  PWB  23  14  37  0.11
2034  Seligman.2006.2  prepost-msds  PPTI  11  9  20  0.40
2041  Spence.2007.1  prepost-msds  PWB-AU  20  17  37  0.40
2041  Spence.2007.1  prepost-msds  PWB-EM  20  17  37  0.13
2041  Spence.2007.1  prepost-msds  PWB-PR  20  17  37  0.07
2041  Spence.2007.1  prepost-msds  PWB-PL  20  17  37  0.35
2041  Spence.2007.1  prepost-msds  PWB-PG  20  17  37  0.35
2041  Spence.2007.1  prepost-msds  PWB-SA  20  17  37  0.28
2041  Spence.2007.1  prepost-msds  PWB-AU  20  17  37  0.28
2041  Spence.2007.1  prepost-msds  PWB-EM  20  17  37  0.14
2041  Spence.2007.1  prepost-msds  PWB-PR  20  17  37  0.12
2041  Spence.2007.1  prepost-msds  PWB-PL  20  17  37  0.43
2041  Spence.2007.1  prepost-msds  PWB-PG  20  17  37  0.30
2041  Spence.2007.1  prepost-msds  PWB-SA  20  17  37  0.33
Note. Nt = treatment sample size; Nc = control sample size; prepost-msds = pre and post means and standard deviations; prepost-nomsnosds = pre and post means and standard deviations not reported; prepost-difmsds = pre and post mean differences and standard deviations; prepost-ancovaF = ANCOVA F statistic from pre and post data; post-anovaF = ANOVA F statistic from post data only.
ªAHI = Authentic Happiness Inventory; GSHS-A = Goal-Specific Hope Scale – agency; GSHS-P = Goal-Specific Hope Scale – pathways; HTS-C = Hope Trait Scale composite; NS-NS = Need Satisfaction - not specified; OTH-E = Orientations to Happiness – engagement; OTH-M = Orientations to Happiness – meaning; OTH-P = Orientations to Happiness – pleasure; PCQ = Psychological Capital Questionnaire; PIL = Purpose in Life Test; MAS = Mastery Scale; PPTI = Positive Psychotherapy Inventory; PWB = Ryff's Psychological Well Being; PWB-AU = Ryff's Psychological Well Being – Autonomy; PWB-EM = Ryff's Scale of Psychological Well-Being - Environmental mastery; PWB-PG = Ryff's Scale of Psychological Well-Being - Personal growth; PWB-PL = Ryff's Scale of Psychological Well-Being - Purpose in life; PWB-PR = Ryff's Scale of Psychological Well-Being - Positive relations; PWB-SA = Ryff's Scale of Psychological Well-Being – Self-acceptance; SHI = Steen Happiness Index; TSHS = The State Hope Scale.
Table 6. Effect sizes determined by the current study, for each depression measure and each study included in the Bolier et al. (2013) depression meta-analysis.

ID  Study  Available Data  Measureª  Nt  Nc  N.total  r
2001  Abbott.2009.1  prepost-msds  DASS-D  26  27  53  -0.10
2005  Cheavens.2006.1  prepost-msds  CES-D  16  16  32  0.23
2008  Fava.1998.1  prepost-msds  CID-DEP  10  10  20  0.53
2008  Fava.1998.1  prepost-msds  SQ-DEP  10  10  20  0.04
2009  Fava.2005.1  prepost-msds  CID-DEP  8  8  16  0.28
2009  Fava.2005.1  prepost-msds  SQ-DEP  8  8  16  0.22
2012  Gander.2013.1  prepost-msds  CES-D  61  63  124  -0.10
2012  Gander.2013.1  prepost-msds  CES-D  87  63  150  -0.01
2012  Gander.2013.1  prepost-msds  CES-D  73  63  136  -0.06
2012  Gander.2013.1  prepost-msds  CES-D  64  63  127  0.00
2012  Gander.2013.1  prepost-msds  CES-D  60  63  123  0.10
2012  Gander.2013.1  prepost-msds  CES-D  55  63  118  -0.06
2012  Gander.2013.1  prepost-msds  CES-D  62  63  125  -0.03
2012  Gander.2013.1  prepost-msds  CES-D  55  63  118  -0.10
2012  Gander.2013.1  prepost-msds  CES-D  42  63  105  0.04
2014  Grant.2009.1  prepost-msds  DASS-D  20  21  41  0.24
2017  Hurley.2012.1  prepost-msds  BDI-II  94  99  193  0.20
2021  Lichter.1980.2  prepost-msds  BDI  25  23  48  0.20
2027  Mitchell.2009.1  prepost-msds  DASS-D  17  23  40  -0.08
2035  Mongrain.2011.1  prepost-msds  CES-D  237  237  474  0.15
2035  Mongrain.2012.1  prepost-msds  CES-D  90  84  174  0.06
2035  Mongrain.2012.1  prepost-msds  CES-D  106  84  190  0.12
2035  Mongrain.2012.1  prepost-msds  CES-D  75  84  159  0.14
2031  Schueller.2012.1  prepost-msds  CES-D  326  355  681  0.16
2031  Schueller.2012.1  prepost-msds  CES-D  364  355  719  0.18
2031  Schueller.2012.1  prepost-msds  CES-D  319  355  674  0.05
2032  Seligman.2005.1  prepost-msds  CES-D  80  70  150  0.16
2032  Seligman.2005.1  prepost-msds  CES-D  59  70  129  0.10
2032  Seligman.2005.1  prepost-msds  CES-D  68  70  138  0.10
2032  Seligman.2005.1  prepost-msds  CES-D  66  70  136  0.07
2032  Seligman.2005.1  prepost-msds  CES-D  68  70  138  0.03
2033  Seligman.2006.1  prepost-msds  BDI-II  14  20  34  0.22
2034  Seligman.2006.2  prepost-msds  ZSRS  11  9  20  0.47
2034  Seligman.2006.2  post-msds  HRSD  11  9  20  0.59
2035  Sergeant.2011.1  prepost-nomsnosds  CES-D  NA  NA  NA  NA
2035  Shapira.2010.1  prepost-msds  CES-D  63  70  133  0.06
2035  Shapira.2010.1  prepost-msds  CES-D  55  70  125  0.17
Note. Nt = treatment sample size; Nc = control sample size; prepost-msds = pre and post means and standard deviations; prepost-nomsnosds = pre and post means and standard deviations not reported; post-msds = post means and standard deviations only.
ªBDI = Beck Depression Inventory; CES-D = Center for Epidemiologic Studies Depression Scale; CID-DEP = Clinical Interview for Depression; DASS-D = Depression Anxiety Stress Scale – Depression; HRSD = Hamilton Rating Scale for Depression; SQ-DEP = Kellner's Symptom Questionnaire – Depression; ZSRS = Zung Self-Rating Scale for Depression.
Table 7. Effect sizes determined by the current study, for each life satisfaction measure and each study included in the Weis and Speridakos (2011) life satisfaction meta-analysis.

ID  Study  Available Data  Measureª  Nt  Nc  N.total  r
3001  Buchanan.2007.1  prepost-msds  MSLSS  8  12  20  -0.07
3002  Cheavens.2006.1  prepost-msds  ISE  16  16  32  0.02
3004  Duggleby.2007.1  prepost-msds  MQOL  28  30  58  0.22
3005  Irving.2004.1  prepost-none  SWB  18  27  45  NA
3006  Pretorius.2008.1  prepost-msds  SWLS  8  8  16  -0.10
3007  Ripley.2002.1  prepost-msds  DAS  30  28  58  -0.09
3007  Ripley.2002.1  prepost-msds  CARE  30  28  58  0.03
3007  Ripley.2002.1  prepost-msds  DAS  30  28  58  0.37
3007  Ripley.2002.1  prepost-msds  CARE  30  28  58  0.38
3008  Rustoen.1998.1  prepost-msds  QLI-Global  30  40  70  0.05
3009  Trump.1997.1  prepost-msds  SSES  22  20  42  0.23
3009  Trump.1997.1  prepost-msds  PANAS-P  22  20  42  0.40
3010  Ziv.2011.1  post-msds  PANAS-P  30  30  60  0.03
Note. Nt = treatment sample size; Nc = control sample size; prepost-msds = pre and post means and standard deviations; post-msds = post means and standard deviations only; prepost-none = nothing available for pre and post; - = not available at all.
ªCARE = Couples Assessment of Relationship Elements; DAS = Dyadic Adjustment Scale; ISE = Index of Self-Esteem; MSLSS = Multidimensional Students Life Satisfaction; MQOL = McGill Quality of Life Questionnaire; PANAS-P = Positive and Negative Affect Schedule – Positive; QLI-Global = Ferrans and Powers Quality of Life; SSES = State Self-Esteem Scale; SWB = Subjective Well-Being; SWLS = Satisfaction with Life Scale.

Table 8. Summary of reanalyses of the previous meta-analyses.

k  r (C.I.)  RE (C.I.)  FAT (p)  TF (#)  LMT (C.I.)
Sin & Lyubomirsky (2009)
Well-being  49  .29 (.21, .37)  .24 (.18, .30)  <.001  .13 (16)  .08 (.00, .15)
Depression  25  .31 (.17, .43)  .25 (.14, .34)  <.005  .10 (8)  .04 (-.05, .13)
Bolier et al. (2013)
Subjective Well-being  28  .17  .17 (.11, .22)  .299  .17 (0)  .13 (.02, .24)
Psychological Well-being  20  .09  .09 (.04, .14)  .015  .06 (5)  .02 (-.04, .08)
Psychological Well-being (w/o outliers)  .08  .06 (.03, .10)  .048  .05 (5)  .01 (-.05, .08)
Depression  14  .11  .10 (.03, .16)  .019  .07 (5)  .02 (-.04, .07)
Depression (w/o outliers)  12  .09  .07 (.02, .12)  .152  .06 (4)  .03 (-.03, .09)
Weis & Speridakos (2011)
Life Satisfaction  10  .08  .07 (-.01, .16)  .151  .08 (2)  .17 (-.07, .39)
Note. FAT (p) = funnel plot asymmetry test p value; LMT = Limit Meta-analysis effect size estimate; RE = random effects model estimate; TF (#) = Trim-and-Fill effect size estimate with number of imputed studies in parentheses; boldface = significant findings.

Table 9. Summary of replications of the previous meta-analyses.

k  RE (C.I.)  FAT (p)  TF (#)  LMT (C.I.)
Sin & Lyubomirsky (2009)
Well-being  40  .23 (.17, .30)  <.005  .14 (12)  .10 (-.01, .20)
Depression  21  .26 (.14, .38)  <.001  .06 (10)  .03 (-.17, .11)
Bolier et al. (2013)
Subjective Well-being  24  .19 (.11, .26)  .166  .10 (6)  .12 (-.01, .25)
Psychological Well-being  16  .16 (.08, .24)  .028  .10 (5)  .03 (-.08, .15)
Depression  13  .14 (.07, .22)  .623  .12 (2)  .09 (-.01, .18)
Depression (w/o outliers)  8  .17 (.12, .23)  -  .17 (4)  .14 (.04, .23)
Weis & Speridakos (2011)
Life Satisfaction  8  .13 (.02, .23)  -  .15 (2)  .31 (-.12, .64)
Note. FAT (p) = funnel plot asymmetry test p value; LMT = Limit Meta-analysis effect size estimate; RE = random effects model estimate; TF (#) = Trim-and-Fill effect size estimate with number of imputed studies in parentheses; boldface = significant findings.
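
Tables 8 through 10 report random-effects (RE) pooled estimates with confidence intervals, together with Q and I² heterogeneity statistics. As a hedged illustration of how such pooled correlations can be obtained, the sketch below implements one common estimator (DerSimonian-Laird) on Fisher-z-transformed correlations; it is not the thesis's analysis script, and the input values at the bottom are invented.

# Minimal sketch (an assumption, not the thesis's own code) of DerSimonian-Laird
# random-effects pooling of correlations via Fisher's z, with Q and I-squared.
import numpy as np
from scipy import stats

def random_effects_r(rs, ns, alpha=0.05):
    rs = np.asarray(rs, dtype=float)
    ns = np.asarray(ns, dtype=float)
    z = np.arctanh(rs)                     # Fisher z transform of each r
    v = 1.0 / (ns - 3.0)                   # within-study variance of z
    w = 1.0 / v                            # inverse-variance (fixed-effect) weights
    z_fe = np.sum(w * z) / np.sum(w)
    Q = np.sum(w * (z - z_fe) ** 2)        # heterogeneity statistic
    df = len(z) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - df) / c)          # DL estimate of between-study variance
    w_re = 1.0 / (v + tau2)                # random-effects weights
    z_re = np.sum(w_re * z) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    crit = stats.norm.ppf(1 - alpha / 2)
    lo, hi = np.tanh(z_re - crit * se), np.tanh(z_re + crit * se)
    i2 = 100.0 * max(0.0, (Q - df) / Q) if Q > 0 else 0.0
    return np.tanh(z_re), (lo, hi), Q, df, i2

# Hypothetical per-study correlations and total sample sizes
print(random_effects_r([0.29, 0.10, 0.41, 0.05], [50, 120, 22, 200]))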
Table 10. Summary of meta-analyses using all studies in the previous meta-analyses.

k  RE (C.I.)  FAT (p)  TF (#)  LMT (C.I.)  Res. Q (df)  I² (C.I.)
Well-being  62  .19 (.15, .24)  <.005  .13 (14)  .10 (.03, .17)  110.64 (60)  52.5% (36.5, 64.5)
Depression  27  .19 (.10, .27)  .030  .09 (8)  -.03 (-.11, .05)  63.19 (25)  66% (49.1, 77.3)
Depression (w/o outlier)  23  .11 (.04, .18)  .357  .08 (4)  .00 (-.08, .08)  40.25 (21)  47.6% (14.7, 67.7)
SWLS  13  .10 (-.01, .22)  .759  .10 (0)  .09 (-.12, .28)  27.67 (11)  57% (20.2, 76.8)
Well-being in controlled settings  29  .23 (.16, .31)  .133  .15 (8)  .13 (0, .23)  56.17 (27)  54.2% (30.3, 69.9)
Well-being at home and online  15  .17 (.09, .24)  .207  .09 (4)  .11 (-.02, .24)  29.90 (13)  58.9% (27.4, 76.8)
Note. RE = random effects model estimate; FAT (p) = funnel plot asymmetry test p value; TF (#) = Trim-and-Fill effect size estimate with number of imputed studies in parentheses; LMT = Limit Meta-analysis effect size estimate; Res. Q (df) = Q statistic (test of residual heterogeneity) and degrees of freedom after conducting limit meta-analysis; I² = percentage of total between-study variability found among the effect sizes after conducting limit meta-analysis; boldface = significant findings.

Figures

Figure 1. Funnel plot of well-being effect sizes from Sin and Lyubomirsky (2009).
Figure 2. Trim-and-fill plot of well-being effect sizes from Sin and Lyubomirsky (2009).
Figure 3. Forest plot of well-being effect sizes from Sin and Lyubomirsky (2009).
Figure 4. Cumulative meta-analysis of well-being effect sizes from Sin and Lyubomirsky (2009).
Figure 5. Limit meta-analysis of well-being effect sizes from Sin and Lyubomirsky (2009).
Figure 6. Reanalysis of Sin and Lyubomirsky (2009) well-being effect sizes: Forest plot.
Figure 7. Reanalysis of Sin and Lyubomirsky (2009) well-being effect sizes. Top left panel shows the distribution of study sizes. Top right panel shows the relationship between effect size and study size. Bottom left panel shows the funnel plot. Bottom right panel shows the results of the limit meta-analysis.
Figure 8. Scatterplot of well-being effect sizes determined in the current replication vs. Sin and Lyubomirsky (2009) effect sizes.
Figure 9. Replication of Sin and Lyubomirsky (2009) meta-analysis for well-being: Forest plot.
Figure 10. Replication of Sin and Lyubomirsky (2009) meta-analysis for well-being. Top left panel shows the distribution of study sizes. Top right panel shows the relationship between effect size and study size. Bottom left panel shows the funnel plot. Bottom right panel shows the results of the limit meta-analysis.
Figure 11. Reanalysis of Sin and Lyubomirsky (2009) depression effect sizes: Forest plot.
Figure 12. Reanalysis of Sin and Lyubomirsky (2009) depression effect sizes. Top left panel shows the distribution of study sizes. Top right panel shows the relationship between effect size and study size. Bottom left panel shows the funnel plot. Bottom right panel shows the results of the limit meta-analysis.
Figure 13. Scatterplot of depression effect sizes determined in the current replication vs. Sin and Lyubomirsky (2009) effect sizes.
Figure 14. Replication of Sin and Lyubomirsky (2009) depression effect sizes: Forest plot.
Figure 15. Replication of Sin and Lyubomirsky (2009) meta-analysis for depression. Top left panel shows the distribution of study sizes. Top right panel shows the relationship between effect size and study size. Bottom left panel shows the funnel plot. Bottom right panel shows the results of the limit meta-analysis.
Figure 16. Reanalysis of Bolier et al. (2013) SWB effect sizes: Forest plot.
Figure 17. Reanalysis of Bolier et al. (2013) SWB effect sizes. Top left panel shows the distribution of study sizes. Top right panel shows the relationship between effect size and study size. Bottom left panel shows the funnel plot. Bottom right panel shows the results of the limit meta-analysis.
Figure 18. Scatterplot of SWB effect sizes determined in the current replication vs. Bolier et al. (2013) effect sizes.
Figure 19. Replication of Bolier et al. (2013) meta-analysis for SWB: Forest plot.
Figure 20. Replication of Bolier et al. (2013) meta-analysis for SWB. Top left panel shows the distribution of study sizes. Top right panel shows the relationship between effect size and study size. Bottom left panel shows the funnel plot. Bottom right panel shows the results of the limit meta-analysis.
Figure 21. Reanalysis of Bolier et al. (2013) PWB effect sizes: Forest plot.
Figure 22. Reanalysis of Bolier et al. (2013) PWB effect sizes. Top left panel shows the distribution of study sizes. Top right panel shows the relationship between effect size and study size. Bottom left panel shows the funnel plot. Bottom right panel shows the results of the limit meta-analysis.
Figure 23. Scatterplot of PWB effect sizes determined in the current replication vs. Bolier et al. (2013) effect sizes.
Figure 24. Replication of Bolier et al. (2013) meta-analysis for PWB: Forest plot.
Figure 25. Replication of Bolier et al. (2013) meta-analysis for PWB. Top left panel shows the distribution of study sizes. Top right panel shows the relationship between effect size and study size. Bottom left panel shows the funnel plot. Bottom right panel shows the results of the limit meta-analysis.
Figure 26. Reanalysis of Bolier et al. (2013) depression effect sizes: Forest plot.
Figure 27. Reanalysis of Bolier et al. (2013) depression effect sizes. Top left panel shows the distribution of study sizes. Top right panel shows the relationship between effect size and study size. Bottom left panel shows the funnel plot. Bottom right panel shows the results of the limit meta-analysis.
Figure 28. Scatterplot of depression effect sizes determined in the current replication vs. Bolier et al. (2013) effect sizes.
Figure 29. Replication of Bolier et al. (2013) meta-analysis for depression: Forest plot.
Figure 30. Replication of Bolier et al. (2013) meta-analysis for depression. Top left panel shows the distribution of study sizes. Top right panel shows the relationship between effect size and study size. Bottom left panel shows the funnel plot. Bottom right panel shows the results of the limit meta-analysis.
Figure 31. Reanalysis of Weis and Speridakos (2011) life satisfaction effect sizes: Forest plot.
Figure 32. Reanalysis of Weis and Speridakos (2011) life satisfaction effect sizes. Top left panel shows the distribution of study sizes. Top right panel shows the relationship between effect size and study size. Bottom left panel shows the funnel plot. Bottom right panel shows the results of the limit meta-analysis.
Figure 33. Scatterplot of life satisfaction effect sizes determined in the current replication vs. Weis and Speridakos (2011) effect sizes.
Figure 34. Replication of Weis and Speridakos (2011) meta-analysis for life satisfaction: Forest plot.
Figure 35. Replication of Weis and Speridakos (2011) meta-analysis for life satisfaction. Top left panel shows the distribution of study sizes. Top right panel shows the relationship between effect size and study size. Bottom left panel shows the funnel plot. Bottom right panel shows the results of the limit meta-analysis.
Figure 36. Replication of all previous meta-analyses for well-being, combined (Bolier et al., 2013; Sin & Lyubomirsky, 2009; Weis & Speridakos, 2011): Forest plot.
Figure 37. Replication of all previous meta-analyses for well-being, combined (Bolier et al., 2013; Sin & Lyubomirsky, 2009; Weis & Speridakos, 2011). Top left panel shows the distribution of study sizes. Top right panel shows the relationship between effect size and study size. Bottom left panel shows the funnel plot. Bottom right panel shows the results of the limit meta-analysis.
Figure 38. Replication of all previous meta-analyses for depression (Bolier et al., 2013; Sin & Lyubomirsky, 2009): Forest plot.
Figure 39. Replication of all previous meta-analyses for depression (Bolier et al., 2013; Sin & Lyubomirsky, 2009). Top left panel shows the distribution of study sizes. Top right panel shows the relationship between effect size and study size. Bottom left panel shows the funnel plot. Bottom right panel shows the results of the limit meta-analysis.
Figure 40. Well-being effect sizes that were calculated from SWLS only: Forest plot.
Figure 41. Well-being effect sizes that were calculated from SWLS only. Top left panel shows the distribution of study sizes. Top right panel shows the relationship between effect size and study size. Bottom left panel shows the funnel plot. Bottom right panel shows the results of the limit meta-analysis.
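
Several of the figures above are funnel plots, and the FAT (p) columns in Tables 8 through 10 summarize tests of funnel-plot asymmetry, one marker of the small sample size bias examined in this thesis. As an illustration only, the sketch below shows one widely used test of this kind, an Egger-type regression of standardized effect on precision; it is an assumption that a test of this family underlies the reported p values, and trim-and-fill and limit meta-analysis require additional routines not shown here. The input values are hypothetical.

# Hedged sketch of an Egger-type regression test for funnel-plot asymmetry.
import numpy as np
from scipy import stats

def egger_asymmetry_test(rs, ns):
    rs = np.asarray(rs, dtype=float)
    ns = np.asarray(ns, dtype=float)
    z = np.arctanh(rs)                    # Fisher z effect sizes
    se = np.sqrt(1.0 / (ns - 3.0))        # their standard errors
    x = 1.0 / se                          # precision
    y = z / se                            # standardized effect
    fit = stats.linregress(x, y)
    # The asymmetry test is on the intercept, so compute its standard error
    # from the usual simple-regression formula and test it against zero.
    k = len(z)
    resid = y - (fit.slope * x + fit.intercept)
    s2 = np.sum(resid ** 2) / (k - 2)
    se_int = np.sqrt(s2 * (1.0 / k + np.mean(x) ** 2 / np.sum((x - np.mean(x)) ** 2)))
    t = fit.intercept / se_int
    p = 2.0 * stats.t.sf(abs(t), k - 2)
    return fit.intercept, p

# Hypothetical per-study correlations and total sample sizes
print(egger_asymmetry_test([0.69, 0.54, 0.17, 0.01, 0.35, 0.10], [13, 90, 32, 32, 36, 132]))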
