
The Effect of Survey Design on Response Rates, Costs, and Sampling Representativeness in the British Columbia Health Survey: A Randomized Experiment

By

Yimeng Guo

B.Sc. (Hons.), McGill University, 2012

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF

MASTER OF SCIENCE

in

THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES
(Population and Public Health)

THE UNIVERSITY OF BRITISH COLUMBIA
(Vancouver)

September 2014

© Yimeng Guo, 2014

Abstract

Background: Population-based surveys are an essential surveillance tool applicable to various settings, including the collection of information on community health and public living standards. In recent decades, there have been numerous reports of decreasing response rates in population-based data collection. There is a need to redesign surveys in a way that is both more appealing to participants and maximizes response rates.

Objectives: The current study explored the effects of several survey design features on participant response rates, costs, and data representativeness in a general population health survey in British Columbia.

Methods: The British Columbia Health Survey was conducted by the Arthritis Research Centre of Canada and was designed to target all non-institutionalized adults in BC. Seven variants of the survey, each containing a different combination of survey design features, were developed. The survey features under examination were the mode of administration (paper vs. online), prepaid cash incentive ($2 vs. none), lottery (instant vs. end-of-study lottery), questionnaire length (10 min vs. 30 min), and sampling frame (Info Canada vs. Canada Post). A total of 8000 households in BC were randomly allocated to one of the seven sample groups (Table 6.1).

Results: The overall response rate was 27.9% (range across groups: 17.1%-43.4%). The survey mode had the largest effect on the odds of response (OR 2.04, 95% CI 1.61-2.59), while the sampling frame had the smallest effect (OR 1.14, 95% CI 0.98-1.34). With the exception of the Info Canada sampling frame, all survey features under examination led to statistically significant differences in response rate. A cost analysis for the seven groups showed a negative association between the number of survey features and the resulting cost per response. The baseline survey (no incentives attached) had the lowest cost per survey sent ($12.76), while the paper survey group (including all possible incentives) had the highest cost per survey sent ($17.87).
Data representativeness results showed significant differences between our survey and the population-weighted Canadian Community Health Survey (CCHS) in terms of socio-demographic variables, but similar distributions for health variables. Findings from this study provide further insight into ways to improve response rates as well as cost-efficiency in self-administered general population health surveys.

Preface

This thesis mainly involves secondary analyses of a pre-existing dataset from the British Columbia Health Survey (BCHS), the subject of a larger study. It is important to note that the work involved in survey administration and data collection does not count towards the thesis work.

Ethics approval was not required because this thesis consists of secondary analyses. All work presented henceforth was conducted at the Arthritis Research Centre of Canada.

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
List of Abbreviations
Acknowledgements
1 Problem Statement
2 Rationale
3 Background and Literature Review
 3.1 Questionnaire Mode of Delivery
 3.2 Questionnaire Length
 3.3 Monetary Incentive
 3.4 Sampling Frame
 3.5 Personalized Address
 3.6 Survey Cost
 3.7 Survey Representativeness
4 Study Objectives
5 Research Hypotheses
6 Methods
 6.1 Questionnaire Development
 6.2 Survey Groups
 6.3 Data Collection
 6.4 Ethical Considerations
 6.5 Methods to Analyze Demographics of Sampling Group Respondents
 6.6 Methods to Analyze Response Rates
  6.6.1 Calculation of response rates
  6.6.2 Pairwise comparisons of experimental groups
  6.6.3 Marascuilo procedure
  6.6.4 Multivariable analysis of the effects of survey design
 6.7 Methods to Analyze Survey Costs
  6.7.1 Survey costs for BCHS sampling groups
  6.7.2 Multiple linear regression
 6.8 Methods to Analyze Data Representativeness of BCHS
  6.8.1 Description of CCHS 2010
  6.8.2 CCHS data adjustments
  6.8.3 Analysis of data representativeness
7 Results
 7.1 Demographic Characteristics of Sampling Groups
 7.2 Response Rate Analyses Results
  7.2.1 Response rates
  7.2.2 Comparison of response rates between survey groups
  7.2.3 Multivariable analysis of the effects of survey design factors on response rate
  7.2.4 Using logistic regression coefficients to estimate the expected probabilities of response within the model due to survey factors
 7.3 Cost Analyses Results
  7.3.1 Cost per survey sent for individual sampling groups
  7.3.2 Cost per response for individual sampling groups
  7.3.3 Effects of survey design factors on cost per survey sent
  7.3.4 Effects of survey design factors on cost per response
 7.4 Data Representativeness Analyses Results
  7.4.1 Socio-demographic variables
  7.4.2 Health variables
 7.5 The Effect of Survey Features on Respondent Characteristics
  7.5.1 The effect of sampling frame on respondent characteristics
  7.5.2 The effect of survey form on respondent characteristics
8 Discussion
 8.1 Overall Response Rates
 8.2 The Effect of Survey Factors on Response Rate
  8.2.1 The effect of survey mode on response rate
  8.2.2 The effects of monetary incentives on survey response
  8.2.3 The effect of length on survey response
  8.2.4 Personalization and Info Canada sampling frame
 8.3 Survey Costs
  8.3.1 Cost/survey sent
  8.3.2 Cost/response
 8.4 BCHS Data Representativeness
  8.4.1 Gender
  8.4.2 Age
  8.4.3 Marital status
  8.4.4 Total annual household income
  8.4.5 General health
  8.4.6 Chronic diseases
  8.4.7 Effect of sampling frame and survey mode on respondent characteristics
 8.5 Generalizability of Study Results
9 Limitations
10 Strengths
11 Implications
12 Future Studies
13 Conclusion
References
Appendices
 Appendix A Demographic Analysis
 Appendix B Multivariable Logistic Regression with Interaction
 Appendix C Data Representativeness Subgroup Analysis

List of Tables

Table 6.1 BCHS mail-out groups
Table 6.2 Coding for logistic regression
Table 7.1 Demographics of sampling groups
Table 7.2 Initial and adjusted response rates and frequencies
Table 7.3 Pairwise comparisons of response rates for the experimental groups
Table 7.4 Pairwise comparisons of response rates using the Marascuilo procedure
Table 7.5 Estimated odds ratios (OR) and 95% confidence intervals (CI)
Table 7.6 Expected probabilities of response and 95% confidence intervals for individual survey factors while keeping other factors at the reference level
Table 7.7 Cost table for all sampling groups
Table 7.8 Cost per survey sent
Table 7.9 Cost per response
Table 7.10 Multiple linear regression coefficients for cost per survey sent
Table 7.11 Multiple linear regression coefficients for cost per response
Table 7.12 Percentage distribution of socio-demographic and general health variables between CCHS and BCHS sampling frames
Table 7.13 Analysis of differences in the percentage of persons in selected categories of socio-demographic and general health variables between the CCHS and two BCHS sampling frames
Table 7.14 Percentage distribution of socio-demographic and general health variables between CCHS and BCHS sampling modes
Table 7.15 Analysis of differences in the percentage of persons in selected categories of socio-demographic and general health variables between the CCHS and two BCHS survey administration methods
Table A1 Difference in mean levels of age between survey groups, 95% CI and p values
Table A2 Analysis of pairwise differences in the distribution of gender among the 7 survey groups (p-values from a Chi-square test for independence)
Table A3 Analysis of pairwise differences in the distribution of education among the 7 survey groups (p-values from a Chi-square test for independence)
Table B1 Logistic regression analysis of the effect of 5 survey design factors on survey response, with an interaction term between prepaid cash and instant lottery (coefficients and 95% CI)
Table B2 Logistic regression analysis of the effect of 5 survey design factors on survey response, with an interaction term between prepaid cash and instant lottery (odds ratios and 95% CI)
Table C1 Analysis of differences in the percentage of persons ≤ 29 years of age between the CCHS and 7 BCHS sampling groups
Table C2 Analysis of differences in the percentage of married individuals between the CCHS and 7 BCHS sampling groups
Table C3 Analysis of differences in the percentage of single/never married individuals between the CCHS and 7 BCHS sampling groups
Table C4 Analysis of differences in the percentage of persons reporting excellent health between the CCHS and 7 BCHS sampling groups
Table C5 Analysis of differences in the percentage of persons reporting total annual income ≥ $80,000 between the CCHS and 7 BCHS sampling groups

List of Figures

Figure 6.1 British Columbia Health Survey (BCHS) study design
Figure 7.1 Response rates of BCHS sampling groups
Figure 7.2 Logistic regression estimated odds for individual survey factors
Figure 7.3 Expected probability of response for survey factors
Figure 7.4 Cost/survey sent for individual sampling groups
Figure 7.5 Cost/response for individual survey groups
Figure 7.6 Multiple linear regression coefficients for the effects of survey design factors on cost per survey sent
Figure 7.7 Multiple linear regression coefficients for the effects of survey design factors on cost per response
Figure 7.8 Percentage distribution of gender in CCHS and BCHS
Figure 7.9 Percentage distribution of age in CCHS and BCHS
Figure 7.10 Percentage distribution of marital status in CCHS and BCHS
Figure 7.11 Percentage distribution of total annual household income in CCHS and BCHS
Figure 7.12 Percentage distribution of general health in CCHS and BCHS
Figure 7.13 Prevalence of arthritis in CCHS and BCHS sampling groups
Figure 7.14 Prevalence of asthma in CCHS and BCHS sampling groups
Figure 7.15 Prevalence of diabetes in CCHS and BCHS sampling groups
Figure 7.16 Prevalence of heart disease in CCHS and BCHS sampling groups
Figure 7.17 Prevalence of hypertension in CCHS and BCHS sampling groups
Figure A1 Differences in mean age between the 7 survey groups (95% confidence intervals based on Tukey's Honest Significant Difference Test)

List of Abbreviations

ARC – Arthritis Research Centre
ANOVA – Analysis of Variance
BCBREB – British Columbia Behavioral Research Ethics Board
BCHS – British Columbia Health Survey
CCHS – Canadian Community Health Survey
CI – Confidence Interval
Div – Divorced
GP – General Practitioner
ID – Identification
NTFS – New Technology File System
OA – Osteoarthritis
QNHS – Quarterly National Household Survey
PIN – Personal Identification Number
RCT – Randomized Controlled Trial
RDD – Random Digit Dialing
RSS – Online Survey System
Sep – Separated
SNHS – Spanish National Health Survey
SSL – Secure Sockets Layer
SPPH – School of Population and Public Health
SQL – Structured Query Language

Acknowledgements

My thesis would not have been possible without a tremendous amount of guidance from my thesis committee, great academic support and help from my fellow classmates and colleagues, and countless words of encouragement and well wishes from my friends and family. I feel grateful to be surrounded by such a wonderful community, which has made my graduate career much more enjoyable.

I would like to start off by thanking Dr. Jacek Kopec, my thesis advisor. Dr. Kopec, you have positively influenced me with your kindness and enthusiasm towards my work. I am thankful that you have always responded promptly to my emails, answering each question in great detail. Most importantly, thank you for seeing the potential in me and introducing me to the field of public health.
I am greatly inspired by your kindness and indebted to you for your countless nights spent revising my drafts. I honestly could not have asked for a better advisor.

Dr. Jolanda Cibere, thank you for being a caring and understanding mentor, and for giving me an opportunity to work in the public health field. The IMPAKT-HiP Natural History study opened my eyes to the research side of public health. I greatly appreciate your comments and feedback on my drafts, and your encouragement to publish my results, which I most definitely will. Lastly, thank you for always checking that I am not overwhelmed with work while completing my thesis.

Dr. Linda Li, thank you for your insightful comments and input during our committee meetings. Your methodological expertise was essential to the completion of this thesis and is greatly appreciated. Despite your busy schedule, thank you for taking the time to edit my work and leave feedback (even when traveling on a plane).

Dr. Charlie Goldsmith, I can easily say that without your guidance on the statistical aspects of my thesis, this work would not have been possible. I could not have found an advisor more knowledgeable in R. I have learned so much about statistical concepts and analyses, and I am grateful for your presence on my advisory committee.

I would also like to thank my fellow colleagues at ARC and my fellow classmates at SPPH. Thank you to my colleagues for regularly asking about my progress and keeping me on track with the timeline. Thank you to the fellow ARC trainees for sharing your expertise on various aspects of my thesis. You all made my time at ARC a wonderful and memorable experience. To my friends and classmates at SPPH: we spent countless hours digging into epidemiology and biostatistics books, trying to understand the meanings behind p-values and 95% confidence intervals while finding out everything there is to know about RCTs. Thank you all for the times we shared together and for the great input during the mock defense.

My friends and family at CECBC, thank you all for the continuous support and prayers that have been the backbone of my determination.

To my loving parents, Kathy and Jack, I cannot express how grateful I am to have you as my family. Thank you for raising me to be the person I am today; for that I am forever grateful. You mean the world to me, and I could not have asked for more loving and supportive parents.

Finally, to my dearest Ariah, thank you for embracing my flaws, loving me for who I am, and sticking by my side through peaks and valleys.
You are my source of motivation for living every day to the fullest and always looking forward to the future.

1 Problem Statement

In population health surveys, a poor response rate may introduce bias and, if the sample size is not properly adjusted, reduce study precision. Differences between respondents and non-respondents may limit the generalizability of the findings as well as the overall usability of the data. Reports show that survey response rates have been declining consistently over the last few decades, making survey-based research increasingly difficult1,2. Since a high response rate is necessary for generalizable and unbiased inference, surveys need to be redesigned so that they appear more attractive to the general public, thus encouraging higher response rates. In this thesis, I propose to examine the influence of several survey design features on response rate, cost efficiency, and survey representativeness.
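To make the link between non-response and bias concrete, a standard decomposition from the survey methodology literature (a general result, not a formula taken from this thesis) writes the bias of an unadjusted respondent mean as

\[
\operatorname{Bias}(\bar{y}_r) = \bar{Y}_r - \bar{Y} = \frac{N_{nr}}{N}\left(\bar{Y}_r - \bar{Y}_{nr}\right),
\]

where \(\bar{Y}_r\) and \(\bar{Y}_{nr}\) are the means among respondents and non-respondents and \(N_{nr}/N\) is the non-response proportion. The bias grows with both the non-response rate and the respondent/non-respondent difference, which is why raising response rates and benchmarking respondent characteristics against an external survey (as is done later against the CCHS) are complementary safeguards.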
2 Rationale

Population-based surveys are an essential surveillance tool, often used to collect information on community health and public living standards, among many other uses. However, a steady decline in response rates across all forms of population-based data collection has been noted consistently in recent decades1-8. This phenomenon may be due to changing public opinion on survey participation, growing concerns about privacy, and an increasing volume of unsolicited mail, phone calls, and emails9. Bias may arise from poor response rates because of demographic differences, such as age, sex, and area of residence, between respondents and non-respondents10,11. Non-response bias is a topic of increasing concern because poor response may limit the generalizability of findings and bias the results, leading to erroneous conclusions. Baines et al. (2007) state that researchers must take account of differences between responders and non-responders in order to draw inferences from the obtained data11. Therefore, there is a need to explore design methods that maximize response rates.

Given the rapid advancement of computer technology, the use of internet-based population health surveys is becoming more prominent among health research professionals12. Conventional postal surveys pose a number of limitations. The validity of findings is often affected by participant non-response13. Furthermore, missing data due to questionnaire design or a participant's unwillingness to disclose information pose a threat to the generalizability of the results14.

To date, there is a large number of studies concerning self-reported mail surveys, exploring the effects of various survey factors on response rates and how well these surveys can target the intended population. In addition, a number of survey guidelines have been established that outline mail survey implementation methods designed to achieve optimal response rates15-18. However, no similar set of guidelines outlines the effects of survey incentives and other factors on web-based survey response12,18. Furthermore, although many past studies examined the effects of individual survey design features on response rates, there is scarce evidence regarding the effects of combinations of multiple factors on survey response19-21. Studies have mainly focused on the effects of survey design in postal surveys. As technology progresses and the use of Internet-based surveys becomes more prominent, there is a need to evaluate the effectiveness of known postal survey design features in web-based surveys.

The British Columbia Health Survey (BCHS) was conducted by the Arthritis Research Centre of Canada between September 2012 and February 2013. The main objective of the survey was to determine the prevalence of musculoskeletal pain, physician-diagnosed osteoarthritis, risk factors for these conditions, and the use of health services. In addition, an experiment was implemented within the BCHS survey design, in which seven different sampling groups were created, each containing a different combination of survey design factors. The five factors under examination are:

1. Survey mode (paper vs. online)
2. Provision of cash incentive (prepaid cash incentive vs. no cash incentive)
3. Method of lottery incentive (instant lottery vs. post-study lottery)
4. Questionnaire length (10 minutes vs. 30 minutes)
5. Sampling frame (Info Canada vs. Canada Post)

Using the data collected from the BCHS, my thesis focuses on evaluating the impact of these five factors on response rate, cost effectiveness, and data representativeness in a sample drawn from the BC population (n = 8000).
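As a rough illustration of how such a multivariable analysis can be set up (the thesis analyses appear to have been done in R, per the acknowledgements; the variable names, simulated data, and effect sizes below are hypothetical, not the study's), each design factor enters a logistic regression on survey response as a 0/1 indicator, and exponentiating the fitted coefficients yields adjusted odds ratios of the kind reported in the abstract:

    # Hypothetical sketch, not the thesis code: odds ratios for five binary
    # survey design factors from a multivariable logistic regression.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2014)
    n = 8000  # number of households mailed, as in the BCHS

    # One 0/1 indicator per design factor (1 = feature present); names made up
    df = pd.DataFrame({
        "paper_mode":      rng.integers(0, 2, n),  # paper vs. online
        "prepaid_cash":    rng.integers(0, 2, n),  # $2 prepaid vs. none
        "instant_lottery": rng.integers(0, 2, n),  # instant vs. post-study lottery
        "short_form":      rng.integers(0, 2, n),  # 10 min vs. 30 min
        "canada_post":     rng.integers(0, 2, n),  # Canada Post vs. Info Canada
    })

    # Simulate a response indicator whose log-odds rise with each feature
    # (arbitrary coefficients, chosen only so the example runs end to end)
    log_odds = (-1.5 + 0.7 * df.paper_mode + 0.4 * df.prepaid_cash
                + 0.3 * df.instant_lottery + 0.3 * df.short_form
                + 0.1 * df.canada_post)
    df["responded"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-log_odds))).astype(int)

    fit = smf.logit("responded ~ paper_mode + prepaid_cash + instant_lottery"
                    " + short_form + canada_post", data=df).fit()
    print(np.exp(fit.params))      # adjusted odds ratios, one per factor
    print(np.exp(fit.conf_int()))  # 95% confidence intervals on the OR scale

Holding the other factors at their reference level, a fitted coefficient can also be converted to an expected response probability via p = 1/(1 + exp(-(b0 + bj))), which is the kind of calculation presented in section 7.2.4.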
3 Background and Literature Review

Compared to the more conventional telephone and mail survey modes, online surveys present many advantages. Reduced cost, convenience, geographic access, and improved timeliness offer much more flexibility for both researchers and participants20,22,23. However, there are still a number of methodological issues in conducting online surveys, including subject recruitment, retention, the degree of accuracy when answering, and subjects' access to the Internet. The lack of an online sampling frame may also affect data representativeness. Response rates from online surveys are often lower than those of postal surveys20,24,25. Plausible explanations for the lower response rates observed in online surveys include a lack of familiarity with online tools in certain demographic groups20,26, email management software that stores potential spam in a separate folder, and older emails sinking to the bottom of the inbox as newer emails arrive24. Socio-demographic and geographic factors also play an important role in participants' access to the Internet. The 2012 Canadian Internet Use Survey showed that rural residents (75%) have less access than urban residents (80-85%). In addition, access among the elderly population is lower than among the young. Income and education were also found to be associated with Internet access at home27,28. Past studies that assessed familiarity and comfort with web survey participation suggested that these issues would decrease over time29,30. However, such problems still exist and have not been sufficiently elucidated31-33.

Because of the flaws of both the paper and online modes of survey delivery, a mixed-mode delivery method has been found to increase overall response compared with a single-mode method34. In a study conducted by Greenlaw and Brown-Welty, a significant difference in response rate between mixed-mode and single-mode surveys suggested that participants are more likely to respond when given a choice among the available survey mediums (paper: 42.03%, web: 52.46%, mixed-mode: 60.27%)34. Interestingly, a 2011 study concluded that response rate improves when different survey modes are offered sequentially (i.e., web followed by paper) rather than concurrently35. Target populations that are unreachable by a single-mode method may be reached by making additional modes available, thus increasing the response rate. Furthermore, respondents to the second mode may have qualitative traits similar to those of non-respondents11.

A number of studies have examined the effects of various survey features on response rates. In a meta-analysis of survey designs intended to increase response in mail surveys, Yammarino et al. state that the postal survey features that increase response rate include repeated contact, inclusion of a paid return envelope, shorter survey length, monetary incentives, and high topic salience. Of those, survey length, monetary incentive, and the number of repeated contacts were found to be associated with maximum response rate36.
In a Cochrane systematic review by Edwards et al., the factors associated with increased response rates to postal surveys were short questionnaire length, personalized cover letters, the use of colored ink, monetary incentives, prepaid incentives, first-class post with tracking, inclusion of stamped return postage, prior contact with participants, follow-up contacts, and provision of a second questionnaire18. Factors found to increase response rates in web-based surveys include cash lotteries, shorter questionnaires, inclusion of visual elements (such as diagrams and progress indicators), ease of login, and Internet speed37,38.

3.1 Questionnaire Mode of Delivery

In this era of technological boom, web surveys appear to be a promising medium for survey administration due to lower costs and improved timeliness17. In the late 20th century, marketing researchers claimed that the web might replace telephone surveys much as telephone surveys had earlier replaced in-person interviews39-42. Several challenges exist, however, including low computer literacy, a lack of familiarity with online and email management tools, poor web connections, outdated operating systems, and non-coverage of certain geographic regions and demographic groups17,20,24. Several studies compared response between mail and web-based surveys, with findings consistently showing a higher response rate for mail surveys43-48. For example, Leece et al. conducted a randomized study of response rates to a mailed vs. online survey in a group of 442 surgeons. The response rate in the online survey arm was statistically significantly lower than in the mail survey arm (58%) (absolute difference: 13%, 95% CI 4%-22%, P < 0.01)43. One explanation is that increased awareness of, and fear toward, computer viruses and spam leads participants to disregard unsolicited emails.
Thus,	  it	  is	  crucial	  to	  consider	  the	  socio-­‐demographic	  characteristics	  of	  the	  target	  population	  when	  administering	  a	  web-­‐based	  survey.	  3.2	  Questionnaire	  Length	  A	  number	  of	  studies	  reported	  that	  web	  survey	  participants	  were	  more	  likely	  to	  respond	  to	  shorter	  questionnaires50-­‐52.	  Edwards	  et	  al.	  used	  a	  factorial	  design	  to	  examine	  the	  effect	  of	  topic	  salience	  and	  survey	  length	  on	  response	  rate18.	  The	  study	  found	  a	  statistically	  higher	  response	  rate	  associated	  with	  shorter	  survey	  length	  (30.8%	  vs.	  18.6%).	  In	  another	  study,	  conducted	  by	  Kalantar	  et	  al.,	  a	  short	  questionnaire	  was	  more	  favorable	  when	  compared	  to	  a	  long	  questionnaire	  (75.6%	  vs.	  67.7%)53.	  In	  a	  third	  trial	  conducted	  by	  Sahlqvist	  et	  al	  (2011),	  shortening	  a	  survey	  resulted	  in	  a	  statistically	  significant	  increase	  in	  response	  rates	  (24	  vs.	  15	  pages)9.	  Although	  these	  results	  consistently	  suggested	  that	  survey	  length	  has	  a	  large	  influence	  on	  response	  rates,	  conclusions	  from	  these	  findings	  remained	  inconsistent.	  Earlier	  findings	  suggested	  that	  there	  was	  no	  statistically	  significant	  difference	  in	  response	  rate	  when	  comparing	  questionnaires	  8	  and14	  pages	  long54;	  however,	  later	  studies	  	   9	  indicated	  that	  the	  response	  rate	  begins	  to	  decrease	  when	  the	  questionnaire	  is	  over	  12	  pages17.	  In	  addition,	  although	  the	  use	  of	  a	  short	  questionnaire	  may	  produce	  higher	  response	  rates,	  less	  information	  may	  be	  obtained	  as	  a	  result	  of	  the	  reduction	  in	  length.	  Therefore,	  the	  decision	  to	  introduce	  a	  short	  or	  long	  survey	  is	  often	  situation	  specific.	  The	  researcher	  needs	  to	  consider	  the	  trade-­‐off	  between	  the	  amount	  of	  information	  needed	  and	  the	  response	  rate	  required	  to	  reduce	  non-­‐response	  error.	  3.3	  Monetary	  Incentive	  Monetary	  incentives	  are	  defined	  as	  rewards	  offered	  as	  compensation	  for	  study	  participation	  that	  carry	  a	  monetary	  value.	  Common	  monetary	  incentives	  include,	  but	  are	  not	  restricted	  to,	  cash	  rewards,	  gift	  certificate,	  and	  cash	  lotteries.	  Incentives	  use	  in	  papers	  surveys	  are	  widely	  recognized	  as	  an	  effective	  means	  to	  increase	  response	  rate.	  Material	  incentives	  in	  the	  forms	  of	  electronic	  gift	  certificates	  have	  a	  positive	  impact	  on	  response	  rates55,56.	  Paul	  and	  colleagues	  conducted	  a	  study	  in	  which	  the	  intervention	  group	  received	  an	  AU$20	  gift	  voucher	  at	  the	  end	  of	  the	  survey57.	  The	  difference	  in	  response	  rate	  between	  the	  intervention	  and	  control	  group	  was	  statistically	  significant	  (65.9%	  vs.	  53.5%).	  	  However,	  these	  material	  incentives	  appeared	  to	  have	  a	  weaker	  effect	  on	  response	  rates	  compared	  to	  monetary	  incentives.	  	  
In a randomized controlled trial, Birnholtz et al. (2004) found that the response rate among participants who received a $5 cash incentive by mail (57%) was significantly higher than among those who received a $5 Amazon gift card by mail (40%) or a $5 Amazon gift certificate by email (32%)58.

Like shortened survey length, the inclusion of a monetary incentive has been adopted in various studies to encourage a higher response rate. Monetary incentives can take forms such as post-paid cash incentives, prepaid cash incentives, and lottery incentives. Studies using postpaid monetary rewards tend to show an increase in response rate. A study conducted in Switzerland found a statistically significant difference in response rate when participants were offered 10 francs upon return of the completed questionnaire (83.9% vs. 78.1%)59. A prepaid cash incentive included with the initial mailing of surveys has been found to increase response rates in a number of studies1,10,12,15,60,61. Evidence suggests that a prepaid incentive may result in a higher response rate than a conditional incentive given only after the questionnaire has been completed. A 1993 meta-analysis of 38 studies showed that mail surveys that included a prepaid reward produced an average response increase of 19.1%, while surveys that contained a post-paid reward showed an average increase of 7.9%62. In 2009, Dillman and colleagues reiterated the importance of using a prepaid incentive rather than a post-paid incentive conditional upon survey response12. One plausible explanation is that a prepaid cash incentive "promotes social exchange and a sense of reciprocal obligation", whereby participants may feel a sense of responsibility to complete the survey10. The inclusion of a $2 cash incentive not only increases the response rate but also encourages earlier response63. Similarly, King and Vaughan (2004) showed that the inclusion of a $1 incentive resulted in a higher response rate than the non-incentive group in their trial (86% vs. 63%). Interestingly, an analysis of 23 random digit dialing (RDD) studies found that payments of $1 to $5 increased response rates by 2% to 12% over no incentives; however, this positive relationship declined as larger incentives were offered, showing a plateau effect64. One explanation for the plateau effect is that surveys including larger prepaid monetary incentives may be viewed as a commercial exchange (rather than a social exchange), such that the reward provided is interpreted as compensation for the time spent completing the survey38.
In addition to this theory, altruism may also be a motive for study participation3,65,66. In this case, participants feel a sense of accomplishment and satisfaction from taking part in something they believe has a positive impact on the community. It is therefore important to consider respondent characteristics before implementing incentives. Singer (2012) suggested three reasons why participants respond to surveys67:

1) Altruistic reasons: the respondent's willingness to help with surveys and research. In this case, implementing incentives may actually discourage a potential respondent's propensity to respond.
2) Egoistic reasons: respondents are driven to respond in order to receive the associated rewards.
3) Personal reasons: respondents may take an interest in a particular topic or have established positivity and trust in the research organization.

For the last two categories, researchers may consider implementing prepaid cash incentives as a reward. Overall, these results show that offering a cash incentive in advance is crucial for establishing the surveyor-participant relationship, which may lead to higher response rates.

Lottery incentives have been deemed useful for providing incentives online. In contrast to cash incentives, lottery incentives have produced mixed results in previous studies. Kalantar and Talley (1999) found a statistically greater response rate in the intervention group (an instant lottery ticket with a chance to win up to $25,000) than in the control group (75.0% vs. 68.2%)53. A later study by Cobanoglu and Cobanoglu (2003) found identical response rates of 20.5% in the control and intervention groups when a raffle for a personal digital assistant was offered in a web survey of 1,006 American Management Association members68. Interestingly, Deutskens et al. (2004), sampling Dutch clients on attitudes toward and usage of brand-name products, showed that a lottery incentive was effective in short surveys ($38) as opposed to long surveys ($76)50. However, in 2007, Marcus et al. surveyed 2,174 owners of personal websites and found that, when combined with a shorter questionnaire, the increase in response rate associated with a $38 voucher lottery was not statistically significant (32.5% vs. 29.0%)52. More recently, Doerfling et al. (2010) showed that the use of a cash lottery (five $100 prizes and a $500 grand prize) in an online health survey led to a 40% relative increase in response compared to the non-incentive group (14.6% vs. 10.3%)37. In addition to increasing response rates, cash lotteries may also increase participants' tendency to remain in the survey after arriving at the URL21,69,70.
The use of an instant lottery is supported by evidence suggesting that the effect of lottery incentives can be further improved when participants are notified of the result of the lottery or prize draw immediately upon completion of the survey21. Since the instant lottery is a relatively new concept, to date no study has compared the effects of prepaid cash incentives and instant lotteries on response rate.

3.4 Sampling Frame

It is crucial for survey respondents to be representative of the target population. Survey coverage can be defined as how well the survey reaches all individuals in the intended target population. The usefulness and accuracy of the results rely on the coverage of the sampling frame71. A good sampling frame covers the entire target population and thus reduces the level of coverage error (discussed later). In contrast, if a sampling frame is incomplete, the sample may not be representative of the larger population. Sampling frames that target specific groups who differ systematically in demographic characteristics from the intended population may result in biased samples. In a study examining the effect of sampling frames on response rates, the representativeness of the survey was found to be influenced by the quality of the sampling frame19. Therefore, selecting a sampling frame that maximizes coverage of the target population is key to collecting accurate public health information, for example in studies aiming to estimate the prevalence of diseases.

Area frames, address frames, telephone frames, and random digit dialing (RDD) are the four main types of sampling frame used in past studies. Area frames are typically composed of a list of residences located within a geographical area. The most representative sampling is done using door-to-door interviews with everyone in a given area frame; for example, the interviewer may choose to sample everyone over the age of 49 years in a single postcode area72. An address frame is a list of postal addresses within a specified geographic location, which usually covers a larger area than an area frame. Another alternative is to use an electronic telephone directory, owing to its simplicity and low cost73. Commercial companies can readily provide a directory gathered from various sources, such as phone books, public records, and government data74,75. Lastly, RDD is often employed by telemarketing companies and is also used as a sampling frame for Statistics Canada surveys, such as the Canadian Community Health Survey (CCHS). Unlike telephone directory frames, lists provided for RDD purposes may also include cellphone numbers.
In the existing literature, there are limited data, and no recent studies, comparing the response and representativeness of identical surveys using different sampling frames. Smith et al. (1997) conducted a study examining data representativeness between participants recruited from a telephone directory vs. an electoral roll72. Using a door-to-door census as the comparison standard, the variables under examination included socio-demographic characteristics, disease states, and risk factors. The results showed that the telephone directory was more likely to exclude participants with higher occupational prestige, while the electoral roll was more likely to exclude unmarried individuals.

3.5 Personalized Address

Inclusion of a personalized survey invitation has been noted as a worthwhile strategy when a low response rate is expected9,76-78. Field et al. (2002) classify a number of survey methods as personalization, including direct telephone contact, inclusion of handwritten notes, and personalization of the cover letter and envelope79. A personalized salutation is recommended as part of the Dillman total design approach16. The underlying implication is that personalization invokes the social exchange that can facilitate survey response. Kaner et al. (1998) conducted a postal survey of general practitioners, in which she concluded that GPs are more likely to respond to a postal survey when approached with a follow-up phone call80. In a randomized trial, a statistically significant increase in response rate was reported among subjects who received a personally addressed cover letter along with the hand-written note "Dear Dr XX, We'd greatly appreciate your participation." compared to those who did not receive a hand-written note (60.9% vs. 50.9%)81. A larger-scale study (n = 3000) assessed whether a personally addressed cover letter signed by the principal investigator produced a higher response rate among physicians82. The response rate in the personalized-letter group was 17.8% higher than in the control group (45.3% vs. 27.5%). These findings suggest that personalization of the survey invitation is an important factor in a subject's decision to participate. However, when a questionnaire contains sensitive topics, such as questions regarding experience with discrimination, the rate of non-response to those questions has been found to be higher among groups that received a personalized survey invitation83. Therefore, the survey topic and the characteristics of the target population should be considered thoroughly before applying a personalized approach.
3.6 Survey Cost

When conducting public health research under Dillman's tailored design method12, recommended survey features such as the inclusion of a monetary incentive, the use of first class mail, and the inclusion of return postage greatly increase the cost of the study. Greenlaw and Brown-Welty tested and affirmed the hypothesis that although mixed-mode surveys encourage a higher response rate, they are more costly than single-mode surveys34. Cost per survey sent is a useful unit of reporting for researchers who would like to project expenditure for a given sample size. This measure is advantageous when the response rate cannot be predicted or when a certain response level is not required. Another measure of cost effectiveness is cost per response. According to Greenlaw and Brown-Welty (2009), this method of reporting "provides a compelling blend of both response rate data and the calculation of the cost required to obtain each response."34 The latter measure is of more value when the response rate can be estimated or when an expectation is in place for the response level. Often, the investigator must consider the expense of the study and operate within the confines of its budget. Obtaining the maximum response rate may then not be the highest priority; rather, finding the most cost-effective way of achieving a response level becomes the greater objective. Expenses such as the administrative cost of materials (postage, paper, printing fees, monetary incentives, and lottery prizes) and labor (development of the questionnaire, preparing envelopes, and data entry) are taken into account when calculating the total expense of the study.

Moreover, cost analyses are necessary to assess the feasibility of introducing a financial incentive within a large study cohort25. Sahlqvist et al. (2011) conducted a randomized controlled trial using a 2x2 factorial design on 1000 participants randomly selected from the UK edited electoral register. The two survey factors tested were questionnaire length (15 vs. 24 pages) and personalization using a personally addressed survey pack. Cost effectiveness was defined as cost per response and was calculated for each trial arm. Results showed a cost per response of £40.7 ($74.4) for the long questionnaire and £22.4 ($50.0) for the short questionnaire9. With regard to personalization, the cost per response was £23.1 ($42.1) for the personalized survey arm compared to £11.3 ($20.7) for the non-personalized arm. No interaction between survey length and personalization was found9. Overall, cost per response for a particular survey design depends on a number of factors, including the amount of time required for questionnaire design, questionnaire length, monetary incentives, the geographic distribution of sampling groups (for mail surveys), and topic salience (discussed later). The two cost measures are illustrated below.
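As a minimal sketch of the two measures, with hypothetical figures (the dollar amounts and counts below are illustrative only, not taken from any study):

```python
# Two cost-effectiveness measures for a single survey arm.
# All numbers here are hypothetical, for illustration only.
total_cost = 15000.00    # total expense for the arm ($)
surveys_sent = 1000      # sample size
responses = 280          # completed questionnaires received

cost_per_survey_sent = total_cost / surveys_sent   # $15.00: useful for budgeting a mail-out
cost_per_response = total_cost / responses         # ~$53.57: useful when a response target exists

print(f"${cost_per_survey_sent:.2f} per survey sent, ${cost_per_response:.2f} per response")
```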
3.7 Survey Representativeness

Aside from documentation of response rates and cost analyses, it is crucial that the overall response be representative of, and generalizable to, the intended population9. Different types of survey error may contribute to the non-representativeness of a survey: coverage error, sampling error, and non-response error. Coverage error may arise from a poor sampling frame that cannot adequately cover the target population. It is known that a mixed-mode survey can improve coverage when certain demographic groups cannot be reached by a single mode (e.g. a web survey)12. Sampling error may occur because a portion of the population is sampled rather than the entire population. Lastly, non-response error stems from a lack of response from some individuals in the sampled population. As mentioned, non-response error may lead to erroneous conclusions due to possible differences between respondents and non-respondents.

Past studies have shown that the representativeness of a survey improves when using a mixed-mode rather than a single-mode approach11,35. This is most likely due to a higher response rate, resulting in a reduction of non-response bias12. One method of survey validation is to compare the distribution of key respondent characteristics in the survey to a local census. Barrett and Kelly (2008) used the 2006 Irish census to validate and examine the accuracy of the immigrant profile collected in the Quarterly National Household Survey (QNHS)83. Similarly, a study conducted in Spain aimed to analyze representation of the immigrant population in the Spanish National Health Survey (SNHS) through comparison to the population registry84. Lastly, to examine the data representativeness of adding a postal contact to a web survey, Partin et al. (2013) compared the percentage distributions of respondent characteristics before and after the postal follow-up to those of the known population85.

National surveys, such as the Canadian Community Health Survey (CCHS), employ complex, multistage probability samples. Therefore, survey weights are provided to make the survey sample representative of the target population. A person weight can be defined as the number of persons in the target population represented by the respective respondent. For example, when sampling 5% of the total population, each person within the sample represents 20 persons in the actual population, and a sampling weight of 20 would be given to each sampled individual. Person weights can be calculated as N/n, where N is the number of individuals in the target population and n is the number of individuals in the sample86. Weights are incorporated into the analyses to ensure that the weight-adjusted estimates are comparable to those of the entire target population, avoiding the biased statistics that un-weighted samples can produce87. A worked example follows.
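A minimal worked example of the N/n person weight (the population and sample sizes are hypothetical):

```python
# Person weight = N / n: each respondent stands in for N/n members of the
# target population. All numbers are hypothetical.
N = 100_000           # target-population size
n = 5_000             # number of respondents (a 5% sample)
weight = N / n        # 20.0: each respondent represents 20 people

cases_in_sample = 1_200                       # respondents reporting a condition
estimated_cases = cases_in_sample * weight    # 24,000 cases estimated in the population
estimated_prevalence = estimated_cases / N    # 0.24, same as 1200/5000 under equal weights
```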
4 Study Objectives

The overall objective of this thesis is to examine the effects of several aspects of survey design on response rates, costs, and data representativeness in a mixed-mode general population survey. The specific survey design features of interest are survey mode, prepaid cash incentive, instant lottery, questionnaire length, and sampling frame.

5 Research Hypotheses

1. The use of a paper survey will generate a higher response rate compared to the online survey;
2. A short questionnaire (39 items) will result in a higher response rate compared to the longer questionnaire (219 items);
3. The inclusion of an instant $100 lottery will generate a higher response rate compared to an end-of-study lottery;
4. The inclusion of a $2 prepaid cash incentive will generate a higher response rate compared to no cash incentive;
5. The survey group selected from the Info Canada sampling frame (telephone-based, including a personalized salutation) will generate a higher response rate compared to those selected from the Canada Post sampling frame (address-based).

6 Methods

6.1 Questionnaire Development

The British Columbia Health Survey (BCHS), conducted by the Arthritis Research Centre of Canada (ARC), was designed to target all community-dwelling adults in BC. The objective of the BCHS was to determine the prevalence of musculoskeletal pain, physician-diagnosed osteoarthritis, risk factors for these conditions, and the use of health services in British Columbia. A methodological objective was to assess the effect of various survey factors on response rates. Therefore, seven surveys, each containing a different combination of survey factors, were developed (Table 6.1). The design features under examination included different modes of administration, different monetary incentives, and different sampling frames. The five survey factors were:

A. Survey mode

Two modes of delivery were used.

a) Paper: The paper survey was delivered along with the invitation letter to the selected BC households. The paper questionnaire was sent along with a pre-stamped envelope bearing the return address.
Participants were instructed to return the completed survey using this return envelope.

b) Online: Participants were invited to respond to the online survey through a mailed invitation, which included the survey URL as well as the password needed to start the online survey.

Both invitation letters asked the household adult with the most recent birthday to complete the survey.

B. Prepaid cash incentive

The cash incentive was offered in the form of a prepaid $2 coin. The coin was glued to the invitation letter such that it would be visible as the recipient unfolded the letter.

C. Instant lottery incentive

All survey groups were entered into a cash draw that included 10 prizes of $100 and a grand prize of $1000. Two types of lottery incentive were offered:

a) Instant lottery groups received the results of the $100 lottery immediately upon completion of the survey;
b) Non-instant lottery groups found out about the results at the end of the study (after 3 months).

Note that the instant lottery incentive could only be offered to the online survey groups.

D. Survey length

Two forms of the survey, of different lengths, were used.

a) The shorter survey contained 39 items, including questions on age, gender, OA screening, general health, and co-morbidity. Respondents were told that the completion time was around ten minutes.
b) The longer survey was composed of a total of 219 items, which asked more detailed questions on osteoarthritis of different sites, healthcare utilization, and quality of life. Due to the skip logic, however, the number of questions presented to respondents varied depending on the responses given to earlier questions. Respondents allocated to the longer survey were informed through the invitation letter that the estimated completion time was thirty minutes.

E. Sampling frame

Two sampling frames were used in the BCHS.

a) Canada Post: Household addresses provided by Canada Post were selected from an address database. The information provided by Canada Post did not include the name of the head of household. As such, the invitation letter was addressed to "Dear British Columbia Resident".
b) Info Canada: Info Canada is a private company that collects demographic information from various sources. The address list provided by Info Canada was mainly obtained through a phone directory and includes the name of the head of household. Using this information, the invitation letter was personalized and addressed "Dear Mr/Ms [Last Name]".
6.2 Survey Groups

The survey factors included in each group are shown in Table 6.1.

Group A: online survey, no prepaid cash incentive, no instant lottery, long questionnaire; selected from the Canada Post sampling frame.
Group B: online survey, no prepaid cash incentive, instant lottery, long questionnaire; selected from the Canada Post sampling frame.
Group C: online survey, prepaid cash incentive, no instant lottery, long questionnaire; selected from the Canada Post sampling frame.
Group D: online survey, prepaid cash incentive, instant lottery, long questionnaire; selected from the Canada Post sampling frame.
Group E: online survey, prepaid cash incentive, instant lottery, long questionnaire; selected from the Info Canada sampling frame.
Group F: online survey, prepaid cash incentive, instant lottery, short questionnaire; selected from the Canada Post sampling frame.
Group G: paper survey, prepaid cash incentive, no instant lottery, short questionnaire; selected from the Canada Post sampling frame.

Table 6.1 – BCHS mail-out groups

| Groups                 | A           | B           | C           | D           | E           | F           | G           |
|------------------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|
| Sample Size            | 1000        | 1000        | 1000        | 1000        | 2000        | 1000        | 1000        |
| Survey Mode            | Online      | Online      | Online      | Online      | Online      | Online      | Paper       |
| Prepaid Cash Incentive | No          | No          | Yes         | Yes         | Yes         | Yes         | Yes         |
| Instant Lottery        | No          | Yes         | No          | Yes         | Yes         | Yes         | No          |
| Survey Length          | Long        | Long        | Long        | Long        | Long        | Short       | Short       |
| Sampling Frame         | Canada Post | Canada Post | Canada Post | Canada Post | Info Canada | Canada Post | Canada Post |

6.3 Data Collection

Invitation letters were mailed to 8000 randomly selected households in BC. All households were randomly allocated to one of the seven experimental groups. Canada Post provided 6000 residential addresses, while 2000 were provided by Info Canada (Figure 6.1). Letters to the addresses provided by Info Canada were personally addressed, while letters to addresses provided by Canada Post were not. The household adult (above 18 years of age) with the most recent birthday was asked to complete the survey. The administration of the survey was designed to include four contacts: 1) the initial invitation was mailed out at week zero; 2) the first reminder was sent at week one; 3) the second reminder was mailed out at week three, along with a second copy of the survey for the paper questionnaire group; and 4) the last reminder was sent at week five. Individuals who had already responded were excluded from receiving the reminder mails.
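A minimal sketch of the allocation step, assuming a simple shuffle-and-split of the two address lists (the address identifiers and seed below are placeholders, not actual study data):

```python
import random

random.seed(2012)  # placeholder seed, for reproducibility of the sketch

# 6000 Canada Post addresses are shuffled and split evenly across the six
# Canada Post arms; the 2000 Info Canada addresses all form group E.
canada_post = [f"cp_address_{i:04d}" for i in range(6000)]   # placeholder IDs
info_canada = [f"ic_address_{i:04d}" for i in range(2000)]   # placeholder IDs
random.shuffle(canada_post)

groups = {g: canada_post[i * 1000:(i + 1) * 1000] for i, g in enumerate("ABCDFG")}
groups["E"] = info_canada   # the Info Canada frame maps directly to group E
```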
Each of the six online surveys had its own landing page with a different web address. In addition, a different login keyword was provided for each experimental group to prevent participation by uninvited individuals. Paper surveys were mailed together with the invitation letter, and a prepaid return envelope was included. Participants allocated to a web-based group who were unable, or preferred not, to complete the survey online were offered a paper survey as an alternative.

Figure 6.1 – British Columbia Health Survey (BCHS) Study Design

To complete the online survey, respondents were directed to the Arthritis Research Centre online research survey system (RSS). The BCHS online data collection system was hosted on a dedicated server running Windows Server 2003 located at ARC. The data were collected and downloaded into an Excel file. Access to the server is secured by the New Technology File System (NTFS), Microsoft Active Directory, and Structured Query Language (SQL) built-in security features. The server itself is shielded from the Internet by a SonicWALL firewall, and 128-bit SSL encryption is used to ensure the confidentiality of survey data. Respondents who completed the paper survey had their responses entered into an Excel database by a data entry company.

6.4 Ethical Considerations

The invitation letters informed subjects that by responding to the survey they were consenting to participate in the study. The protocol for survey administration and data collection was submitted to and approved by the University of British Columbia Behavioural Research Ethics Board (BCBREB).

6.5 Methods to Analyze Demographics of Sampling Group Respondents

The demographic variables of age, gender, and education level were first examined across all seven groups. This was used to detect any noticeable discrepancies in respondent characteristics. Differences in mean age across the groups were examined using one-way analysis of variance (ANOVA). Gender and education level differences were assessed using the chi-square test of independence.

6.6 Methods to Analyze Response Rates

6.6.1 Calculation of response rates

The raw response rates were calculated by dividing the number of responses by the total number of surveys sent. Following the intention-to-treat principle88, participants who were allocated to the online surveys but later requested to complete the paper form were analyzed with their originally assigned groups.
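A minimal sketch of this calculation, using the adjusted (intention-to-treat) response frequencies that are reported later in Table 7.2:

```python
# Raw response rate = responses / surveys sent. Under intention-to-treat,
# online-group participants who returned a paper form still count toward
# their originally assigned group (frequencies from Table 7.2).
surveys_sent = {"A": 1000, "B": 1000, "C": 1000, "D": 1000, "E": 2000, "F": 1000, "G": 1000}
responses    = {"A": 171,  "B": 198,  "C": 208,  "D": 282,  "E": 601,  "F": 337,  "G": 434}

for group in surveys_sent:
    rate = responses[group] / surveys_sent[group]
    print(f"Group {group}: {rate:.1%}")   # e.g. Group A: 17.1%
```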
6.6.2 Pairwise comparisons of experimental groups

Six a priori pairwise comparisons were made to examine whether the inclusion of specific survey factor(s) would significantly increase survey participation. These comparisons examined the differences in response rates under the following conditions: 1) the inclusion of an instant lottery incentive (Groups A vs. B); 2) the use of a prepaid cash incentive (Groups A vs. C); 3) the use of both prepaid coin and instant lottery incentives (Groups A vs. D); 4) the choice of sampling frame with all incentives included (Groups D vs. E); 5) the length of the questionnaire with all incentives included (Groups D vs. F); and 6) the survey format with all possible incentives included (Groups F vs. G). Additional exploratory comparisons were made to examine the effect of monetary incentives in various circumstances, such as offering the prepaid cash incentive in place of the instant lottery (Groups C vs. B), the addition of the instant lottery to the prepaid cash incentive survey (Groups D vs. C), and vice versa (Groups D vs. B). The combined effects of the short questionnaire, instant lottery, and prepaid coin incentive were also examined (Groups F vs. A). The chi-square test of independence was calculated using an alpha level of 0.05 for each comparison. We specified the following hypotheses:

H0: Response rates in the comparison groups are not different.
HA: Response rates in the comparison groups are different.

6.6.3 Marascuilo procedure

One problem with multiple comparisons is that as more pairwise comparisons are made, there is a higher probability that a significant difference in response rate between two sampling groups is produced solely by chance. Hence, we used the Marascuilo procedure to adjust for multiple comparisons and to make comparisons between all possible pairs of groups within this study design. Two values were computed separately: the absolute difference and the critical range. The difference in response rate between two groups was deemed significant when the absolute difference was greater than the critical range (α = 0.05 for the family of comparisons). This procedure controls the rate of type I errors (false positives) across the family of comparisons89.
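A sketch of the textbook form of the Marascuilo critical range, which multiplies the pooled standard error of the two proportions by the square root of the chi-square critical value with k - 1 degrees of freedom; this is the standard formulation, and the critical values in Table 7.4 may reflect a somewhat different implementation:

```python
from math import sqrt
from scipy.stats import chi2

def marascuilo(p_i, n_i, p_j, n_j, k, alpha=0.05):
    """Return (absolute difference, critical range) for one pairwise contrast
    among k proportions, using the chi-square critical value with k-1 df."""
    crit = sqrt(chi2.ppf(1 - alpha, k - 1))
    se = sqrt(p_i * (1 - p_i) / n_i + p_j * (1 - p_j) / n_j)
    return abs(p_i - p_j), crit * se

# Example contrast: groups B vs. A (adjusted rates), with k = 7 groups
diff, critical_range = marascuilo(0.198, 1000, 0.171, 1000, k=7)
significant = diff > critical_range
```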
6.6.4 Multivariable analysis of the effects of survey design

The chi-square test allowed us to assess whether there was a significant difference in response rate between pairs of survey groups. However, the survey was not designed to assess the effect of each survey design factor in the entire sample (such a design was not logistically feasible), but rather to support a priori pairwise comparisons. When using the full sample to analyze the effects of each factor, the issue of confounding may arise, since multiple survey factors might be correlated with one another. To adjust for such correlations when estimating the effect of individual factors on response rate, we used a logistic regression model to determine the odds of response for each treatment factor relative to its reference category.

Table 6.2 – Coding for logistic regression

| Survey Groups | Mode | Length | Lottery | Coin | Sampling Frame |
|---------------|------|--------|---------|------|----------------|
| A             | 0    | 0      | 0       | 0    | 0              |
| B             | 0    | 0      | 1       | 0    | 0              |
| C             | 0    | 0      | 0       | 1    | 0              |
| D             | 0    | 0      | 1       | 1    | 0              |
| E             | 0    | 0      | 1       | 1    | 1              |
| F             | 0    | 1      | 1       | 1    | 0              |
| G             | 1    | 1      | 0       | 1    | 0              |

1 = treatment, 0 = reference
Mode (1 = Paper, 0 = Online)
Length (1 = Short, 0 = Long)
Lottery (1 = Instant lottery, 0 = Post-study lottery)
Coin (1 = Prepaid $2 incentive, 0 = No cash incentive)
Sampling Frame (1 = Info Canada, 0 = Canada Post)

An interaction term between the prepaid cash incentive and the instant lottery was subsequently incorporated into the multivariable logistic model (Table B2). However, a likelihood ratio test showed that the full model was not better than the original model, suggesting that the interaction term was not significant; it was therefore removed from the final logistic regression model (Appendix B). The regression estimates were converted to odds ratios (OR) using the following equation: Odds ratio = EXP(coefficient).

A logit was constructed from the coefficients of the regression model:

Logit (Survey response) = Intercept + Mode + Source + Length + Lottery + Coin

The logit was then used to determine the probability of response for a specific combination of factors by substituting 0's and 1's into the logit equation. In this study, response probabilities were examined individually for each treatment factor while keeping the rest at the reference level. The log odds of response were converted to expected odds of response for individual survey factors: Expected odds of response = EXP(log odds). Expected odds of response were then converted to probabilities of response using the following equation:

Probability = EXP(log odds) / (1 + EXP(log odds))
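Because each group's response count and sample size fully determine the likelihood, the individual-level logistic regression can equivalently be fitted as a group-level binomial GLM. A sketch, assuming the statsmodels package is available and using the adjusted response frequencies from Table 7.2; the exponentiated coefficients should closely match the odds ratios reported later in Table 7.5:

```python
import numpy as np
import statsmodels.api as sm

# Design matrix following the Table 6.2 coding
# columns: InfoCan, Lottery, Short, Coin, Paper (1 = treatment level)
X = np.array([
    [0, 0, 0, 0, 0],   # A: baseline
    [0, 1, 0, 0, 0],   # B: instant lottery
    [0, 0, 0, 1, 0],   # C: coin
    [0, 1, 0, 1, 0],   # D: lottery + coin
    [1, 1, 0, 1, 0],   # E: Info Canada frame
    [0, 1, 1, 1, 0],   # F: short questionnaire
    [0, 0, 1, 1, 1],   # G: paper mode
])
X = sm.add_constant(X.astype(float))

responders = np.array([171, 198, 208, 282, 601, 337, 434])
sent       = np.array([1000, 1000, 1000, 1000, 2000, 1000, 1000])

# A binomial GLM with a (successes, failures) outcome is logistic regression
fit = sm.GLM(np.column_stack([responders, sent - responders]), X,
             family=sm.families.Binomial()).fit()
odds_ratios = np.exp(fit.params[1:])   # per-factor odds ratios, OR = exp(coefficient)
```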
6.7 Methods to Analyze Survey Costs

6.7.1 Survey costs for BCHS sampling groups

The BCHS was conducted from October 2012 to February 2013. All expenses for the respective survey groups were summed to determine the total cost. Expenses included costs for mailing, the lottery prize, coin incentives, mailing supplies (paper, office supplies, and photocopying fees), coordinator salary (logged time), obtaining sampling frame addresses, programming, and data entry fees for the mail survey. Costs were determined from invoices and pre-bill worksheets. Programming costs were estimated using the hourly wage of the statistician. Survey administration costs were calculated in two ways. First, the total cost was divided by the sample size to determine the cost per survey sent for each sampling group. Second, the cost per response was calculated by dividing the total cost by the response frequency of each sampling group.

Survey cost adjustments for the Info Canada sampling frame

The Info Canada group (Group E) had a sample size of 2000, whereas the remaining BCHS sampling groups had 1000 participants each. To avoid unfair comparisons between sampling groups of different sizes, the Info Canada group was adjusted to match a sample size of 1000. Specifically, we reduced Group E's costs for mailing and supplies, the coin incentive, and address list acquisition by one-half. The adjusted total cost was then divided by one-half of the Group E sample size to find the adjusted cost per survey sent. Similarly, the adjusted cost per response was determined by dividing the adjusted total cost of Group E by one-half of its response frequency.

6.7.2 Multiple linear regression

The cost per response calculated by the above method was useful in determining the actual survey costs for the specific combinations of survey factors used in the study. However, this method could not assess the effect of individual survey factors on cost while controlling for the other factors. A multiple linear regression allowed the effect of each survey factor on cost per response to be estimated. Coding for the regression model was identical to that of Table 6.2. The resulting coefficient for each survey factor represents the expected dollar amount per response attributable to that factor. This is an unusual regression model in that the observations were the seven BCHS groups, each summarizing all of its respective respondents. Because the regression was modeled at the group level, the 95% confidence interval was not reported for this model, owing to its large and unreliable variance estimate based on a single degree of freedom. A sketch of this group-level fit follows.
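A sketch of the group-level fit, assuming ordinary least squares on the seven cost-per-response observations. The values for groups A-D, F, and G below are derived from the totals and adjusted response frequencies in Table 7.7; the Group E value is an assumed post-adjustment figure, since the full adjusted cost is not reproduced here:

```python
import numpy as np

# Columns: intercept, InfoCan, Lottery, Short, Coin, Paper (Table 6.2 coding)
X = np.array([
    [1, 0, 0, 0, 0, 0],   # A
    [1, 0, 1, 0, 0, 0],   # B
    [1, 0, 0, 0, 1, 0],   # C
    [1, 0, 1, 0, 1, 0],   # D
    [1, 1, 1, 0, 1, 0],   # E
    [1, 0, 1, 1, 1, 0],   # F
    [1, 0, 0, 1, 1, 1],   # G
], dtype=float)

# Cost per response = total cost / adjusted responses (from Table 7.7);
# the Group E entry (50.40) is an assumed adjusted value.
cost_per_response = np.array([75.79, 64.87, 72.28, 54.19, 50.40, 44.66, 41.18])

beta, *_ = np.linalg.lstsq(X, cost_per_response, rcond=None)
# beta[1:]: expected dollar change in cost per response for each survey factor
```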
6.8 Methods to Analyze Data Representativeness of the BCHS

6.8.1 Description of the CCHS 2010

First established in 2001, the Canadian Community Health Survey (CCHS) is a cross-sectional survey that collects health-related information from the Canadian population, including health status, health care utilization, and health determinants. The CCHS originally collected data on a two-year cycle; since 2007, data have been collected annually. The resulting data provide a platform for government agencies, health researchers, and non-profit health organizations to carry out tasks such as health surveillance and population health research86.

The target population of the CCHS is all Canadian residents over the age of 12 years. Exclusion criteria include individuals living on reserves and other Aboriginal settlements, full-time members of the Canadian Armed Forces, and the institutionalized population. It is estimated that these exclusions represent less than 3% of the total Canadian population86. The CCHS utilizes a complex multi-stage allocation strategy, which allows every eligible individual within the population to have an equal probability of being selected. Three sampling frames are used to select samples of households: an area frame, a telephone list frame, and RDD. Additionally, different sampling methods are used within each frame to select specific individuals for data collection86.

A stratified cluster sampling design is used in the area frame. First, each province is classified into major urban centres, cities, and rural regions. All households in major urban centres are stratified by geographic and socio-demographic characteristics, followed by cluster sampling within each stratum. In cities and rural regions, households are stratified by geographic location and socio-economic characteristics simultaneously, and final selection is done using cluster sampling. A list frame of telephone numbers is used to complement the area frame. The Canada phone directory is an external administrative frame of landline telephone numbers that is updated every six months; telephone numbers are selected through stratification and simple random sampling86.

The final person-level weights were taken into account when examining respondent characteristics from the CCHS data. This weight can be interpreted as the number of persons the individual represents within the target population. Each person weight was then divided by the overall mean weight to preserve the total number of respondents within the CCHS sample. Incorporating this weight allows the distribution of CCHS characteristics to closely represent the true population. A minimal sketch of this normalization is shown below.
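A minimal sketch of the mean-one normalization (the weight values below are hypothetical, not actual CCHS weights):

```python
import numpy as np

person_weights = np.array([510.2, 1220.8, 95.4, 830.0])   # hypothetical person weights

normalized = person_weights / person_weights.mean()       # rescale to a mean of 1
assert np.isclose(normalized.sum(), len(person_weights))  # respondent count is preserved
```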
The distributions of socio-demographic and health variables among BCHS respondents were compared to those of the CCHS 2010 to assess comparability and data representativeness.

6.8.2 CCHS data adjustments

The weight-adjusted CCHS data were further modified to allow better comparison to the BCHS data. First, the geographic location of sampling was restricted to the province of British Columbia to mimic that of the BCHS. The age of respondents within the CCHS data was also restricted to over 18 years to satisfy the age criterion of the BCHS. Lastly, certain categories within the socio-demographic variables were collapsed to better assess the resulting distributions. The "Age" variable was collapsed into four categories: ≤ 29, 30-49, 50-64, and ≥ 65 years. "Total annual household income" was collapsed into four categories: ≤ $39,999, $40,000-79,999, ≥ $80,000, and not stated. Lastly, "Perceived general health" was collapsed into five categories: Excellent, Very good, Good, Fair/Poor, and Not stated. A sketch of this recoding is given below.
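A minimal sketch of the category collapsing, assuming pandas is available and using a handful of hypothetical ages and incomes:

```python
import pandas as pd

df = pd.DataFrame({"age": [24, 37, 58, 71, 45],              # hypothetical respondents
                   "income": [25000, 65000, 90000, None, 41000]})

# Age: <=29, 30-49, 50-64, >=65 (integer ages, so bin edges at 29/49/64 suffice)
df["age_group"] = pd.cut(df["age"], bins=[0, 29, 49, 64, 200],
                         labels=["<=29", "30-49", "50-64", ">=65"])

# Income: three bands plus an explicit "Not stated" category for missing values
df["income_group"] = pd.cut(df["income"], bins=[0, 39999, 79999, 10**9],
                            labels=["<=39,999", "40,000-79,999", ">=80,000"])
df["income_group"] = df["income_group"].cat.add_categories("Not stated").fillna("Not stated")
```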
6.8.3 Analysis of data representativeness

The representativeness of the BCHS data was assessed by comparing its percentage distributions of various socio-demographic and health variables with those of the 2010 CCHS data. It is important to note that the chosen comparison variables contained identical question and answer choices in the two surveys. Socio-demographic variables included age, gender, marital status, and total annual household income. Health variables included general health ratings and the prevalence of chronic diseases such as diabetes, asthma, arthritis, heart disease, and hypertension.

The comparability of the BCHS data to the CCHS 2010 was assessed using three methods:

1) Distributions of all socio-demographic and health variables within the sampling groups were examined and compared to the CCHS distributions. This was used to detect any apparent under- or over-sampling of population groups for specific variables of interest.
2) The chi-square test of independence was used to examine the comparability of the BCHS distributions with those of the weight-adjusted CCHS data. Disparities between specific categories within variables were determined using additional sub-group analyses (Appendix C).
3) Aside from comparing respondent characteristics between individual BCHS sampling groups and the 2010 CCHS, distributions of the aforementioned variables were also compared between different BCHS sampling groups to assess the effects of survey methods on respondent characteristics. The two factors under examination were the sampling frame (Canada Post vs. Info Canada) and the survey mode (online vs. paper). This was done by collapsing the sampling groups containing these survey factors. When examining respondent characteristics between survey modes, information from the Info Canada sampling group (E) was excluded from all calculations, to keep the comparisons consistent and to avoid the personalization effect introduced by the Info Canada sampling frame. The resulting distributions of socio-demographic variables were compared to the weight-adjusted CCHS 2010 data. Subgroup analyses were done to examine statistically significant differences between categories (Appendix C).

Missing values due to participant non-response were excluded from the analyses of response rates and data representativeness.

7 Results

7.1 Demographic Characteristics of Sampling Groups

A total of 8000 BCHS surveys were sent and 2231 responses were received, yielding an overall response rate of 27.9%. The demographic characteristics of respondents in all sampling groups are shown in Table 7.1. For simplicity, abbreviated names were assigned to each survey group (Table 7.1); these names are used throughout this thesis.

The mean age of the BCHS survey groups varied from 50.4 years (Group C) to 57.3 years (Group G). Comparison of respondent age using one-way analysis of variance (ANOVA) showed that age was similar across all sampling groups, with the exception of Groups E and G (Figure A1). Gender comparison showed that the LC InfoCan group (E) contained the highest percentage of male respondents (58.4%), while the Baseline group (A) contained the lowest (33.9%). With the exception of Group E, the gender distribution was similar across all groups, with a higher percentage of female than male respondents (Table A2).

Comparison of education level showed that the LC Short group (F) contained the highest percentage of persons with graduate-level education (20.1%), while the lowest percentage was in the C Incentive group (16.4%). Examining the highest education achieved further, the C Short Paper group (G) contained the highest percentage of secondary-level graduates (20.6%), while Group F contained the lowest percentage in this category (15.8%). The distributions of respondent education levels were statistically comparable across all sample groups (Table A3).
Table 7.1 – Demographics of sampling groups

| Survey Groups      | A          | B           | C           | D            | E          | F          | G             |
|--------------------|------------|-------------|-------------|--------------|------------|------------|---------------|
| Sample size        | 1000       | 1000        | 1000        | 1000         | 2000       | 1000       | 1000          |
| Response freq (%)  | 168 (16.8) | 198 (19.8)  | 205 (20.5)  | 281 (28.1)   | 592 (29.6) | 332 (33.2) | 455 (45.5)    |
| Name               | Baseline   | L Incentive | C Incentive | LC Incentive | LC InfoCan | LC Short   | C Short Paper |
| Age, mean          | 53.4       | 51.0        | 50.4        | 51.9         | 57.2       | 51.2       | 57.3          |
| SD                 | 16.0       | 16.5        | 16.4        | 15.4         | 14.5       | 17.1       | 17.1          |
| Gender, freq (%)   |            |             |             |              |            |            |               |
| Male               | 57 (33.9)  | 79 (40.0)   | 86 (41.7)   | 116 (41.3)   | 346 (58.4) | 147 (44.3) | 189 (41.9)*   |
| Female             | 111 (66.1) | 119 (60.0)  | 119 (58.3)  | 165 (58.7)   | 246 (41.6) | 185 (55.7) | 262 (55.9)*   |
| Education, freq (%) |           |             |             |              |            |            |               |
| No diploma         | 25 (14.9)  | 20 (10.1)   | 19 (9.3)    | 29 (10.3)    | 66 (11.1)  | 33 (9.9)   | 40 (8.8)      |
| Secondary          | 27 (16.1)  | 39 (19.6)   | 32 (15.6)   | 47 (16.7)    | 91 (15.4)  | 43 (13.0)  | 83 (18.2)     |
| Post-secondary     | 83 (49.4)  | 104 (52.5)  | 117 (57.1)  | 151 (53.7)   | 320 (54.1) | 175 (52.7) | 247 (54.3)    |
| Graduate level     | 27 (16.1)  | 28 (14.1)   | 26 (12.7)   | 46 (16.4)    | 92 (15.5)  | 55 (16.6)  | 73 (16.0)     |
| Not stated         | 6 (3.6)    | 7 (3.5)     | 11 (5.4)    | 8 (2.8)      | 23 (3.9)   | 26 (7.8)   | 12 (2.6)      |

* Gender was missing for four respondents in Group G
SD = standard deviation

Note: Description of sampling groups
Group A – Long questionnaire, Canada Post sampling frame, web-based survey
Group B – Instant lottery, long questionnaire, Canada Post sampling frame, web-based survey
Group C – Prepaid coin, long questionnaire, Canada Post sampling frame, web-based survey
Group D – Prepaid coin, instant lottery, long questionnaire, Canada Post sampling frame, web-based survey
Group E – Prepaid coin, instant lottery, long questionnaire, Info Canada sampling frame, web-based survey
Group F – Prepaid coin, instant lottery, short questionnaire, Canada Post sampling frame, web-based survey
Group G – Prepaid coin, short questionnaire, Canada Post sampling frame, paper survey

7.2 Response Rate Analyses Results

7.2.1 Response rates

The raw response rates are shown in Table 7.2. In total, 21 participants requested to complete a paper form of the survey instead of the online form: three from Group A, three from Group C, one from Group D, nine from Group E, and five from Group F. Accordingly, these 21 paper returns, initially tallied under the paper group (G), were reassigned to the respondents' originally allocated groups for the adjusted figures.

Adjusted response rates (intention-to-treat analysis) were: 17.1% in the Baseline group, 19.8% in the L Incentive group, 20.8% in the C Incentive group, 28.2% in the LC Incentive group, 30.1% in the LC InfoCan group, 33.7% in the LC Short group, and 43.4% in the C Short Paper group (Table 7.2 and Figure 7.1).
Table 7.2 – Initial and adjusted response rates and frequencies

| Survey Groups     | Surveys sent | Initial freq | Initial response rate (%) | Adjusted freq | Adjusted response rate (%) |
|-------------------|--------------|--------------|---------------------------|---------------|----------------------------|
| Baseline (A)      | 1000         | 168          | 16.8                      | 171           | 17.1                       |
| L Incentive (B)   | 1000         | 198          | 19.8                      | 198           | 19.8                       |
| C Incentive (C)   | 1000         | 205          | 20.5                      | 208           | 20.8                       |
| LC Incentive (D)  | 1000         | 281          | 28.1                      | 282           | 28.2                       |
| LC InfoCan (E)    | 2000         | 592          | 29.6                      | 601           | 30.1                       |
| LC Short (F)      | 1000         | 332          | 33.2                      | 337           | 33.7                       |
| C Short Paper (G) | 1000         | 455          | 45.5                      | 434           | 43.4                       |

Figure 7.1 – Response rates of BCHS sampling groups (adjusted response rates, %, by survey group)

Note: Description of sampling groups
Baseline – Long questionnaire, Canada Post sampling frame, web-based survey
L Incentive – Instant lottery, long questionnaire, Canada Post sampling frame, web-based survey
C Incentive – Prepaid coin, long questionnaire, Canada Post sampling frame, web-based survey
LC Incentive – Prepaid coin, instant lottery, long questionnaire, Canada Post sampling frame, web-based survey
LC InfoCan – Prepaid coin, instant lottery, long questionnaire, Info Canada sampling frame, web-based survey
LC Short – Prepaid coin, instant lottery, short questionnaire, Canada Post sampling frame, web-based survey
C Short Paper – Prepaid coin, short questionnaire, Canada Post sampling frame, paper survey

7.2.2 Comparison of Response Rates between Survey Groups

The six a priori comparisons were examined using the chi-square test of independence and the Marascuilo procedure.

1. Instant lottery
The design of Group B differed from Group A by an additional instant $100 lottery. The difference in response rate was 2.7% (B 19.8%, A 17.1%, p = 0.134). This result, along with the adjustment for multiple comparisons (Table 7.4), suggested that the instant lottery did not have a statistically significant impact on response rate compared to an end-of-study lottery.

2. $2 prepaid coin incentive
The design of Group C differed from Group A by the inclusion of an additional $2 prepaid coin incentive. The difference in response rate was 3.7% (C 20.8%, A 17.1%, p = 0.040). This result, supported by the Marascuilo procedure (Table 7.4), suggested that the implementation of a prepaid coin incentive elicited a significant effect on response rate.

3. Coin incentive and instant lottery
The Group D survey contained both the prepaid coin incentive and the instant lottery, while Group A contained neither.
The inclusion of both monetary incentives resulted in an 11.1% increase in response rate (D 28.2%, A 17.1%, p < 0.001). This result, supported by the Marascuilo procedure (Table 7.4), suggested that using both the instant lottery and the coin incentive elicited a statistically significant difference in response rate.

4. Info Canada sampling frame in the presence of pre-existing monetary incentives
The Group D and E surveys both contained the instant lottery and coin incentives. Group E survey recipients were selected from the Info Canada sampling frame and received personally addressed invitation letters. The observed difference in response rate was 1.9% (E 30.1%, D 28.2%, p = 0.315). This result, supported by the adjustment for multiple comparisons (Table 7.4), suggested that the use of the Info Canada sampling frame in the presence of pre-existing monetary incentives did not significantly impact the response rate.

5. Short questionnaire in the presence of pre-existing monetary incentives
Both the Group D and F surveys contained the instant lottery and coin incentives; Group F participants received the short questionnaire. The difference in response rate was 5.5% (F 33.7%, D 28.2%, p = 0.009). This result, supported by the Marascuilo procedure (Table 7.4), suggested that the shorter survey, in the presence of pre-existing monetary incentives, significantly impacted the response rate.

6. Paper survey without instant lottery, in the presence of a pre-existing shortened questionnaire and prepaid coin incentive
Groups F and G both contained the short questionnaire and the prepaid coin incentive. The Group G survey was administered in paper format, while the Group F survey was administered online with an additional instant lottery. The difference in response rate was 9.7% (G 43.4%, F 33.7%, p < 0.001). This result, supported by the adjustment for multiple comparisons (Table 7.4), suggested that, in the presence of a pre-existing shortened questionnaire and prepaid coin incentive, the paper survey elicited a statistically significant difference in response rate compared to offering an instant lottery in a web-based survey.

Exploratory comparisons

Additional pairwise exploratory comparisons were made to examine the effects of the instant lottery and the prepaid coin incentive when used in addition to, or in place of, one another. The effect of the short questionnaire was also examined when used in combination with both forms of monetary incentive.
1. Coin incentive in the presence of a pre-existing instant lottery
The use of the coin incentive in Group D resulted in an 8.4% increase in response rate compared to Group B (D 28.2%, B 19.8%, p < 0.001). This result, supported by the adjustment for multiple comparisons (Table 7.4), suggested that adding a coin incentive to a pre-existing lottery incentive significantly impacted the response rate.

2. Instant lottery in the presence of a pre-existing coin incentive
The difference in response rate between Groups D and C was 7.4% (D 28.2%, C 20.8%, p < 0.001). This result, supported by the adjustment for multiple comparisons (Table 7.4), suggested that implementing an instant lottery in the presence of a pre-existing coin incentive elicited a statistically significant increase in response rate.

3. $2 prepaid coin incentive in place of the instant lottery
The Group C survey contained a coin incentive, while the Group B survey contained an instant lottery. The use of the coin incentive in place of the instant lottery resulted in a 1% increase in response rate (C 20.8%, B 19.8%, p = 0.617). This result, supported by the adjustment for multiple comparisons (Table 7.4), suggested a non-significant increase in response rate when the coin incentive was used instead of the instant lottery.

4. Shortened questionnaire, instant lottery, and coin incentive
The Group F survey contained both the instant lottery and the prepaid coin incentive; in addition, the questionnaire was shortened. The difference in response rate was 16.6% (F 33.7%, A 17.1%, p < 0.001). This result, supported by the adjustment for multiple comparisons (Table 7.4), suggested that implementing both monetary incentives together with the shortened questionnaire elicited a statistically significant increase in response rate.
Table 7.3 – Pairwise comparisons of response rates for the experimental groups

| Groups | Factors differed                                  | Common factors                        | Chi-sq value | p-value |
|--------|---------------------------------------------------|---------------------------------------|--------------|---------|
| A priori comparisons |                                     |                                       |              |         |
| B – A  | Instant lottery                                   |                                       | 2.25         | 0.134   |
| C – A  | Coin incentive                                    |                                       | 4.22         | 0.040   |
| D – A  | Coin incentive, instant lottery                   |                                       | 34.53        | < 0.001 |
| E – D  | Info Canada                                       | Long, instant lottery, coin incentive | 1.01         | 0.315   |
| F – D  | Shortened survey                                  | Instant lottery, coin incentive       | 6.82         | 0.009   |
| F – G  | No instant lottery and paper form                 | Shortened survey, coin incentive      | 19.45        | < 0.001 |
| Exploratory comparisons |                                  |                                       |              |         |
| D – B  | Coin incentive                                    | Instant lottery                       | 18.88        | < 0.001 |
| D – C  | Instant lottery                                   | Coin incentive                        | 14.40        | < 0.001 |
| C – B  | Coin incentive instead of instant lottery         |                                       | 0.25         | 0.617   |
| F – A  | Shortened survey, instant lottery, coin incentive |                                       | 71.84        | < 0.001 |

Table 7.4 – Pairwise comparisons of response rates using the Marascuilo procedure

| Proportions   | Absolute difference | Critical range | Significant* |
|---------------|---------------------|----------------|--------------|
| A priori comparisons |              |                |              |
| P(B) - P(A)   | 0.027               | 0.028          | No           |
| P(C) - P(A)   | 0.037               | 0.028          | Yes          |
| P(D) - P(A)   | 0.111               | 0.030          | Yes          |
| P(E) - P(D)   | 0.019               | 0.033          | No           |
| P(F) - P(D)   | 0.055               | 0.034          | Yes          |
| P(G) - P(F)   | 0.097               | 0.035          | Yes          |
| Exploratory comparisons |           |                |              |
| P(D) - P(B)   | 0.084               | 0.031          | Yes          |
| P(D) - P(C)   | 0.074               | 0.031          | Yes          |
| P(C) - P(B)   | 0.010               | 0.029          | No           |
| P(F) - P(A)   | 0.166               | 0.031          | Yes          |

* A difference is statistically significant (α = 0.05) if its absolute difference exceeds the critical range value.

7.2.3 Multivariable analysis of the effects of survey design factors on response rate

With the exception of InfoCan, all coefficients in the multiple logistic regression model were statistically significant (Table 7.5). Compared to the reference category of each survey factor, the survey factors of paper mode, shorter questionnaire, instant lottery, and coin incentive were associated with higher response rates. There was no significant association between sampling frame and response rate.

Table 7.5 – Estimated odds ratios (OR) and 95% confidence intervals (CI)

| Survey Factor                                  | OR   | 95% CI (2.5%) | 95% CI (97.5%) |
|------------------------------------------------|------|---------------|----------------|
| InfoCan (vs. Canada Post, ref 1.00)            | 1.14 | 0.98          | 1.34           |
| Instant lottery (vs. end-of-study lottery, ref 1.00) | 1.35 | 1.16    | 1.58           |
| Short survey (vs. long survey, ref 1.00)       | 1.35 | 1.13          | 1.62           |
| Coin (vs. no coin, ref 1.00)                   | 1.44 | 1.23          | 1.67           |
| Paper survey (vs. online survey, ref 1.00)     | 2.04 | 1.61          | 2.59           |

Figure 7.2 – Logistic regression estimated odds ratios for individual survey factors. A horizontal line is placed at OR = 1 (no effect).

From the lowest to the highest, the odds of responding were 14% higher for Info Canada compared to Canada Post (OR = 1.14, 95% CI 0.98-1.34).
The shorter questionnaire had 35% higher odds of response compared to the longer questionnaire (OR = 1.35, 1.13–1.62). The odds of responding were also 35% higher for the instant lottery compared to no instant lottery (OR = 1.35, 1.16–1.58). The prepaid $2 coin incentive had 44% higher odds of response compared to no incentive (OR = 1.44, 1.23–1.67). Lastly, the paper survey had 104% higher odds of response compared to the online survey (OR = 2.04, 1.61–2.59).

7.2.4 Using logistic regression coefficients to estimate the expected probabilities of response within the model due to survey factors

The estimated coefficients were used to construct the equation for the logit of survey response:

Logit(survey response) = -0.93 + (-0.71) Mode + (0.14) Source + (0.30) Length + (0.30) Lottery + (0.36) Coin

This equation was used to estimate the expected probability of response for an individual in a specific type of group (by substituting 0's and 1's for the various factors) using the coding shown in Table 6.2. Results showed that under the reference conditions (Canada Post sampling frame, end-of-study lottery, no coin incentive, long questionnaire, and online mode of administration), the expected probability of response was 16% (95% CI 15%–18%). The estimated probability of response was 18% (17%–19%) when selecting participants from the Info Canada sampling frame (keeping other factors at the reference level), 21% (19%–23%) when the instant lottery was offered, 21% (19%–22%) when using the shorter questionnaire, 22% (20%–24%) when offering the prepaid coin incentive, and 28% (25%–31%) when using the paper format survey (Table 7.6).
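To make the calculation concrete, the following minimal Python sketch inverts the fitted logit to reproduce the expected probabilities above. The 0/1 coding is an assumption standing in for Table 6.2 (in particular, Mode = 1 for the online reference condition, which is consistent with the probabilities reported here), and the function name is illustrative.

```python
import math

# Fitted coefficients from the logit equation above.
COEF = {"intercept": -0.93, "mode": -0.71, "source": 0.14,
        "length": 0.30, "lottery": 0.30, "coin": 0.36}

def expected_response_probability(mode=1, source=0, length=0, lottery=0, coin=0):
    """Invert the logit: p = 1 / (1 + exp(-logit))."""
    logit = (COEF["intercept"] + COEF["mode"] * mode + COEF["source"] * source
             + COEF["length"] * length + COEF["lottery"] * lottery
             + COEF["coin"] * coin)
    return 1.0 / (1.0 + math.exp(-logit))

# Reference group (online, Canada Post, long survey, end-of-study lottery, no coin):
print(round(expected_response_probability(), 2))                   # ~0.16
# Paper mode, all other factors at the reference level:
print(round(expected_response_probability(mode=0), 2))             # ~0.28
# Short online survey with both monetary incentives (group F's combination):
print(round(expected_response_probability(length=1, lottery=1, coin=1), 2))  # ~0.34
```

Substituting group F's combination of factors yields an expected probability of approximately 0.34, close to that group's observed response rate of 33.7%.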
Table 7.6 – Expected probabilities of response and 95% confidence intervals for individual survey factors while keeping other factors at the reference level

Survey factor   Probability of response   95% CI (2.5%, 97.5%)
InfoCan         0.18                      0.17 – 0.19
Lottery         0.21                      0.19 – 0.23
Short           0.21                      0.19 – 0.22
Coin            0.22                      0.20 – 0.24
Paper           0.28                      0.25 – 0.31

Figure 7.3 – Expected probability of response for survey factors

7.3 Cost Analyses Results

Table 7.7 – Cost table for all sampling groups

Item / Group                                     A          B          C          D          E          F          G
Sample size (n)                                  1000       1000       1000       1000       2000       1000       1000
Mailing costs ($)                                1559.23    1542.42    1527.25    1674.13    2985.32    1444.84    2782.05
Mail return costs ($)                            0.00       0.00       0.00       0.00       0.00       0.00       338.52
Lottery ($)                                      285.70     285.70     285.70     285.70     285.70     285.70     285.70
Cash incentive
  Twoonies ($)                                   0.00       0.00       2000.00    2000.00    4000.00    2000.00    2000.00
  Volunteer food ($)                             0.00       0.00       18.80      18.80      18.80      18.80      18.80
  Coordinator salary ($)                         0.00       0.00       287.50     287.50     575.00     287.50     287.50
Supplies
  Paper survey + invitation letter ($)           0.00       0.00       0.00       0.00       0.00       0.00       1214.13
  Invitation and reminder mail for online ($)    2021.01    2021.01    2021.01    2021.01    4042.02    2021.01    0.00
  Reminder mail for paper ($)                    0.00       0.00       0.00       0.00       0.00       0.00       1515.90
  Office supply + photocopying ($)               35.46      35.46      35.46      35.46      35.46      35.46      35.46
Coordinator salary ($)                           8618.96    8618.96    8618.96    8618.96    8618.96    8618.96    8618.96
Addresses
  Canada Post ($)                                115.92     115.92     115.92     115.92     0.00       115.92     115.92
  Info Canada ($)                                0.00       0.00       0.00       0.00       316.53     0.00       0.00
Survey programming
  General ($)                                    57.14      57.14      57.14      57.14      57.14      57.14      57.14
  Programming + debugging ($)                    66.67      66.67      66.67      66.67      66.67      66.67      0.00
  Programming of instant winner ($)              0.00       100.00     0.00       100.00     100.00     100.00     0.00
Data entry ($)                                   0.00       0.00       0.00       0.00       0.00       0.00       600.60
Total cost ($)                                   12960.09   12843.28   15034.41   15281.29   21101.60   15052.00   17870.68
Response frequency                               171        198        208        282        601        337        434
Cost/survey sent ($)                             12.76      12.84      15.03      15.28      10.55      15.05      17.87
Cost/response ($)                                74.62      64.87      72.28      54.19      35.11      44.66      41.18

7.3.1 Cost per survey sent for individual sampling groups

The final estimated cost per survey sent was as follows: $12.76/survey for the baseline group, $12.84/survey for the L incentive group, $15.03/survey for the C incentive group, $15.28/survey for the LC incentive group, $16.14/survey for the LC Info Canada group, $15.05/survey for the LC short group, and $17.87/survey for the C short paper group (Table 7.8).

Table 7.8 – Cost per survey sent

Survey group        Cost/survey sent ($)
Baseline (A)        12.76
L Incentive (B)     12.84
C Incentive (C)     15.03
LC Incentive (D)    15.28
LC InfoCan* (E)     16.14
LC Short (F)        15.05
C Short Paper (G)   17.87
* Adjusted cost/survey sent for the Info Canada sampling group

Figure 7.4 – Cost/survey sent for individual survey groups (adjusted for the Info Canada sampling group)

7.3.2 Cost per response for individual sampling groups

The final calculated cost per response was as follows: $74.62/response for the baseline group, $64.87/response for the L incentive group, $72.28/response for the C incentive group, $54.19/response for the LC incentive group, $53.63/response for the adjusted LC Info Canada group, $44.66/response for the LC short group, and $41.18/response for the C short paper group (Table 7.9).

Table 7.9 – Cost per response

Survey group        Cost/response ($)
Baseline (A)        74.62
L Incentive (B)     64.87
C Incentive (C)     72.28
LC Incentive (D)    54.19
LC InfoCan* (E)     53.63
LC Short (F)        44.66
C Short Paper (G)   41.18
* Adjusted cost/response for the Info Canada sampling group

Figure 7.5 – Cost/response for individual survey groups (adjusted for the Info Canada sampling group)

The calculated costs per survey sent and per response were useful in determining the most cost-saving survey design methods specific to the combination of factors in each survey group. Multiple linear regression was used to observe the effect of each factor on cost while adjusting for the others; a sketch of this regression is given below.
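As a sketch of how these regressions can be reproduced, the following Python snippet fits both cost outcomes by ordinary least squares on indicator variables for the five design factors. The design matrix encodes the group compositions described above, and the adjusted per-survey and per-response values for group E (Tables 7.8 and 7.9) are used; up to rounding, the resulting coefficients should match Tables 7.10 and 7.11.

```python
import numpy as np

# One row per sampling group (A-G); columns: InfoCan, Lottery, Coin, Short, Paper.
X = np.array([
    [0, 0, 0, 0, 0],  # A: baseline
    [0, 1, 0, 0, 0],  # B: instant lottery
    [0, 0, 1, 0, 0],  # C: coin incentive
    [0, 1, 1, 0, 0],  # D: lottery + coin
    [1, 1, 1, 0, 0],  # E: lottery + coin, Info Canada frame
    [0, 1, 1, 1, 0],  # F: lottery + coin, short questionnaire
    [0, 0, 1, 1, 1],  # G: coin, short questionnaire, paper mode
])
# Outcomes from Tables 7.8 and 7.9 (adjusted values for group E).
cost_per_sent = np.array([12.76, 12.84, 15.03, 15.28, 16.14, 15.05, 17.87])
cost_per_resp = np.array([74.62, 64.87, 72.28, 54.19, 53.63, 44.66, 41.18])

design = np.column_stack([np.ones(len(X)), X])  # prepend an intercept column
names = ["intercept", "InfoCan", "Lottery", "Coin", "Short", "Paper"]
for y, label in [(cost_per_sent, "cost/survey sent"), (cost_per_resp, "cost/response")]:
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)  # OLS fit
    print(label, dict(zip(names, np.round(beta, 2))))
```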
7.3.3 Effects of survey design factors on cost per survey sent

The results of the regression showed that four survey factors were associated with an increase in the cost per survey sent. The increase in cost was $2.99 for the paper (as opposed to online) survey mode, $2.36 for the $2 coin, $0.90 for the Info Canada sampling frame, and $0.17 for the instant lottery. Reduced survey length was associated with a $0.19 decrease in cost per survey sent (Table 7.10).

Table 7.10 – Multiple linear regression coefficients for cost per survey sent

Survey factor   β̂
Paper           2.99
Coin            2.36
InfoCan         0.90
Lottery         0.17
Short           -0.19

Figure 7.6 – Multiple linear regression coefficients for the effects of survey design factors on cost per survey sent. A horizontal line is placed at β̂ = 0 (no effect).

7.3.4 Effects of survey design factors on cost per response

The results of the regression showed that the implementation of all survey factors studied was associated with a decrease in the cost per response. The reduction in cost was $2.65 for the Info Canada sampling frame, $6.51 for adding the $2 coin, $11.62 for reducing the survey length, $13.92 for implementing the $100 instant lottery, and $17.40 for the paper (as opposed to online) survey mode (Table 7.11).

Table 7.11 – Multiple linear regression coefficients for cost per response

Survey factor   β̂
InfoCan         -2.65
Coin            -6.51
Short           -11.62
Lottery         -13.92
Paper           -17.40

Figure 7.7 – Multiple linear regression coefficients for the effects of survey design factors on cost per response.

7.4 Data Representativeness Analysis Results

7.4.1 Socio-demographic Variables

Gender
In the weight-adjusted CCHS, there was an approximately equal percentage of male and female respondents, whereas there were considerably more female respondents in all BCHS sampling groups except the Info Canada group (Group E), in which there was a higher percentage of male respondents than female (Figure 7.8). All groups exhibited a significant difference in gender distribution compared to that of the 2010 CCHS.
Figure 7.8 – Percentage distribution of gender in CCHS and BCHS

Age
When comparing the percentage distribution of age between the BCHS and the CCHS, the younger populations were under-represented in all BCHS survey groups (Table C1). In groups E (Info Canada) and G (paper survey), the percentage of respondents under the age of 30 was 4.7% and 5.9%, respectively, compared to 20.3% in the CCHS. All BCHS sampling groups exhibited a statistically significant difference in age distribution compared to the CCHS (Figure 7.9).

Figure 7.9 – Percentage distribution of age in CCHS and BCHS

Marital Status
Comparison of the distribution of marital status between BCHS and CCHS respondents suggested overall comparability between the two groups. However, the Info Canada sampling group (E) contained a noticeably different percentage of married respondents compared to the CCHS (Figure 7.10), which was confirmed with subgroup analysis (Table C2). Groups E (Info Canada), F (shortened questionnaire), and G (paper survey) showed a statistically significant difference in the percentage of unmarried individuals compared to the CCHS data (Table C3).

Figure 7.10 – Percentage distribution of marital status for CCHS and BCHS

Total Annual Household Income
When comparing the distribution of total annual household income between the BCHS and the CCHS, it was evident that group E (LC Info Canada) contained the highest percentage of high-income respondents (40.5%) and the lowest percentage of low-income respondents (17.2%). Subgroup analysis showed that group E had a statistically higher percentage of high-income respondents (≥ $80,000) compared to the CCHS distribution (Table C5). Furthermore, group G (C short paper) contained the most comparable percentage of respondents to the CCHS distribution across all income categories (Figure 7.11). Significant differences in the distribution of income were produced for sampling groups A (baseline), B (L incentive), D (LC incentives), E (LC Info Canada), and F (LC short).

Figure 7.11 – Percentage distribution of total annual household income for CCHS and BCHS

7.4.2 Health Variables

Perceived Health
Comparison of the perceived general health distribution between the BCHS and the CCHS suggested a greater percentage of respondents with excellent health in the CCHS (22.0%) (Figure 7.12).
Subgroup analysis showed that, with the exception of groups A and D, all other BCHS groups exhibited a statistically lower percentage of respondents with excellent health (Table C4). Within the BCHS sampling groups, group D (LC incentive) contained the highest percentage of respondents in this category (19.2%). It was interesting to note that while group B (L incentive) and group C (C incentive) received similar percentages of responses for excellent health (13.6% and 13.7%, respectively), the lottery incentive group contained the lowest percentage of respondents with fair/poor health (10.6%) and the coin incentive group contained the highest percentage of respondents with fair/poor health (16.1%). Sampling groups B, C, E (LC InfoCan), and G (C short paper) showed a statistically significant difference compared to the CCHS distribution.

Figure 7.12 – Percentage distribution of general health in BCHS and CCHS

Arthritis
The prevalence of arthritis was 16.4% in the CCHS 2010 (Figure 7.13). In the BCHS survey, the highest prevalence of arthritis belonged to group A (baseline, 23.2%), while the lowest prevalence belonged to group F (LC short, 13.9%). With the exception of group A (baseline) and group G (C short paper), arthritis prevalence in all BCHS sampling groups was statistically comparable to the CCHS.

Figure 7.13 – Prevalence of arthritis in CCHS and BCHS sampling groups

Asthma
The prevalence of asthma within the CCHS 2010 was 7.5%. In the BCHS data, group D (LC incentive) presented the highest prevalence at 9.3%, while group C (C incentive) exhibited the lowest prevalence at 3.9% (Figure 7.14). Overall, there was no statistically significant difference in the prevalence of asthma between the CCHS and any of the BCHS sampling groups.

Figure 7.14 – Prevalence of asthma in CCHS and BCHS sampling groups

Diabetes
The prevalence of diabetes within the CCHS 2010 was 5.7% (Figure 7.15). In the BCHS, the lowest prevalence of diabetes belonged to group A (baseline, 3.0%), while the highest prevalence belonged to group E (LC InfoCan, 9.5%). Groups E and G (C short paper) exhibited a significantly higher rate of diabetes than the CCHS.

Figure 7.15 – Prevalence of diabetes in CCHS and BCHS sampling groups

Heart Disease
The prevalence of heart disease in the CCHS 2010 was 3.9%. In the BCHS, the prevalence was highest within group E (LC InfoCan, 6.8%), while the lowest prevalence was within group B (L incentive, 2.0%) (Figure 7.16). Group E exhibited a statistically higher rate of heart disease compared to the CCHS.
Figure 7.16 – Prevalence of heart disease in CCHS and BCHS sampling groups

Hypertension
The prevalence of hypertension in the CCHS 2010 was 16.3%. Within the BCHS sampling groups, it varied from 14.1% (group B, L incentive) to 23.3% (group E, LC InfoCan). The rates of hypertension in groups E and G (C short paper) were significantly higher compared to the CCHS (Figure 7.17).

Figure 7.17 – Prevalence of hypertension in CCHS and BCHS sampling groups

7.5 The Effect of Survey Features on Respondent Characteristics

Two survey features were examined for their effect on the socio-demographic characteristics of the respondents. The sampling frame was split into Info Canada and Canada Post. While group E respondents belonged to the Info Canada group, groups A, B, C, D, F, and G were collapsed into the Canada Post group. The other feature under investigation was survey mode, which included paper and online surveys. Similarly, group G was the paper survey group and groups A, B, C, D, and F were collapsed into the online survey group (note that group E was excluded from the latter group because the Info Canada sampling source could act as a confounder).

7.5.1 The effect of sampling frame (Info Canada vs. Canada Post) on respondent characteristics

As shown earlier, when compared to the CCHS 2010 data, the Info Canada sampling frame appeared to have a larger percentage of male respondents, whereas the Canada Post sampling frame contained a higher percentage of female respondents (Table 7.12). Both BCHS sampling frames showed statistically significant differences when compared to the weight-adjusted CCHS gender percentages (p < 0.001 for both sampling frames); this was true for all variables studied (Table 7.12). When compared to the age distribution of the weight-adjusted CCHS data, both BCHS sampling frames showed a lower percentage of younger respondents and a higher percentage of older participants (Table 7.13). The general health distribution of the weight-adjusted CCHS suggested a higher percentage of participants in excellent health when compared to both the Info Canada and Canada Post survey groups (Table 7.13). Compared to the two BCHS sampling frames, the CCHS data showed a higher percentage of respondents who were single (never married) (Table 7.13). Lastly, the Info Canada sampling frame showed a distinctly higher percentage of higher-income respondents (Table 7.13).
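The comparisons reported in Tables 7.12 and 7.13 are consistent with chi-square goodness-of-fit tests against the weight-adjusted CCHS distribution (note the df column, which equals the number of categories minus one). As a minimal illustration of such a test, the Python sketch below compares a gender split to the CCHS shares; the observed counts are hypothetical stand-ins, since only percentages are reported here.

```python
import numpy as np
from scipy.stats import chisquare

# Hypothetical observed counts for one BCHS sampling frame (female, male);
# the thesis reports percentages only, so these numbers are illustrative.
observed = np.array([352, 247])

# Weight-adjusted CCHS shares used as the expected distribution.
cchs_shares = np.array([0.491, 0.509])
expected = cchs_shares * observed.sum()

# Goodness-of-fit test with df = k - 1 = 1, matching the df column in the tables.
stat, p = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-sq = {stat:.2f}, p = {p:.4g}")
```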
Table 7.12 – Percentage distribution of socio-demographic and general health variables between CCHS and BCHS sampling frames

Variable                         CCHS (%)   Info Canada (%)   Canada Post (%)   df
Gender
  Male                           50.9       58.6              41.1
  Female                         49.1       41.4              58.7
  P value                                   < 0.001           < 0.001           1
Age (years)
  ≤ 29                           20.3       4.7               9.9
  30–49                          35.4       22.5              31.8
  50–64                          26.3       43.6              31.1
  ≥ 65                           18.0       29.2              27.1
  P value                                   < 0.001           < 0.001           3
Marital status
  Married                        55.2       70.4              53.9
  Common-law                     8.2        5.4               9.1
  Widowed/Sep/Div                13.4       12.5              16.1
  Single/never married           22.9       8.8               16.1
  P value                                   < 0.001           < 0.001           3
Total annual household income
  ≤ $39,999                      22.6       17.2              21.8
  $40,000 – $79,999              26.4       29.9              32.1
  ≥ $80,000                      30.5       40.5              32.7
  P value                                   < 0.001           0.002             2
General health
  Excellent                      22.0       14.5              14.9
  Very good                      37.4       37.3              38.0
  Good                           28.8       35.1              32.6
  Fair/poor                      11.7       12.5              12.8
  P value                                   < 0.001           < 0.001           3
Note: all p values are with comparison to the CCHS distribution.

Table 7.13 – Analysis of differences in the percentage of persons in selected categories of socio-demographic and general health variables between the CCHS and two BCHS sampling frames

Variable                 CCHS     Info Canada   Canada Post   df
Age (years)
  ≤ 29 (%)               20.3     4.7           9.9
  Other (%)              79.7     95.3          90.0
  P value                         ≤ 0.0001      ≤ 0.0001      1
  ≥ 65 (%)               18.0     29.2          27.1
  Other (%)              82.0     70.8          72.8
  P value                         ≤ 0.0001      ≤ 0.0001      1
Marital status
  Married (%)            55.2     70.4          53.9
  Other (%)              44.8     26.7          41.3
  P value                         ≤ 0.0001      0.3055        1
  Not married (%)        22.9     8.8           16.1
  Other (%)              76.8     88.3          79.1
  P value                         ≤ 0.0001      ≤ 0.0001      1
Income
  ≥ $80,000 (%)          30.5     40.5          32.7
  Other (%)              49.0     47.1          53.9
  P value                         0.0128        0.6892        1
General health
  Excellent (%)          22.0     14.5          14.9
  Other (%)              77.9     84.9          83.4
  P value                         ≤ 0.0001      ≤ 0.0001      1
Note: data exclude "not stated" responses.

7.5.2 The effect of survey mode (paper vs. online) on respondent characteristics (within the Canada Post sampling frame)

Percentage distributions of all variables except income in the paper group were significantly different compared to those of the CCHS 2010 (Table 7.14). When compared to the CCHS data, both the paper and online BCHS groups showed an apparent under-representation of younger persons (Table 7.15). Additionally, the paper and online surveys showed a statistically higher percentage of older respondents (Table 7.15).
Comparison between the two survey modes showed a statistically higher percentage of older respondents within the paper survey group (p < 0.0001). Subgroup analysis of marital status for both survey modes showed a statistically lower percentage of participants who were single or never married compared to the CCHS data (Table 7.15). Results also showed no statistically significant difference in income distribution between the BCHS paper survey group and the CCHS sampling group (p = 0.625); however, a significant difference was detected between the BCHS online survey group and the CCHS data (Table 7.14). Lastly, the distribution of general health showed a statistically higher percentage of CCHS participants in excellent health compared to both BCHS survey modes (Table 7.15).

Table 7.14 – Percentage distribution of socio-demographic and general health variables between CCHS and BCHS survey modes

Variable                         CCHS (%)   Paper (%)   Online (%)   df
Gender
  Male                           50.9       41.5        40.9
  Female                         49.1       57.8        59.1
  P value                                   < 0.001     < 0.001      1
Age (years)
  ≤ 29                           20.3       5.9         10.7
  30–49                          35.4       28.8        32.5
  50–64                          26.3       28.1        31.7
  ≥ 65                           18.0       36.5        25.2
  P value                                   < 0.001     < 0.001      3
Marital status
  Married                        55.2       51.9        54.4
  Common-law                     8.2        6.6         9.6
  Widowed/Sep/Div                13.4       16.9        15.9
  Single/never married           22.9       11.0        17.1
  P value                                   < 0.001     < 0.001      3
Total annual household income
  ≤ $39,999                      22.6       22.6        21.9
  $40,000 – $79,999              26.4       27.7        32.9
  ≥ $80,000                      30.5       28.4        33.7
  P value                                   0.625       < 0.001      2
General health
  Excellent                      22.0       11.2        15.7
  Very good                      37.4       33.4        38.9
  Good                           28.8       33.4        32.4
  Fair/poor                      11.7       12.7        12.8
  P value                                   < 0.001     < 0.001      3
Note: all p values are with comparison to the CCHS distribution.

Table 7.15 – Analysis of differences in the percentage of persons in selected categories of socio-demographic and general health variables between the CCHS and two BCHS survey administration methods

Variable                 CCHS     Paper       Online      df
Age (years)
  ≤ 29 (%)               20.3     5.9         10.7
  Other (%)              79.7     93.4        89.4
  P value                         ≤ 0.0001    ≤ 0.0001    1
  ≥ 65 (%)               18.0     36.5        25.1
  Other (%)              81.7     62.8        74.9
  P value                         ≤ 0.0001    0.0003      1
Marital status
  Married (%)            55.2     51.9        54.4
  Other (%)              44.8     34.5        42.6
  P value                         0.075       0.5541      1
  Not married (%)        22.9     11.0        17.1
  Other (%)              76.8     75.4        79.9
  P value                         ≤ 0.0001    ≤ 0.0001    1
Income
  ≥ $80,000 (%)          30.5     28.4        33.6
  Other (%)              49.0     50.3        54.6
  P value                         0.4274      0.8875      1
General health
  Excellent (%)          22.0     11.2        15.7
  Other (%)              77.9     79.5        84.5
  P value                         ≤ 0.0001    ≤ 0.0001    1
Note: data exclude "not stated" responses.

8 Discussion

8.1 Overall response rates for the BCHS

Adjusted response rates using intention-to-treat analysis showed a general pattern of increasing survey response as more survey design features were added. The baseline survey (A) had a response rate of 17.1%. The response rate was 19.8% and 20.8% when the instant lottery and prepaid cash incentives were added, respectively. When both the instant lottery and prepaid cash incentives were offered, the response rate increased to 28.2%. The response rate was 30.1% when the Info Canada sampling frame was used together with the monetary incentives. The shortened survey, which included both forms of monetary incentive, achieved a response rate of 33.7%. Lastly, the short paper survey (G), which included a $2 coin incentive, achieved the highest response rate of 43.4%.

One interesting finding was that the instant lottery and prepaid cash incentives each produced a slight increase in response rate; however, when both incentives were offered together, the increase in response rate was substantial and statistically significant. These results suggest a possible interaction effect between the two monetary incentives, such that the effect of adding the instant lottery on response rate may depend on the presence of a prepaid cash incentive, and vice versa. However, in the logistic regression model with an interaction term (Table B1), the interaction between the instant lottery and the prepaid cash incentive was not significant and was therefore excluded from the final model. Since an interaction in logistic regression is a departure from a multiplicative model, it is possible that the interaction is additive and was therefore undetected by this method.

The overall response rate was 27.9% in this study. This was the mean value across all seven sampling groups, and thus was influenced by the lack of design enhancements in some groups, such as the baseline survey. The response rates are comparable to recent mail or web-based general population studies. The National Health and Wellness Survey (NHWS) is an internet-based survey administered to a representative sample of the US adult population; results from the 2009–2011 data showed a response rate of 21.7%90. A 2012 household study was carried out in Minnesota, USA using a sample of 1,300 households. Selected participants were invited to complete a paper survey regarding attitudes towards school-based depression and suicide screening and education. The overall response rate was 43%91.
One of the earliest studies to use mail invitations to prompt online survey participation was the Lewiston/Clarkston quality of life study, conducted in two rural towns: Lewiston, Idaho and Clarkston, Washington. Participants received a pre-notice letter, questionnaire, thank-you postcard, and a replacement questionnaire along with a $5 cash incentive. The mail survey achieved a response rate of 71%, while the online survey produced a response rate of 55%92. One reason that may explain the lower response rate observed in our study is the setting in which the two surveys took place. The BCHS was conducted in a random sample from the province of British Columbia, which was mostly urban, while the Lewiston/Clarkston study was conducted in two small rural towns. It is possible that in a more rural setting, social norms differ and the comfort level of responding to surveys is higher than in an urban setting93,94, leading to a higher propensity to respond to public surveys. Other factors such as topic salience and type of community may also contribute to the difference in response rates.

Dillman et al. stated that one should see response rates between 50% and 70% when solid implementation procedures are used in mixed-mode random household surveys of the general public95. This was clearly not observed, even though the study design of some BCHS groups fulfilled almost all criteria stated in the "tailored design method", except for the aspect of survey topic, which may not be appealing to certain sample groups12. Numerous studies have shown a consistent trend of decreasing response rates over recent decades1-8. Dillman, Dolsen and Machlis observed an average annual decline of 10% in response rates from the late 1980s to 199596. One may argue that the declining response rates can be attributed to changing public opinion regarding survey participation as well as increasing security concerns for online surveys9. Regardless, the current study showed that a substantial increase in online response rate (17.1% to 33.7%) could be achieved when using monetary rewards in combination with a relatively short (10 min) questionnaire.

8.2 The Effect of Individual Survey Factors on Response Rate

The results of the analysis of the effects of individual survey factors were in agreement with the prior hypotheses, in that all factors under examination achieved an increase in response rate compared to the reference comparison.

8.2.1 The effect of survey mode on survey response

Results of this study showed that the paper survey achieved a significantly higher response rate compared to the online survey. The difference in response rate between group G (C short paper) and group F (LC short) was 9.7%.
Controlling for other factors, the paper survey had 104% higher odds of response compared to the online survey. These results concurred with the study hypotheses. Dillman and colleagues stated that although online surveys offer many advantages such as speed, wider geographic distribution, and lower mailing costs, traditional mail surveys continue to be favored for a number of reasons12. One difference between paper and online surveys is accessibility. Mailed paper surveys deliver the questionnaire directly to participants and are easy to fill out. In contrast, online surveys usually require participants to have access to a computer with Internet connectivity. For security reasons, this step may become a tedious task that involves finding the survey link and inputting a given passcode. In addition, web illiteracy and a lack of computer knowledge pose a barrier to survey participation. This is especially prominent within the elderly population: recent research has shown that American seniors over the age of 65 continue to lag behind the rest of the population with regard to technology adoption and use26. The same study also showed that Internet and broadband use diminishes significantly around the age of 75. More complicated tasks such as login procedures, web navigation, and troubleshooting may be challenging to accomplish without clear instructions and timely support from the research staff. Lastly, trust remains a major issue for web survey participation. Respondents may be reluctant to take part due to fears of potential scams, invasion of privacy, or links containing computer viruses. In contrast, most people are comfortable with opening a mail envelope containing a paper questionnaire97. Although it has been established that mailed surveys generate a higher response rate compared to web surveys, there are still a number of disadvantages to the former method. Compared to online or interview surveys, one problem associated with mailed surveys is the inability to use skip logic or probing questions, which may limit survey efficiency and the depth of questions asked98-100. Therefore, it is important to continue exploring the use of design features in web surveys to maximize survey response.

8.2.2 The effects of monetary incentives on survey response

Instant Lottery
The difference in response rate between the L incentive group (B) and the baseline group (A) was not statistically significant. However, using data from all groups and controlling for other design factors in the multiple regression, participants who received the instant lottery incentive had 35% higher odds of response compared to those who received the end-of-study lottery.
There is currently a lack of literature on the implementation of instant lotteries as opposed to end-of-study lotteries. Regardless, results from the current study suggest that the use of an instant lottery as a survey incentive may have a significant effect on response rate. Previous studies showed that lottery incentives did not increase response rates significantly6,68,101. However, Goritz et al. (2006) reviewed the effectiveness of lotteries in 32 web-based studies and reported in a meta-analysis that end-of-study lotteries significantly increased response rates (OR 1.19, 1.13–1.25)55. Given our finding of an OR of 1.35 in this study, the use of an instant lottery may be a better alternative to the end-of-study lottery. It can be reasoned that the implementation of an end-of-study lottery requires a degree of trust between the surveyor and respondents, in that a lottery draw is expected to be carried out at the conclusion of the study. An instant lottery can therefore be used to reassure respondents of the survey's integrity.

Prepaid Cash Incentive
The difference in response rates between the C incentive group (C) and the baseline group (A) was 3.7%. After controlling for other factors, participants who were offered the coin incentive had 44% higher odds of response than those who were not. These results affirm conclusions from the past literature that cash incentives are effective in increasing response rates in all forms of surveys10,102-104.

Two forms of cash incentive (prepaid and postpaid) have been used in past studies. Prepaid cash incentives hold a number of advantages over the latter. First, the cost may be lower for prepaid cash incentives, given that the promised reward for a completed questionnaire is usually larger. Very few studies have examined the cost-effectiveness of prepaid incentives; however, findings suggest that since incentives encourage early response, costs may be saved in other aspects of the study, such as mailing fees for reminder letters67. A prepaid reward is also more effective in that the inclusion of a pre-study incentive may promote social exchange, establish survey legitimacy, and improve participant cooperation35,62. In fact, what influences some participants' decision to respond may not be the value of the monetary incentive, but the gesture of including a reward105. It is noted that the magnitude of the response rate is generally positively related to the amount of the incentive, although there is a point of saturation at which response rates plateau at a certain monetary value36,64,106.
McPhee and Hastedt (2012) conducted an experiment in which prepaid cash incentive levels of $0, $5, $10, $15, and $20 were used to measure response rates in a mail questionnaire107. Results showed that $5 is an appropriate reward amount for a first-round questionnaire, and the authors recommended that the reward be increased to up to $10 for a second-round questionnaire. Additionally, McPhee and Hastedt stated that a higher prepaid incentive ($10–15) was effective in encouraging late response and concluded that raising the incentive to $20 had little additional effect on the odds of response. From a cost and feasibility standpoint, however, we felt that $2 was an optimal reward amount to promote survey response.

One difference between these two forms of monetary incentive is that the coin incentive is a prepaid reward, whereas the instant lottery is a form of postpaid incentive. Church (1993) conducted a meta-analysis of 38 surveys looking at four groups: 1) prepaid monetary incentives, 2) prepaid non-monetary incentives, 3) postpaid monetary incentives, and 4) postpaid non-monetary incentives62. He concluded that prepaid incentives elicited a positive impact on response rate, but found no clear association between postpaid incentives and response rates. Whitman et al. (2003) found similar results when randomly allocating survey participants to cash incentive or lottery prize groups105. Their findings suggested that the prepaid cash incentive was the only factor that had a significant impact on the likelihood of response. One main reason for the observed difference in effect is that lotteries may represent "an indirect payment for service" rather than the "gesture of good will" indicated by a prepaid incentive17. However, in the current study, the difference in effect between the instant lottery and the prepaid cash incentive was not significant (OR 1.35 and 1.44, respectively). It is possible that the instant lottery method produced a higher response than a standard lottery, thus achieving an effect more comparable to the prepaid coin incentive.

8.2.3 The effect of length on survey response

After controlling for other factors, results suggested that shortening the questionnaire significantly influenced survey response. Participants who received the shorter questionnaire had 35% higher odds of response compared to those who received the longer questionnaire. It is generally known that questionnaire length is one of the most frequent reasons for survey non-response and that a negative relationship exists between survey length and response rate108-111.
In a survey of unemployed residents in Croatia, the response rate was significantly higher for a 10-minute survey compared to a 30-minute survey (75% vs. 63%, respectively)112. The same phenomenon was observed in another study, in which the response rate was significantly higher for an 8–19 minute survey compared to a longer 20-minute survey (67.5% vs. 63.4%, respectively)29. Aside from a decrease in response, questionnaires that are overly long may also produce lower quality data. Quality of data is generally defined as the "degree of effort and thought that respondent invests in answering the questions"113,114. Surveys of longer length (> 17.5 minutes) may induce fatigue in respondents, leading to inaccurate responses115. Galesic (2002) stated that "as questionnaire lasts, respondents are more likely to become tired, annoyed, bored, and/or distracted by external factor"116. Due to the effect of fatigue, Krosnick et al. (2002) recorded that participants are more likely to select 'don't know' towards the end of a lengthy survey, leading to erroneous survey results and eliciting a measurement bias117. The results from this study are in agreement with these past findings, in that the shorter 10-minute survey produced significantly higher odds of response compared to the longer 30-minute survey. However, we do not discourage the use of long questionnaires. The notion of cost per amount of information is also crucial in population-based surveys, in that a longer questionnaire can maximize the amount of information obtained. Other methods, such as the use of progress bars and visual elements, may be used to partially alleviate the fatigue experienced during participation12.

In the current study, participants were pre-notified of the estimated survey length of either 10 or 30 minutes. The estimated survey length stated in the pre-survey invitation may influence a respondent's perceived survey duration118. Boltz (1993) suggested that participants are more likely to underestimate the survey duration when the perceived time is less than the expected time119. Likewise, when the perceived completion time exceeds the expected survey duration, overestimation of the perceived length will likely occur. As such, the likelihood of survey initiation and completion is strongly influenced by the respondent's expected completion time.

8.2.4 Personalization and the Info Canada sampling frame

The use of the Info Canada sampling frame was accompanied by an invitation letter that was personally addressed to the head of each household (the individual listed in the database).
Comparison between the LC Info Canada group (E) and the LC incentive group (D) showed a 1.9% increase in response rate, suggesting no significant difference in response rates between the two sampling frames. Controlling for other factors, the odds of response for the Info Canada sampling group were slightly higher compared to the Canada Post group (OR 1.14, 0.98–1.34). Past studies on the effect of personalized invitations showed a positive association with survey response. Heerwegh et al. (2004) conducted a web survey of students using a personalized salutation as an intervention120. Results showed that personalization elicited a statistically significant increase in response rate (8.6%) compared to the control group. Similarly, by using a personalized invitation including "Dear [First Name]", Joinson and Reips (2007) observed a 6.5% (OR 1.40) increase in response rate compared to using "Dear Student"77. In our study, it is possible that the effect of personalization was influenced by the characteristics of the Info Canada sampling frame population. Because males are more often listed as heads of household in the telephone directory, there was a higher percentage of male recipients in this sampling group, which may have had an adverse effect on response rates due to the non-responsive nature of this group76,121. It has been suggested that a personalized invitation letter reduces participants' "perception of anonymity" and encourages social exchange77. However, this form of contact may also prompt socially desirable responses, leading to measurement bias76. Another disadvantage of using personalized invitations arises when the survey questionnaire involves sensitive topics, such as experience with discrimination83. Participants might feel vulnerable and uncomfortable, and therefore might not wish to respond. Sensitive topics in the BCHS, such as total annual household income and general health, may affect participant response rates when using a personalized approach12.

The effect of the Info Canada sampling group on response rate is difficult to interpret, since we do not know whether the observed effect is attributable to the Info Canada sampling frame or to the effect of the personalized invitation. As such, it is difficult to conclude whether Info Canada would be a better sampling frame than Canada Post in terms of optimizing response rates. Since past studies have shown that personalization has a positive impact on response rate, it is possible that without personalization, the Info Canada sampling frame alone would have generated a lower response rate compared to Canada Post.
Therefore, further investigation is needed to examine the direct effect of Info Canada on response rate.

The expected probabilities of response for individual survey features (Table 7.6) may appear unimpressive because of the low overall response rates; this is mainly due to the relatively small effects of the individual survey factors on response rate. The usefulness of these results stems from the logit equation, which allows the estimation of response rates for any combination of the examined factors (assuming no interaction).

8.3 Survey Costs

Two cost measurements were recorded in this study: cost per survey sent and cost per response. Both measurements provide useful information when estimating the costs of a survey. The use of cost per survey sent allows researchers to realistically project survey implementation costs for a given sample size, and the results of the linear regression can be used to estimate the costs of implementing various survey factors. On the other hand, the use of cost per response as a cost measure is advantageous when the researcher can predict the response rate of the survey, or when a certain response level is required for study completion and credibility.

8.3.1 Cost/survey sent

In our study, the cost per survey sent was highest for the C short paper group (G, $17.87/survey sent). Costs included the printing and mailing of invitation letters and paper questionnaires, return postage, and data entry. Next, from the highest to the lowest, the adjusted cost for the LC Info Canada group (E) was $16.14/survey sent, which can be accounted for by the combined costs of the instant lottery, the coin incentive, and the more expensive Info Canada address list. The LC incentive group (D), which included both the instant lottery and the prepaid cash incentive, cost $15.28/survey sent. The LC short group (F) used a survey design similar to that of group D, with the exception of a shorter questionnaire; the decreased mailing cost may explain the lower cost for group F ($15.05/survey sent). The cost for the C incentive group (C) was $15.03/survey sent, which may be accounted for by the absence of instant lottery programming fees. The L incentive group (B) had the second lowest cost per survey sent ($12.84), due to the saved costs of the coin incentive and the associated labor. As expected, the baseline survey (A) was the least costly to implement ($12.76/survey sent). Results of the multiple linear regressions were in agreement with these findings.
Controlling for other factors, implementation of the paper survey, coin incentive, Info Canada sampling frame, and instant lottery were all associated with increased cost per survey sent. On the other hand, a decrease in cost per survey sent was observed when implementing the shorter questionnaire.

8.3.2 Cost/response
Results for cost per response differed substantially from those for cost per survey sent. One general pattern was that as more survey factors were implemented, the cost per response received decreased. This reporting unit is a combined measure taking into account the total cost for each sampling group as well as the level of response achieved. Therefore, the observed cost per response for each group reflects both the costs of survey implementation and the individual effects of survey factors on participant response.

Of all groups, the cost per response was highest for the baseline group (A) ($74.62/response). This suggests that the lack of response from this sampling group outweighed the low implementation costs, resulting in an overall high cost per response. The L incentive (B), C incentive (C), and LC incentive (D) groups all contained monetary rewards. Implementing both incentives produced a higher response rate than offering either incentive individually. Therefore, the cost per response was lowest ($54.19/response) in the LC incentive group, compared to the L incentive ($64.87/response) and C incentive ($72.28/response) groups. Next, the adjusted LC Info Canada group (E) resulted in a cost of $53.63/response. The lowered cost per response may be due to the combination of both incentives as well as the use of a personalized invitation letter. It is interesting that the slightly higher response rate in the Info Canada sampling group compared to its comparator group (1.9%) was able to offset the higher cost of the Info Canada address list, resulting in a $0.56 lower cost per response. Also containing both monetary incentives, the LC short group (F) achieved a cost of $44.66/response. Compared to the longer survey of group D, shortening the questionnaire from 30 min to 10 min elicited a decrease of $9.53 in cost per response. Since the short questionnaire produced a significantly higher response rate, this may explain the reduced cost per response in this group. Lastly, the C short paper survey resulted in the lowest cost per response at $41.18/response. This result is in agreement with the significant effect of the paper survey on response rate.
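The two cost measures are linked by a simple identity, shown below as a worked illustration. It assumes that the minimum and maximum reported response rates (17.1% and 43.4%) correspond to groups A and G, respectively, which is consistent with the figures above:

\[
\text{cost per response} \;=\; \frac{\text{cost per survey sent}}{\text{response rate}},
\qquad
\frac{\$12.76}{0.171}\approx \$74.62 \;\;\text{(group A)},
\quad
\frac{\$17.87}{0.434}\approx \$41.18 \;\;\text{(group G)} .
\]

The identity makes explicit why a design feature lowers the cost per response whenever it increases the response rate proportionally more than it increases the cost per survey sent.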
Controlling for other factors, findings from the multiple linear regressions showed that the Info Canada sampling frame had the smallest effect on cost per response ($2.65/response compared to Canada Post), whereas the paper survey elicited the largest effect ($17.40/response compared to the online survey).

Overall, findings suggest a strong negative correlation between the initial cost per survey sent and the resulting cost per response. For example, the baseline group had the lowest cost per survey sent; however, due to its low response rate, its cost per response was the highest among all sampling groups. Likewise, while the C short paper survey was the most costly sampling group in terms of cost per survey sent, it was the most cost-efficient in terms of cost per response. One might expect the higher cost of the paper mode, due to postage, printing, and data entry, to render mailed surveys an inefficient data collection method; however, past literature has refuted this notion12,122. There are a number of explanations that may account for this finding. First, from the cost perspective, the total costs for the online survey were not significantly lower than for the paper survey: because invitation letters were sent by postal service, the costs of invitation and reminder mailings were also included in the online survey costs. Secondly, Anderson and Tancreto (2011) reasoned that since respondents require time and effort to change modes from the paper contact to the online survey, subsequent response rates for web surveys could be adversely affected123. This argument was supported by the results of the current study, in which the paper survey achieved a higher response rate as well as a lower cost per response compared to the online survey. Thirdly, as previously mentioned, the advantages associated with the paper survey (including ease of access, comfort and safety, and no requirement for technological skills) outweigh the benefits of web surveys in terms of participants' propensity to respond.

The results show that, in terms of cost, the paper survey is a better mode to administer than the web survey in the general population. However, another factor that warrants consideration is the effect of survey incentives on geographic coverage and data quality. One possible concern is that, despite the increase in response rate, certain demographic groups are more likely to respond to surveys that include an incentive, leading to an unrepresentative sample. However, we believe that implementing incentives may improve data representativeness.
Firstly, incentives may encourage participation from respondents who would normally decline non-incentive surveys, thus capturing additional groups with differing demographic characteristics. Secondly, surveys with low response rates are more likely to produce unrepresentative data than those with high response rates. Therefore, it may be preferable to use incentives to achieve higher response rates.

8.4 BCHS Data Representativeness
Percentage distributions of both socio-demographic and health variables were compared between the BCHS and the CCHS. Socio-demographic variables include gender, age, marital status, and total annual household income. Health variables of interest consist of overall health rating and diagnosis of arthritis, asthma, diabetes, heart disease, and hypertension.

8.4.1 Gender
With the exception of the Info Canada sampling group (E), all BCHS groups showed that the percentage of women responding was higher than that of men, even when the invitation letter specifically instructed the household member with the most recent birthday to complete the survey (Figure 7.7). A higher percentage of female respondents in general population surveys was also observed in past studies121,124. Furthermore, a number of studies have suggested that token cash incentives and lotteries may lead to an overrepresentation of female respondents76,125,126. Contrarily, the Info Canada sampling group (E) showed a higher percentage of male respondents (58.6%). Although the number of female household heads has been increasing in the past decades, the majority of heads of household in Canada are male127. Other factors, such as topic saliency and monetary incentives, may also affect gender differences in response rates.

8.4.2 Age
From the demographic analysis, respondents in the Info Canada sampling group (E) and the paper survey group (G) had a significantly higher mean age (57.2 and 57.3, respectively) compared with other groups (Table 7.1). In a study conducted in southern Australia, Dal Grande and Taylor (2010) found that the telephone numbers most likely to be listed in the phone directory belonged to the older population128. In Canada, the Residential Telephone Service Survey conducted by Statistics Canada showed that 50% of young households (19-34 years of age) used wireless devices for communication129. The increasing prevalence of cellphone-only young adult households may be attributed to the fact that they are the most mobile group among all age groups, as they continually seek new education and employment opportunities130.
The higher mean age of paper survey participants compared with online respondents may be due to better computer skills and Internet access among younger people; elderly people are less likely than other age groups to use the Internet28,131.

When compared against the weight-adjusted CCHS 2010, all groups exhibited a significant difference in age distribution. Niemi and colleagues stated that, theoretically, young adults should be more representative of the population in web-based surveys due to their frequent use of the Internet132. The current study showed that although online surveys received a higher response from young adults (age 18-29) than the paper surveys (Table 7.8), this age group remained under-represented when compared to the weight-adjusted CCHS data. The same pattern of under-representation of the younger population has been observed in past US and Canadian national elections133,134. It has been reasoned that the low turnout rate for young voters may be due to two distinct factors: lack of interest and personal reasons135. These explanations can be partly applied to general population surveys, in which topic saliency has a large influence on respondent demographics136. Younger individuals may feel too busy or may be less interested in health surveys.

8.4.3 Marital status
Marital status distributions were comparable between the weight-adjusted CCHS and the BCHS for sampling groups A (baseline), B (lottery), C (coin), and D (incentives). However, a statistically significant difference was observed in groups E (Info Canada), F (short), and G (paper). The higher percentage of married and lower percentage of single/never-married respondents in the Info Canada group (E) may perhaps be explained by the higher mean age of the respondents (Table 7.1) and their higher use of landline phones. The difference in marital status distribution in the paper survey (G) cannot be easily explained, but may be due to the high percentage of respondents who selected "not stated" (13.6%). Further research is required to explain the observed results in the paper questionnaire group.

8.4.4 Total annual household income
The percentage distribution of total household income was most comparable between the weight-adjusted CCHS and the paper survey group (G). The higher percentage of participants with high annual income (≥ $80,000) in the Info Canada sampling frame may suggest a potential coverage error, such that the lower income population is under-sampled.
Blumberg and Luke (2009), who analyzed the 2007 US National Health Interview Survey (NHIS), found a significant underrepresentation of low-income and younger adults in this landline-based survey137. The use of landline-based surveys may become less valid over time due to the continuing growth of wireless-only households138.

8.4.5 General health
The percentage distribution of general health suggested that, compared to the BCHS, the weight-adjusted CCHS 2010 data contain a higher percentage of participants reporting "excellent" health (Table C4). In contrast, the paper survey group (G) contained the lowest percentage of participants in this category. There are at least two possible reasons that may account for these findings. First, due to social desirability, measurement bias may be introduced in interview-based surveys (such as the CCHS). This phenomenon was noted in past survey studies139,140, in which a higher percentage of participants in interviewer-administered surveys reported "excellent" health than in self-administered surveys. It seems that in the presence of an interviewer, the interviewee is more likely to give an answer that is socially desirable or that the interviewer would like to hear12. This bias also extends to certain socially sensitive survey topics, such as alcohol abuse and criminal record141,142. Secondly, the lower percentage of "excellent" health respondents in group G may be due to the older age of this group (Table 7.1), since age correlates negatively with general health. However, this explanation is inconclusive due to the high percentage of "not stated" answers (9.2%) in this group.

8.4.6 Chronic diseases
The prevalences of five diseases were compared between the CCHS data and the BCHS: arthritis, asthma, diabetes, heart disease, and hypertension. Overall, the prevalences of the diseases of interest were mostly representative of the population across the BCHS sampling groups. One observed pattern was that both the Info Canada (E) and paper survey (G) groups exhibited statistically significant differences in the prevalence of certain chronic diseases when compared to the CCHS (Figures 7.12-7.16). Group E showed a significantly higher prevalence of hypertension, while group G showed significantly higher prevalences of arthritis, diabetes, and hypertension. This may be explained by the fact that these diseases are age related and the mean age of participants was higher in groups E and G (Table 7.1).

8.4.7 Effects of sampling frame and survey mode on respondent characteristics
The percentage distributions of socio-demographic variables were compared between groups of differing sampling frames and survey modes.
It should be noted that because of the large sample sizes, very small differences in distribution are statistically significant. Therefore, the emphasis is on substantive differences rather than significance tests.

Results from comparing the BCHS sampling frame strata largely reiterate earlier findings in the data representativeness section. The Info Canada sampling group had a higher mean age, which directly affects a number of other demographic and health characteristics, such as marital status, total annual household income, and general health. Therefore, we conclude that this sampling frame produced an unrepresentative sample of the general population of BC.

There may be a lack of representation of certain population groups within the Info Canada sampling frame due to the exclusion of cellphone-only households. Most notably, younger adults are the most prevalent group among cellphone-only households78,138,143. With regard to disproportionate income levels, the US National Center for Health Statistics has continued to find that individuals living in or near poverty are more likely to reside in households without landline telephones144. Additionally, results from the US National Health Interview Survey suggested that non-coverage of the cellphone-only population is not random: Lambert, Langer, and McMenemy (2010) noted that non-coverage was highly associated with the younger population, males, and those who live in poverty145. If the same phenomenon exists within the BC population, the use of a landline-based sampling frame may produce a highly selective survey sample. Lastly, although Info Canada is a landline-based sampling frame, households residing in apartments may also be excluded from selection if the unit number is missing from the address. For the reasons listed above, a coverage error may exist in the Info Canada sampling frame. Therefore, an address-based sampling frame such as Canada Post should be considered a better source of the target population than landline-based sampling frames in general population surveys.

One challenge with using an address-based frame is that surveyors are forced to send out mail invitations when administering a web-based survey. This additional step greatly complicates the survey process and also increases cost. One alternative (for surveys of certain groups) is to use a population frame that includes email addresses, allowing all communications to be mediated through the web.
In these populations, the response rate for online surveys may not be worse than in the current study, since one main reason for non-response in general population web surveys is the lack of web knowledge and Internet access. However, disadvantages include the inability to send prepaid cash incentives and other forms of tangible rewards; one may resort instead to electronic rewards such as online gift certificates.

The comparison of the percentage distributions of the BCHS survey modes against the CCHS data produced insightful information on respondent characteristics in both the paper and online surveys. In our study, neither survey mode produced fully generalizable results. Online survey coverage error may be introduced in a geographic population that faces high technological barriers or has low web literacy. In addition, socio-demographic characteristics of respondents, such as age and income, may also affect data representativeness. The 2012 Canadian Internet Use Survey reported that 95% of households in the highest income quartile used the Internet, compared to only 62% of households in the lowest income quartile28. Lastly, measurement error may also arise since web survey participants are more likely to be impatient than paper survey participants12. As a result, they may scan the web pages and select answers hastily when ready to move on to the next page146. In past studies, two methods were used to overcome this challenge. Smith (2001) suggested that results from Internet surveys could be generalizable when given to a pre-recruited panel of Internet users72; individuals for the web panel can be recruited through in-person or telephone interviews, so that households without Internet access can also be represented. Secondly, Dillman, Smyth, and Christian (2009) suggested that the use of a mixed-mode survey could greatly increase the data representativeness of a survey12. As previously mentioned, a mixed-mode survey can improve coverage when certain demographic groups cannot be reached by a single mode60. Therefore, a mixed-mode approach combining paper and online surveys may warrant consideration for future general population studies.

8.5 Generalizability of Study Results
The target population of the BCHS was all community-dwelling adult residents of British Columbia, Canada. As such, findings from this study are limited to general population surveys. In surveys of special populations, such as hospital patients or members of an organization, response rates and the effects of various survey features may be quite different.
Other considerations, such as survey mode and sampling frame, may also be irrelevant in special populations. Additionally, most residents of BC live in urban areas; in population studies of rural areas, behavioral differences in survey participation may influence response rates and lead to different results.

9 Limitations
The study evaluated the effects of survey factors at specific levels. For example, we offered a $2 reward for the prepaid cash incentive and were not able to examine the effect of a $1 or $5 reward. Similarly, we offered 10 lottery prizes of $100 and a grand prize of $1000; the effect of a different size of reward cannot be determined. We compared two questionnaires of specific lengths (10 min vs. 30 min), and therefore could not confirm the existence of a length threshold for optimal response rate, nor establish whether the relationship between length and survey response is "continuous". Lastly, our results suggest that the Info Canada sampling frame produced a higher response rate than the Canada Post sampling frame; however, we were not able to determine whether the increase in response was due to the personalized invitation letter or to the sample characteristics themselves.

One potential limitation of paper surveys is a higher item non-response rate compared to online surveys. Paper survey respondents may choose to skip questions for various reasons, and there is no way to prevent this. On the other hand, if an online respondent misses an answer choice by mistake, the survey system can be programmed either to prevent the individual from continuing or to show reminders for skipped items before moving on. In our paper survey (G), item non-response was highest for the socio-demographic variables of marital status (13.6%) and total annual household income (21.3%), as well as for general health ratings (9.2%). High item non-response rates may affect the generalizability of the findings.

Individual mailing costs could not be calculated due to a lack of data. Therefore, differences in costs between groups due to the different numbers of reminders that had to be mailed could not be included in the cost analysis. For example, the cost for a first-round respondent would include only the initial mailing costs, whereas the cost for a last-round respondent would include the initial mailing costs as well as the mailing fees for the three reminders sent. Including these differences in cost would favor the groups with higher response rates.
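This limitation can be stated more formally; the following is a minimal sketch using notation of our own (hypothetical, not quantities recorded in the study). The true mailing cost for a responding household \(i\) is

\[
c_i \;=\; c_{\text{initial}} + k_i \, c_{\text{reminder}}, \qquad k_i \in \{0, 1, 2, 3\},
\]

where \(k_i\) is the number of the (up to three) reminder mailings sent before household \(i\) responded. Because wave-level data were unavailable, the analysis effectively treated \(k_i\) as identical across households; including the wave-dependent term would lower the average \(c_i\) for groups whose responses arrived mostly in early waves, which are typically the groups with higher response rates.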
Furthermore, the resulting cost regression was modeled at the group level and thus had very low statistical power and was unable to account for the variance in mailing costs across different waves of respondents. The 95% confidence interval was not reported for this model due to its single-degree-of-freedom variance estimate. Nevertheless, the regression results provide insight into the cost differences associated with implementing various survey factors.

Lastly, Singer et al. noted in a past study that a clear distinction should be made between two forms of participant non-response147. The first type is non-response due to the respondent's refusal to participate, which may be attributed to lack of interest or personal reasons. The second type is non-response due to non-contact, in which respondents never received the initial survey invitation, mainly because of a change of address, unopened mail, or delivery failure. In our study, data regarding invitation mail returned for undeliverable addresses were unavailable, and therefore we could not distinguish between these two forms of non-response. This is an important issue in the current study: since invalid addresses were not removed from the denominator during analysis, the reported response rates may be underestimated.

10 Strengths
There are a number of strengths associated with the BCHS survey design. First, this was a large-scale general population survey study, in which 8000 random households in British Columbia were sampled. The large sample size allows high precision of the estimated effects. Since households were randomly selected and allocated to one of the seven sampling groups, the comparisons were also unconfounded.

Furthermore, a number of survey factors were examined. These included survey mode (paper vs. online), instant lottery (vs. standard lottery), prepaid coin incentive ($2 vs. no coin), questionnaire length (10 min vs. 30 min), and sampling frame (Info Canada vs. Canada Post). Past studies have examined the effect of standard (end-of-study) lotteries on response rate, but very few have explored the effect of instant lotteries; the current study yielded insightful findings regarding the latter incentive. Comparisons between different sampling frames are unique to this study, since this issue is not often examined in survey studies. To the best of our knowledge, this was the first large-scale survey study to examine the effects of five survey features on response rate in a general population survey.
Another strength of this study was that all expenditures, such as mailing, supply, and salary costs, were recorded, mostly in the form of invoices and receipts. This makes the calculation of survey implementation costs fairly straightforward; as such, the calculated cost per survey sent and cost per response were relatively accurate. Few existing publications have examined the effect of survey factors on costs. To the best of our knowledge, this is the first study to show a reduction in cost per response as more survey features are implemented.

Lastly, respondent characteristics from the BCHS groups were compared to the CCHS 2010 data. Socio-demographic and health variables were compared between the two surveys to examine whether the BCHS sample is representative of the British Columbia population. Population representativeness is needed to ensure generalizability of the survey results.

11 Implications
This study assessed the effectiveness and cost of different incentives designed to improve response rates and compared different sampling approaches for a mixed-mode (mail/online) survey. To the best of my knowledge, this is the first large-scale randomized study to assess the impact of combinations of various survey design methods in online surveys of the general population.

The overall response rates were comparable to those of recent general population studies using similar modes of delivery. Given that the BCHS survey design largely followed Dillman's tailored design method12, we believe the results from this study provide a realistic estimate of expected response rates in future self-administered health surveys of the Canadian general population. Findings from the logistic regression supported prior hypotheses: all survey factors, except for the Info Canada sampling frame, had a statistically significant effect on response rate compared to the reference group. Odds of response were lowest for the Info Canada sampling frame (OR 1.14), followed by the instant lottery (OR 1.35), short length (OR 1.35), prepaid cash incentive (OR 1.44), and paper survey (OR 2.04). One main finding was that despite current advances in computer knowledge, the paper format nevertheless proved to be the more effective mode, producing a higher response rate than the online format. However, researchers must take into account the disadvantages of paper surveys (higher mailing cost, limited use of skip logic and probing questions, etc.) when deciding on the mode of delivery. Of the design features studied, use of the Info Canada sampling frame, as opposed to Canada Post, produced the smallest effect on response rates.
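As a worked illustration of how these odds ratios combine under the fitted model (assuming no interactions, and taking the lowest reported group response rate, 17.1%, as an illustrative baseline for the online survey without incentives), the predicted odds of response for a paper survey with a prepaid cash incentive and a short questionnaire are

\[
\frac{\hat{p}}{1-\hat{p}} \;=\; \frac{0.171}{1-0.171}\times 2.04 \times 1.44 \times 1.35 \;\approx\; 0.82,
\qquad
\hat{p} \;\approx\; \frac{0.82}{1+0.82} \;\approx\; 0.45,
\]

that is, an expected response rate of roughly 45%, in line with the improvement from under 20% to over 40% noted in the conclusion.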
The size of effect for each survey factor may inform survey researchers when deciding between various survey features. Furthermore, the logistic regression equation is a useful estimation tool, which can provide expected odds or probabilities of response for specific combinations of survey factors. However, researchers must use caution when interpreting these findings, as they were produced under the specific conditions of the current study. Cost analyses showed a strong negative association between cost per survey sent and cost per response, such that the addition of more incentives led to a reduction in cost per response. Results suggested that the increased response rate offsets the higher implementation costs of monetary incentives and other factors. The cost table (Table 7.7) serves as a potential cost projection tool for researchers interested in estimating costs for future surveys; recorded values for specific items, such as the purchase of address lists, mailing costs, and supply fees, can be updated to take into account changing costs due to inflation.

Lastly, with the exception of the Info Canada and paper survey groups, BCHS data representativeness achieved adequate comparability to the CCHS 2010 in terms of disease prevalence. The age distribution of the Info Canada respondents suggested a substantial under-representation of the younger adult population (18-29 years). Other socio-demographic variables, such as marital status, annual household income, and general health, were also disproportionate compared to the CCHS data. We suspect this is due to the exclusion of mobile-only households, in which the younger population is highly prevalent, from the Info Canada sampling frame. Due to this coverage problem, we do not recommend the use of a landline-based frame as a source of population sampling. With regard to the effect of survey mode (paper vs. online) on data representation, neither paper nor online respondent characteristics displayed full comparability with the CCHS data. Data collected from this survey provided me with a unique opportunity to examine how response rate, cost, and survey representativeness depend on specific aspects of survey design and implementation. Findings from this study may give researchers further insight into ways of improving response in future population-based surveys.

12 Future Studies
Due to the limited scope of the current study, a number of topics surrounding general population surveys could not be addressed. Listed below are four topics that may be of interest to examine in future studies.
Survey initiation and retention
Survey initiation is described as the act of participants starting the survey, either by arriving at the online survey host site or by beginning the paper survey. Survey retention is defined as the propensity of respondents to complete the survey once initiated; a lack of retention may be due to lack of interest, fatigue, or other distractions. Both of these behaviors are important to examine when evaluating the effects of various survey designs. For example, a number of studies showed that the use of a cash lottery increases web survey retention (Bosnjak and Tuten, 2003; Frick et al., 2001; O'Neil et al., 2003; Tuten, Galesic, and Bosnjak, 2004; Doerfling et al., 2010). However, it is largely unknown whether other survey factors play a role in affecting these behaviors.

Topic salience
Topic salience is known to be an important factor for survey response rate12. It is possible that the BCHS focus on musculoskeletal health caused a lack of interest among younger adults and subsequently led to under-representation of this age category. Therefore, it would be interesting to examine the response of the general population to different survey topics, which may be useful for researchers when predicting the response rate for a specific topic. The challenge is that survey topics are often difficult to classify by salience, and the public's interest in a particular topic may also differ by region.

Surveyor-respondent relationship
When administering a survey, it is important to consider who the surveyors are (e.g., university researchers, government, marketing organizations) and their relationship to the respondents. Participants often have a higher propensity to participate in surveys when a level of trust has already been established with the surveyor. For example, students are more likely to participate in a university-implemented survey than in one administered by a marketing company. Furthermore, Joinson and Reips (2007) stated that the effect of personalization is often stronger when surveys are sent by individuals with high authority, such as professors and vice-chancellors77. Therefore, it would be of interest to examine different surveyor-respondent relationships and their effects on response rate.

Progress bar
Dillman, Smyth, and Christian (2009) commented that the use of a progress indicator might have an important impact on response rate in web surveys, but this has not been sufficiently studied12. One rationale is that participants are more likely to complete web surveys if they know the number of remaining questions.
However, Yan et al. (2010) suggested that the use of a progress indicator may not have a positive effect on survey participation and may even negatively impact response rate118. Therefore, it would be of interest to explore the use of progress indicators and examine the specific conditions in which this feature may be beneficial in a general population survey.

13 Conclusion
The current study investigated the effects of various survey factors, including survey mode, prepaid cash incentive, instant lottery, questionnaire length, and sampling frame, on response rate, costs, and data representativeness. The paper survey mode and the $2 prepaid cash incentive elicited the largest effects on response rates (OR 2.04, 1.61-2.59 and OR 1.44, 1.23-1.67, respectively). The use of an instant lottery and a short questionnaire also produced significant differences in the odds of response (OR 1.35, 1.16-1.58 and OR 1.35, 1.13-1.62, respectively). A combination of a postal survey mode (as opposed to online mode), prepaid cash incentive, and a short questionnaire improved response rates from less than 20% to over 40%. Cost analyses exhibited a negative association between implementation costs and cost per response, suggesting that increasing survey incentives can ultimately be more cost-effective in terms of dollars spent per returned survey. Therefore, the use of combinations of the survey features discussed in this study may positively affect response rates and in turn reduce cost per response. Nevertheless, it is important to keep in mind that, given ever-changing attitudes towards survey participation, social environments, and technology, researchers must accommodate and adapt by utilizing appropriate survey methods to fully maximize the effects of survey design on response rates.

References
1. Beebe TJ, Rey E, Ziegenfuss JY, et al. Shortening a survey and using alternative forms of prenotification: impact on response rate and quality. BMC Medical Research Methodology 2010;10:50.
2. Berk ML, Schur CL, Feldman J. Twenty-five years of health surveys: does more data mean better data? Health Affairs 2007;26:1599-611.
3. Rossi P, Henry W, James D, Anderson AB. Sample surveys: History, current practice, and future prospects. San Diego, CA: Academic Press; 2003.
4. Groves RM. Survey Errors and Survey Costs. New York, NY: Wiley & Sons; 1989.
5. Dey EL. Working with Low Survey Response Rates: The Efficacy of Weighting Adjustments. Research in Higher Education 1997;38:215-27.
6. Porter SR, Whitcomb ME. The impact of lottery incentives on student survey response rates. Research in Higher Education 2003;44:389-407.
7. Tourangeau R. Survey research and societal change. Annual Review of Psychology 2004;55:775-801.
8. Loosveldt G, Storms V. Measuring public opinions about surveys. International Journal of Public Opinion Research 2007;20:74-89.
9. Sahlqvist S, Song Y, Bull F, Adams E, Preston J, Ogilvie D. Effect of questionnaire length, personalisation and reminder type on response rate to a complex postal survey: randomised controlled trial. BMC Medical Research Methodology 2011;11:62.
10. Brealey SD, Atwell C, Bryan S, et al. Improving response rates using a monetary incentive for patient completion of questionnaires: an observational study. BMC Medical Research Methodology 2007;7:12.
11. Baines AD, Partin MR, Davern M, Rockwood TH. Mixed-mode administration reduced bias and enhanced poststratification adjustments in a health behavior survey. Journal of Clinical Epidemiology 2007;60:1246-55.
12. Dillman DA, Smyth JD, Christian LM. Mail and Internet surveys: the tailored design method. Hoboken, NJ: Wiley; 2009.
13. Aday LA, Cornelius LJ. Designing and conducting health surveys: A comprehensive guide. San Francisco, CA: Jossey-Bass; 2006.
14. Hox JJ, Leeuw EDD. A comparison of nonresponse in mail, telephone, and face-to-face surveys. Quality & Quantity 1994;28:16.
15. Edwards P, Roberts I, Clarke M, et al. Increasing response rates to postal questionnaires: systematic review. BMJ 2002;324:1183.
16. Dillman DA. Mail and telephone surveys: the total design method. New York, NY: Wiley; 1978.
17. Dillman DA. Mail and internet surveys: The tailored design method (2nd ed.). Hoboken, NJ: Wiley; 2007.
18. Edwards PJ, Roberts I, Clarke MJ, et al. Methods to increase response to postal and electronic questionnaires. Cochrane Database Syst Rev 2009:MR000008.
19. Wolf HK, Kuulasmaa K, Tolonen H, Sans S, Molarius A, Eastwood BJ. Effect of sampling frames on response rates in the WHO MONICA risk factor surveys. European Journal of Epidemiology 2005;20:293-9.
20. Yetter G, Capaccioli K. Differences in responses to Web and paper surveys among school professionals. Behav Res Methods 2010;42:266-72.
21. Tuten TL, Galesic M, Bosnjak M. Effects of Immediate Versus Delayed Notification of Prize Draw Results on Response Behavior in Web Surveys: An Experiment. Social Science Computer Review 2004;22:377-84.
22. Murray E, Khadjesari Z, White IR, et al. Methodological challenges in online trials. Journal of Medical Internet Research 2009;11:e9.
23. Kopec JA, Rahman MM, Berthelot JM, et al. Descriptive epidemiology of osteoarthritis in British Columbia, Canada. The Journal of Rheumatology 2007;34:386-93.
24. Mavis BE, Brocato JJ. Postal surveys versus electronic mail surveys: The tortoise and the hare revisited. Evaluation & the Health Professions 1998;21:395-408.
25. Grava-Gubins I, Scott S. Effects of various methodologic strategies: survey response rates among Canadian physicians and physicians-in-training. Canadian Family Physician / Médecin de famille canadien 2008;54:1424-30.
26. Smith A. Older Adults and Technology Use. Pew Research Internet Project; 2014.
27. Canadian Internet Use Survey 2012. Statistics Canada; 2013.
28. Individual Internet use and e-commerce, 2012. Statistics Canada; 2013.
29. Crawford SD, Couper MP, Lamias MJ. Web Surveys: Perceptions of Burden. Social Science Computer Review 2001;19:146-62.
30. Solomon DJ. Conducting Web-Based Surveys. Practical Assessment, Research and Evaluation 2001;7.
31. Wiseman F. On the Reporting of Response Rates in Extension Research. Journal of Extension 2003;41.
32. Archer TM. Response Rates to Expect from Web-Based Surveys and What to Do About It. Journal of Extension 2003;46.
33. Monroe MC, Adams DC. Increasing Response Rates to Web-Based Surveys. Journal of Extension 2012;50.
34. Greenlaw C, Brown-Welty S. A comparison of web-based and paper-based survey methods: testing assumptions of survey mode and response cost. Eval Rev 2009;33:464-80.
35. Millar MM, Dillman DA. Improving Response to Web and Mixed-Mode Surveys. Public Opinion Quarterly 2011;75:249-69.
36. Yammarino F, Skinner S, Childers T. Understanding mail survey response behaviour: A meta-analysis. Public Opinion Quarterly 1991;55:21.
37. Doerfling P, Kopec JA, Liang MH, Esdaile JM. The effect of cash lottery on response rates to an online health survey among members of the Canadian Association of Retired Persons: a randomized experiment. Canadian Journal of Public Health 2010;101:251-4.
38. Hackler JC, Bourgette P. Dollars, dissonance, and survey returns. Public Opinion Quarterly 1973;37:276-81.
39. Black GS. Internet surveys - A replacement technology. 52nd Annual Conference of the American Association for Public Opinion Research. St. Louis, MO; 1998.
40. Cleland K. Online research costs about half that of traditional methods. Advertising Age's Business Marketing 1996;81:8-9.
41. Hollis NS. Can a picture save 1,000 words? Augmenting telephone tracking with online ad recognition. ARF's Online Research Day - Towards Validation. New York, NY: Advertising Research Foundation; 1999.
42. Jones TM. When market research turns into marketing. The Industry Standard 1999.
43. Leece P, Bhandari M, Sprague S, et al. Internet versus mailed questionnaires: a randomized comparison (2). Journal of Medical Internet Research 2004;6:e30.
44. McMahon SR, Iwamoto M, Massoudi MS, et al. Comparison of e-mail, fax, and postal surveys of pediatricians. Pediatrics 2003;111:e299-303.
45. Whitehead L. Methodological issues in Internet-mediated research: a randomized comparison of internet versus mailed questionnaires. Journal of Medical Internet Research 2011;13:e109.
46. Harewood GC, Yacavone RF, Locke GR 3rd, Wiersema MJ. Prospective comparison of endoscopy patient satisfaction surveys: e-mail versus standard mail versus telephone. The American Journal of Gastroenterology 2001;96:3312-7.
47. Raziano DB, Jayadevappa R, Valenzula D, Weiner M, Lavizzo-Mourey R. E-mail versus conventional postal mail survey of geriatric chiefs. The Gerontologist 2001;41:799-804.
48. Akl EA, Maroun N, Klocke RA, Montori V, Schunemann HJ. Electronic mail was not better than postal mail for surveying residents and faculty. Journal of Clinical Epidemiology 2005;58:425-9.
49. Arch A. Web Accessibility for Older Users: A Literature Review. 2009 International Cross-Disciplinary Conference on Web Accessibility (W4A); 2008; Madrid, Spain.
50. Deutskens E, Ruyter KD, Wetzels M, Oosterveld P. Response Rate and Response Quality of Internet-Based Surveys: An Experimental Study. Marketing Letters 2004;15:21-36.
51. McCambridge J, Kalaitzaki E, White IR, et al. Impact of length or relevance of questionnaires on attrition in online trials: randomized controlled trial. Journal of Medical Internet Research 2011;13:e96.
52. Marcus B, Bosnjak M, Linder S, Pilischenko S, Schutz A. Compensating for Low Topic Interest and Long Surveys: A Field Experiment on Nonresponse in Web Surveys. Social Science Computer Review 2007;25:12.
53. Kalantar JS, Talley NJ. The effects of lottery incentive and length of questionnaire on health survey response rates: a randomized study. Journal of Clinical Epidemiology 1999;52:1117-22.
54. Mond JM, Rodgers B, Hay PJ, Owen C, Beumont PJ. Mode of delivery, but not questionnaire length, affected response in an epidemiological study of eating-disordered behavior. Journal of Clinical Epidemiology 2004;57:1167-71.
55. Goritz AS. Incentives in Web Studies: Methodological Issues and a Review. International Journal of Internet Science 2006;1:58-70.
56. Dogan R, Broekemier GM, Seshadri S. The Effects of Various Incentives and Survey Length on Managers/Executives Likelihood of Completing Online Surveys. University of Texas at El Paso, TX; 2012.
57. Paul CL, Walsh RA, Tzelepis F. A monetary incentive increases postal survey response rates for pharmacists. J Epidemiol Community Health 2005;59:1099-101.
58. Birnholtz JP, Horn DB, Finholt TA, Bae SJ. The Effects of Cash, Electronic, and Paper Gift Certificates as Respondent Incentives for a Web-Based Survey of Technologically Sophisticated Respondents. Social Science Computer Review 2004;22:355-62.
59. Perneger TV, Etter JF, Rougemont A. Randomized trial of use of a monetary incentive and a reminder card to increase the response rate to a mailed health survey. American Journal of Epidemiology 1993;138:714-22.
60. Dillman DA, Parsons NL. Self-administered paper questionnaire. London, UK: Sage; 2008.
61. Parsons NL, Manierre MJ. Investigating the Relationship among Prepaid Token Incentives, Response Rates, and Nonresponse Bias in a Web Survey. Field Methods 2013.
62. Church AH. Estimating the effect of incentives on mail survey response rates: A meta-analysis. Public Opinion Quarterly 1993;57:62-79.
63. Beebe TJ, Davern ME, McAlpine DD, Call KT, Rockwood TH. Increasing response rates in a survey of Medicaid enrollees: the effect of a prepaid monetary incentive and mixed modes (mail and telephone). Medical Care 2005;43:411-4.
64. Cantor D, O'Hare BC, O'Connor KS. The use of monetary incentives to reduce non-response in random digit dial telephone surveys. Hoboken, NJ: John Wiley & Sons; 2007.
65. Prescott RJ, Counsell CE, Gillespie WJ, et al. Factors that limit the quality, number and progress of randomised controlled trials. Health Technology Assessment 1999;3:1-143.
66. Todd AM, Laird BJ, Boyle D, Boyd AC, Colvin LA, Fallon MT. A systematic review examining the literature on attitudes of patients with advanced cancer toward research. Journal of Pain and Symptom Management 2009;37:1078-85.
67. Singer E. The Use and Effects of Incentives in Surveys. National Science Foundation; October 3-4, 2012; Washington, DC.
68. Cobanoglu N, Cobanoglu C. The effect of incentives in web surveys: application and ethical considerations. International Journal of Market Research 2003;45:475-88.
69. Bosnjak M, Tuten TL. Prepaid and Promised Incentives in Web Surveys. Social Science Computer Review 2003;21:208-17.
70. Frick A, Bachtiger MT, Reips UD. Financial Incentives, Personal Information and Drop-Out in Online Studies. Dimensions of Internet Science 2001:209-19.
71. Kolcic I, Polasek O. Do public health surveys provide representative data? Comparison of three different sampling approaches in the adult population of Croatia. Collegium Antropologicum 2009;33 Suppl 1:153-8.
72. Smith W, Mitchell P, Attebo K, Leeder S. Selection bias from sampling frames: telephone directory and electoral roll compared with door-to-door population census: results from the Blue Mountains Eye Study. Australian and New Zealand Journal of Public Health 1997;21:127-33.
Valery	  PC,	  Williams	  G,	  McWhirter	  W,	  Sleigh	  A,	  Scott	  D,	  Bain	  C.	  Electronic	  telephone	  directory	  listings:	  preferred	  sampling	  frame	  for	  a	  population-­‐based	  study	  of	  childhood	  cancer	  in	  Australia.	  Annals	  of	  epidemiology	  2000;10:504-­‐8.	  74.	   Macfarlane	  GJ,	  Jones	  GT,	  Swafe	  L,	  Reid	  DM,	  Basu	  N.	  Alternative	  population	  sampling	  frames	  produced	  important	  differences	  in	  estimates	  of	  association:	  a	  case-­‐control	  study	  of	  vasculitis.	  Journal	  of	  clinical	  epidemiology	  2013;66:675-­‐80.	  75.	   infoGroup.	  Find	  new	  customers	  with:	  Sales	  Leads,	  Mailing	  Lists,	  Email	  Markets.	  In:	  infogroup,	  ed.	  Mississauga,ON2013.	  76.	   Heerwegh	  D,	  Loosveldt	  G.	  An	  Experimental	  Study	  on	  the	  Effects	  of	  Personalization,	  Survey	  Length	  Statements,	  Progress	  Indicators,	  and	  Survey	  Sponsor	  Logos	  in	  Web	  Surveys.	  Journal	  of	  Offical	  Statistics	  2006;22:191-­‐210.	  77.	   Joinson	  AN,	  Reips	  U-­‐D.	  Personalized	  salutation,	  power	  of	  sender	  and	  response	  rates	  to	  Web-­‐based	  surveys.	  Computers	  in	  Human	  Behavior	  2007;23:1372-­‐83.	  78.	   Link	  MW,	  Battaglia	  MP,	  Frankel	  LR,	  Osborn	  L,	  Mokdad	  AH.	  A	  Comparison	  of	  Address-­‐Based	  Sampling	  (ABS)	  Versus	  Random-­‐Digit	  Dialing	  (RDD)	  for	  General	  Population	  Surveys.	  Public	  Opinion	  Quartely	  2008;72:6-­‐27.	  79.	   Field	  TS,	  Cadoret	  CA,	  Brown	  ML,	  et	  al.	  Surveying	  physicians:	  do	  components	  of	  the	  "Total	  Design	  Approach"	  to	  optimizing	  survey	  response	  rates	  apply	  to	  physicians?	  Medical	  care	  2002;40:596-­‐605.	  80.	   Kaner	  EF,	  Haighton	  CA,	  McAvoy	  BR.	  'So	  much	  post,	  so	  busy	  with	  practice-­‐-­‐so,	  no	  time!':	  a	  telephone	  survey	  of	  general	  practitioners'	  reasons	  for	  not	  participating	  in	  postal	  questionnaire	  surveys.	  The	  British	  journal	  of	  general	  practice	  :	  the	  journal	  of	  the	  Royal	  College	  of	  General	  Practitioners	  1998;48:1067-­‐9.	  81.	   Levy	  RM,	  Shapiro	  M,	  Halpern	  SD,	  Ming	  ME.	  Effect	  of	  personalization	  and	  candy	  incentive	  on	  response	  rates	  for	  a	  mailed	  survey	  of	  dermatologists.	  The	  Journal	  of	  investigative	  dermatology	  2012;132:724-­‐6.	  82.	   Olson	  L,	  Schneiderman	  M,	  Armstrong	  R.	  Increasing	  physician	  survey	  response	  rates	  without	  biasing	  survey	  results.	  .	  	  Proceedings	  of	  the	  Section	  on	  Survey	  Research	  Methods	  of	  the	  American	  Statistical	  Association;	  1993;	  Alexandria,	  VA:	  American	  Statistical	  Association.	  p.	  1036-­‐41.	  83.	   Barrett	  A,	  Kelly	  E.	  Using	  a	  Census	  to	  Assess	  the	  Reliability	  of	  a	  National	  Household	  Survey	  for	  Migration	  Research:	  The	  Case	  of	  Ireland*.	  IZA	  Disscussion	  Paper	  2008:20.	  	   123	  84.	   Gonzalez-­‐Rabago	  Y,	  La	  Parra	  D,	  Martin	  U,	  Malmusi	  D.	  [Participation	  and	  representation	  of	  the	  immigrant	  population	  in	  the	  Spanish	  National	  Health	  Survey	  2011-­‐2012.].	  Gaceta	  sanitaria	  /	  SESPAS	  2014.	  85.	   Partin	  MR,	  Powell	  AA,	  Burgess	  DJ,	  et	  al.	  
Adding	  Postal	  Follow-­‐Up	  to	  a	  Web-­‐Based	  Survey	  of	  Primary	  Care	  and	  Gastroenterology	  Clinic	  Physician	  Chiefs	  Improved	  Response	  Rates	  but	  not	  Response	  Quality	  or	  Representativeness.	  Evaluation	  &	  the	  health	  professions	  2013.	  86.	   Canadian	  Community	  Health	  Survey	  (CCHS)	  Annual	  Component	  User	  Guide.	  In:	  StatisticsCanada,	  ed.	  Ottawa,ON,	  Canada2011.	  87.	   Pfeffermann	  D.	  The	  Role	  of	  Sampling	  Weights	  When	  Modeling	  Survey	  Data.	  International	  Statisical	  Review	  1993;61:317-­‐37.	  88.	   Bandolier.	  Intention-­‐to-­‐treat	  analysis	  (ITT).	  2007.	  89.	   Berenson	  ML,	  Levine	  DM,	  Krehbiel	  TC.	  Chapter	  12:	  Chi-­‐square	  Tests	  and	  Nonparametric	  Tests.	  	  Basic	  Business	  Statisitcs,	  12/E:	  Pearson	  educations;	  2012.	  90.	   Kalsekar	  I,	  Wagner	  JS,	  DiBonaventura	  M,	  Bates	  J,	  Forbes	  R,	  Hebden	  T.	  Comparison	  of	  health-­‐related	  quality	  of	  life	  among	  patients	  using	  atypical	  antipsychotics	  for	  treatment	  of	  depression:	  results	  from	  the	  National	  Health	  and	  Wellness	  Survey.	  Health	  and	  quality	  of	  life	  outcomes	  2012;10:81.	  91.	   Fox	  CK,	  Eisenberg	  ME,	  McMorris	  BJ,	  Pettingell	  SL,	  Borowsky	  IW.	  Survey	  of	  Minnesota	  parent	  attitudes	  regarding	  school-­‐based	  depression	  and	  suicide	  screening	  and	  education.	  Maternal	  and	  child	  health	  journal	  2013;17:456-­‐62.	  92.	   Smyth	  JD,	  Dillman	  DA,	  Christian	  LM,	  O'Neil	  AC.	  Using	  the	  Internet	  to	  Survey	  Small	  Towns	  and	  Communities:	  Limitations	  and	  Possibilities	  in	  the	  Early	  21st	  Century.	  American	  Behavirol	  Scientist	  2010;53:1423-­‐48.	  93.	   Porter	  SR,	  Umbach	  PD.	  Student	  Survey	  Response	  Rates	  Across	  Institutions:	  Why	  Do	  They	  Vary?	  Research	  in	  Higher	  Education	  2006;47:229-­‐47.	  94.	   Boyer	  CN,	  Adams	  DC,	  Lucero	  J.	  Rural	  Coverage	  Bias	  in	  Online	  Surveys?:	  Evidence	  from	  Oklahoma	  Water	  Managers.	  Journal	  of	  extension	  2010;48.	  95.	   Dillman	  DA.	  Reconsidering	  mail	  Survey	  Methods	  in	  an	  Internet	  World.	  2011.	  96.	   Dillman	  DA,	  Dolsen	  DE,	  Machlis	  GE.	  Increasing	  Response	  to	  Personally-­‐Delivered	  Mail-­‐Back	  Questionnaires.	  Journal	  of	  Offical	  Statistics	  1995;11:129-­‐39.	  	   124	  97.	   Couper	  MP.	  Technology	  Trends	  in	  Survey	  Data	  Collection.	  Social	  Science	  Computer	  Review	  2005;23:486-­‐501.	  98.	   Biemer	  PP,	  Lyberg	  LE.	  Introudction	  to	  Suvey	  Quality.	  Hoboken,	  NJ:	  Wiley;	  2003.	  99.	   Bourque	  L,	  Fielder	  EP.	  How	  to	  Conduct	  Self-­‐Administered	  and	  Mail	  Surveys	  Thousand	  Oaks,	  CA:	  SAGE;	  2003.	  100.	   Lyberg	  LE,	  Kasprzyk	  D.	  Data	  Collection	  Methods	  and	  Measurement	  Error:	  An	  Overview.	  	  Measurement	  Errors	  in	  Surveys.	  New	  York,	  NY:	  Wiley;	  1991.	  101.	   Brennan	  M,	  Rae	  N,	  Parackal	  M.	  Survey-­‐Based	  Experimental	  Research	  via	  the	  Web:	  Some	  Observations.	  Marketing	  Bulleting	  1999;10:83-­‐92.	  102.	   Simmons	  E,	  WIlmot	  A.	  Incentive	  payments	  on	  social	  surveys;	  A	  literature	  review.	  Social	  survey	  methodology	  Bulletin	  2004;53:1-­‐11.	  103.	   Ji	  P,	  Flay	  BR,	  Dubois	  DL,	  Brechling	  V,	  Day	  J,	  Cantillon	  D.	  
Consent	  form	  return	  rates	  for	  third-­‐grade	  urban	  elementary	  students.	  American	  journal	  of	  health	  behavior	  2006;30:467-­‐74.	  104.	   Khadjesari	  Z,	  Murray	  E,	  Kalaitzaki	  E,	  et	  al.	  Impact	  and	  costs	  of	  incentives	  to	  reduce	  attrition	  in	  online	  trials:	  two	  randomized	  controlled	  trials.	  Journal	  of	  medical	  Internet	  research	  2011;13:e26.	  105.	   Whiteman	  MK,	  Langenberg	  P,	  Kjerulff	  K,	  McCarter	  R,	  Flaws	  JA.	  A	  randomized	  trial	  of	  incentives	  to	  improve	  response	  rates	  to	  a	  mailed	  women's	  health	  questionnaire.	  Journal	  of	  women's	  health	  2003;12:821-­‐8.	  106.	   Siemiatycki	  J.	  A	  comparison	  of	  mail,	  telephone,	  and	  home	  interview	  strategies	  for	  household	  health	  surveys.	  American	  journal	  of	  public	  health	  1979;69:238-­‐45.	  107.	   McPhee	  C,	  Hastedt	  S.	  More	  Money?	  The	  Impact	  of	  Larger	  Incentives	  on	  Response	  Rates	  in	  a	  Two-­‐Phase	  Mail	  Survey.	  Federal	  Commitee	  on	  Statistical	  Methodology	  2012.	  108.	   Burchell	  B,	  Marsh	  C.	  Item	  nonresponse	  in	  mail	  surveys:	  Extent	  and	  correlates.	  Journal	  of	  Marketing	  Research	  1992;15.	  109.	   Helgeson	  JG,	  Ursic	  ML.	  The	  role	  of	  affective	  and	  cognitive	  decision-­‐making	  processes	  during	  questionnaire	  completion.	  Public	  Opinion	  Quartely	  1994;11:493-­‐510.	  	   125	  110.	   Herzog	  RA,	  Bachman	  JG.	  Effects	  of	  Questionnaire	  Length	  on	  Response	  Quality.	  Public	  Opinion	  Quartely	  1981;45.	  111.	   Lozar	  Manfreda	  K,	  Vehovar	  V.	  Survey	  Design	  Features	  Influencing	  Response	  Rates	  in	  Web	  Surveys.	  	  International	  Conference	  on	  Improving	  Surveys2002.	  112.	   Galesic	  M,	  Bosnjak	  M.	  Effects	  of	  Questionnaire	  Length	  on	  Participation	  and	  Indicators	  of	  Response	  Quality	  in	  a	  Web	  Survey.	  Public	  Opinion	  Quartely	  2009;73:349-­‐60.	  113.	   Houston	  MJ,	  Ford	  NM.	  Broadening	  the	  Scope	  of	  Methodological	  Research	  on	  Mail	  Surveys.	  Journal	  of	  Marketing	  Research	  1976;13:397-­‐403.	  114.	   James	  JM,	  Bolstein	  R.	  Large	  Monetary	  Incentives	  and	  their	  Effect	  on	  Mail	  Reesponse	  Surveys	  Rates.	  .	  Public	  Opinion	  Quartely	  1992;56:442-­‐53.	  	  115.	   MacElroy	  B.	  Variables	  influencing	  dropout	  rates	  in	  Web-­‐based	  surveys.	  Quirk's	  Marketing	  Research	  Review	  2000.	  116.	   Galesic	  M.	  Effects	  of	  questionnaire	  length	  on	  response	  rates:	  review	  of	  findings	  and	  guidelines	  for	  future	  research.	  	  General	  Online	  Research	  Conference	  (GOR)	  2002.	  Stuttgard/Hohenheim,	  Germany2002.	  117.	   Krosnick	  JA,	  Holbrook	  AL,	  Berent	  MK,	  et	  al.	  The	  Impact	  of	  "No	  Opinion"	  Response	  Options	  on	  Data	  Quality:	  Non-­‐Attitude	  Reduction	  or	  an	  Invitation	  to	  Satisfice?	  Public	  Opinion	  Quartely	  2002;66:371-­‐403.	  118.	   Yan	  T,	  Conrad	  GG,	  Tourangeau	  R,	  Couper	  MP.	  Should	  I	  Stay	  or	  Should	  I	  go:	  The	  effects	  of	  progress	  feedback,	  promised	  task	  duration,	  and	  length	  of	  questionnaire	  on	  completing	  web	  surveys.	  International	  Jounal	  of	  Public	  Opinion	  Research	  2011;23:131-­‐47.	  119.	   Boltz	  MG.	  Time	  estimation	  and	  expectancies.	  .	  
Memory	  and	  Cognition	  1993;21:853-­‐63.	  120.	   Heerwegh	  D,	  Looseveldt	  G,	  Vanhove	  T,	  Matthijs	  T.	  Effects	  of	  personalization	  on	  web	  survey	  data	  response	  rates	  and	  data	  quality.	  International	  Journal	  of	  Social	  Research	  Methodology	  2004;8:85-­‐99.	  121.	   Porter	  SR,	  Whitecomb	  ME.	  Non-­‐response	  in	  student	  surveys:	  The	  role	  of	  demographics,	  engagement	  and	  personality.	  Research	  in	  Higher	  Education	  2005;46:127-­‐52.	  	   126	  122.	   Groves	  RM,	  Floyd	  J.	  Fowler	  J,	  Couper	  MP,	  Lepkowski	  JM,	  Singer	  E,	  Tourangeau	  R.	  Survey	  Methodology,	  2nd	  Edition.	  Hoboken,	  NJ	  2009.	  123.	   Anderson	  A,	  Tancreto	  J.	  Using	  the	  web	  for	  data	  collection	  at	  the	  United	  States	  Census	  Bureau.	  Washington,	  DC	  2011.	  124.	   Languilles	  JS,	  Williams	  EA.	  Can	  lottery	  incentives	  boost	  web	  survey	  response	  rates?	  Findings	  from	  four	  experiments.	  Research	  in	  Higher	  Education	  2011;52:537-­‐53.	  125.	   Singer	  E.	  The	  use	  of	  incentives	  to	  reduce	  nonresponse	  in	  household	  surveys.	  In	  Survey	  nonresponse.	  	  Survey	  Nonresponse.	  New	  York,	  NY:	  Wiley;	  2002.	  126.	   Rosoff	  PM,	  Werner	  C,	  Clipp	  EC,	  Guill	  AB,	  Bonner	  M,	  Demark-­‐Wahnefried	  W.	  Response	  rates	  to	  a	  mailed	  survey	  targeting	  childhood	  cancer	  survivors:	  a	  comparison	  of	  conditional	  versus	  unconditional	  incentives.	  Cancer	  epidemiology,	  biomarkers	  &	  prevention	  :	  a	  publication	  of	  the	  American	  Association	  for	  Cancer	  Research,	  cosponsored	  by	  the	  American	  Society	  of	  Preventive	  Oncology	  2005;14:1330-­‐2.	  127.	   Thomas	  D.	  The	  Census	  and	  the	  evolution	  of	  gender	  roles	  in	  early	  20th	  century	  Canada.	  In:	  StatisticsCanada,	  ed.	  Ottawa,ON,Canada2010.	  128.	   Dal	  Grande	  E,	  Taylor	  AW.	  Sampling	  and	  coverage	  issues	  of	  telephone	  surveys	  used	  for	  collecting	  health	  information	  in	  Australia:	  results	  from	  a	  face-­‐to-­‐face	  survey	  from	  1999	  to	  2008.	  BMC	  medical	  research	  methodology	  2010;10:77.	  129.	   Residential	  Telephone	  Service	  Survey.	  In:	  StatisticsCanada,	  ed.	  Ottawa,ON,Canada	  2010.	  130.	   Clark	  W.	  Delayed	  transitions	  of	  young	  adults.	  In:	  StatisticsCanada,	  ed.	  Ottawa	  2009.	  131.	   Zickuhr	  K,	  Madden	  M.	  Older	  adults	  and	  internet	  use.	  	  Pew	  Research	  Center's	  Internet	  and	  American	  Life	  Project	  2012.	  132.	   Niemi	  GG,	  Portney	  K,	  King	  D.	  Sampling	  Young	  Adults:	  The	  Effects	  of	  Survey	  Mode	  and	  Sampling	  Method	  on	  Inferences	  about	  the	  Political	  Behavior	  of	  College	  Students.	  	  Annual	  meeting	  of	  the	  American	  Political	  Science	  Association.	  Boston,	  MA2008.	  133.	   Kirby	  EH,	  Kawashima-­‐Ginsberg	  K.	  The	  Youth	  Vote	  in	  2008.	  Medford,	  MA2009.	  	   127	  134.	   Howe	  P.	  The	  Electoral	  Participation	  of	  Young	  Canadians.	  In:	  ElectionsCanada,	  ed.	  Ottawa,	  ON,Canada	  2007.	  135.	   J.H.Pammett,	  LeDuc	  L.	  Explaining	  the	  Turnout	  Decline	  in	  Canadian	  Federal	  Electrions;	  A	  New	  Survey	  of	  Non	  Voters.	  In:	  Canada	  E,	  ed.	  Ottawa,	  Canada2003.	  136.	   Adua	  L,	  Sharp	  JS.	  
Appendix A – Demographics Analysis

The demographics of the respondents in each survey group (Table 7.1) were examined.

1) Age – One-way analysis of variance (ANOVA)

Respondents' ages were compared across all groups. A subsequent Tukey's HSD test was used to visualize the pairwise differences in mean age between groups (Figure A1).

H0: All group mean ages are equal
HA: At least one group mean age differs from the others

P value = 1.08e-13 ***

The ANOVA result suggested that at least one group's mean age differs significantly from that of the other groups. However, ANOVA is an omnibus test that does not identify where the differences lie, so Tukey's HSD test was used to examine which pairs of groups differ.
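The R-style output reproduced in Appendix B suggests these analyses were carried out in R. As a minimal sketch of this step, assuming a respondent-level data frame named bchs with a numeric age column and a seven-level group factor (A to G; both names are introduced here for illustration and are not taken from the thesis), the omnibus test and the pairwise comparisons could be run as follows:

# One-way ANOVA of respondent age across the seven survey groups,
# followed by Tukey's HSD for all pairwise mean differences.
bchs$group <- factor(bchs$group)            # ensure group is a factor

fit <- aov(age ~ group, data = bchs)        # one-way ANOVA
summary(fit)                                # omnibus F test

tukey <- TukeyHSD(fit, conf.level = 0.95)   # all 21 pairwise contrasts
tukey                                       # differences, 95% CIs, adjusted p values (cf. Table A1)
plot(tukey)                                 # confidence-interval plot (cf. Figure A1)

TukeyHSD() reports family-wise 95% confidence intervals and p values adjusted for the 21 comparisons, which is consistent with the presentation in Table A1.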
Tukey's HSD test showed that the mean ages of groups E and G were significantly different from those of the other groups (Figure A1).

Table A1 – Difference in mean age between survey groups, with 95% CIs and p values from Tukey's HSD test

Comparison   Diff    Lower 95% CI   Upper 95% CI   p value
B-A          -0.81   -5.21           3.59          1.00
C-A          -1.46   -5.81           2.90          0.96
D-A           1.51   -3.12           6.14          0.96
E-A           5.34    1.90           8.77          < 0.01
F-A          -0.63   -4.48           3.21          1.00
G-A           5.41    1.81           9.02          < 0.01
C-B          -0.65   -5.37           4.08          1.00
D-B           2.32   -2.65           7.30          0.81
E-B           6.15    2.26          10.04          < 0.01
F-B           0.18   -4.08           4.44          1.00
G-B           6.23    2.18          10.27          < 0.01
D-C           2.97   -1.97           7.90          0.57
E-C           6.79    2.95          10.64          < 0.01
F-C           0.82   -3.39           5.04          1.00
G-C           6.87    2.88          10.86          < 0.01
E-D           3.83   -0.32           7.97          0.09
F-D          -2.14   -6.63           2.35          0.80
G-D           3.90   -0.38           8.19          0.10
F-E          -5.97   -9.22          -2.72          < 0.01
G-E           0.08   -2.89           3.04          1.00
G-F           6.05    2.62           9.48          < 0.01

Figure A1 – Differences in mean age between the 7 survey groups (95% confidence intervals based on Tukey's Honest Significant Difference test)

2) Gender – Chi-square test of independence

The chi-square test of independence was used to examine whether the gender distributions of the BCHS survey groups were comparable (a code sketch of the pairwise comparisons follows Table A2). The results suggested that the gender distribution of group E respondents differs significantly from that of every other BCHS group (p < 0.0001); the only other pairwise difference reaching significance was between groups D and F (p = 0.0334) (Table A2).

Table A2 – Analysis of pairwise differences in the distribution of gender among the 7 survey groups (p values from a chi-square test of independence)

Survey Groups   A   B        C        D        E         F         G
A                   0.8346   0.9563   0.1474   <0.0001   0.5061    0.9283
B                            0.7508   0.5126   <0.0001   0.3707    0.6952
C                                     0.1393   <0.0001   0.6609    1.0000
D                                              <0.0001   0.0334    0.0870
E                                                        <0.0001   <0.0001
F                                                                  0.6569

df = 1
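As a sketch of how the pairwise comparisons in Table A2 (and, for education, Table A3 below) could be computed, again using the illustrative bchs data frame, here with a gender column:

# Pairwise chi-square tests of independence for gender across the
# seven groups; fills the upper triangle as in Table A2.
groups <- levels(bchs$group)
pvals  <- matrix(NA_real_, length(groups), length(groups),
                 dimnames = list(groups, groups))
for (i in seq_along(groups)) {
  for (j in seq_along(groups)) {
    if (i < j) {
      sub <- droplevels(subset(bchs, group %in% c(groups[i], groups[j])))
      tab <- table(sub$group, sub$gender)        # 2 x 2 contingency table
      pvals[i, j] <- chisq.test(tab)$p.value     # df = 1
    }
  }
}
round(pvals, 4)

Note that chisq.test() applies Yates' continuity correction to 2 x 2 tables by default; the thesis does not state whether a corrected or uncorrected statistic was used, so p values from this sketch may differ slightly from the table (set correct = FALSE for the uncorrected test).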
3) Education – Chi-square test of independence

The chi-square test of independence was used to examine whether the distributions of education were comparable across the BCHS survey groups. The results suggested that the education distributions were comparable across all pairs of groups, with the exception of groups F and G (p = 0.0059; Table A3).

Table A3 – Analysis of pairwise differences in the distribution of education among the 7 survey groups (p values from a chi-square test of independence)

Survey Groups   A   B        C        D        E        F        G
A                   0.6005   0.2989   0.6614   0.7074   0.1623   0.2301
B                            0.6767   0.8850   0.7100   0.0995   0.8783
C                                     0.7188   0.6652   0.4735   0.2998
D                                              0.9089   0.0796   0.9503
E                                                       0.1071   0.4156
F                                                                0.0059*

df = 1

Appendix B – Multivariable Logistic Regression Model with Interaction

A possible interaction between the instant lottery and the prepaid cash incentive was examined by incorporating an interaction term into the multivariable logistic model (a code sketch follows Table B2).

Table B1 – Logistic regression analysis of the effect of 5 survey design factors on survey response, with an interaction term between prepaid cash and instant lottery (coefficients and 95% CIs)

Survey Factors    Coefficient   2.50%    97.50%   p value
Intercept         -1.58         -1.75    -1.42    < 0.001***
InfoCan            0.09         -0.08     0.26    0.29
Lottery            0.18         -0.05     0.41    0.12
Coin               0.24          0.02     0.47    0.04*
Short              0.26          0.07     0.45    0.01**
Paper              0.81          0.54     1.09    < 0.001***
Lottery * Coin     0.22         -0.08     0.53    0.15

Table B2 – Logistic regression analysis of the effect of 5 survey design factors on survey response, with an interaction term between prepaid cash and instant lottery (odds ratios and 95% CIs)

Survey Factors                   Odds ratio   2.50%   97.50%
InfoCan                          1.09         0.92    1.29
  Canada Post (ref)              1.00
Instant lottery                  1.20         0.95    1.50
  End-of-study lottery (ref)     1.00
Coin                             1.27         1.02    1.59
  No coin (ref)                  1.00
Short survey                     1.29         1.07    1.57
  Long survey (ref)              1.00
Paper survey                     2.26         1.72    2.97
  Online survey (ref)            1.00
Lottery * Coin                   1.25         0.92    1.70
  No lottery or coin (ref)       1.00
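As a minimal sketch of this model, assuming a household-level data frame bchs_hh with a binary response indicator (1 = survey returned) and 0/1 indicators for the five design factors, with variable names invented here to mirror the tables:

# Multivariable logistic regression with a lottery-by-coin interaction.
m_int <- glm(response ~ infocan + lottery + coin + short + paper +
               lottery:coin,
             family = binomial, data = bchs_hh)
summary(m_int)     # coefficients and p values, as in Table B1

# Odds ratios with Wald 95% CIs, as in Table B2 (profile-likelihood
# intervals via confint() would give slightly different bounds).
exp(cbind(OR = coef(m_int), confint.default(m_int)))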
Likelihood ratio test

The likelihood ratio test was used to test whether the interaction term significantly improved the model.

H0: The two models fit the data equally well
HA: The full model with the interaction term fits significantly better than the original model

Analysis of Deviance Table

Model 1 (original): Response ~ Form + Length + Lottery + Incentive + Source
Model 2 (full): Response ~ Form + Length + Lottery + Incentive + Source + Lottery * Incentive

      Resid. Df   Resid. Dev   Df   Deviance   Pr(>Chi)
1     7994        9216.5
2     7993        9214.4       1    2.0367     0.1535

(The full model estimates one additional parameter and therefore has one fewer residual degree of freedom.) Because the p value was not < 0.05 (p = 0.1535), there was no evidence to reject the null hypothesis: the full model was not significantly better than the original model. The interaction term was therefore excluded from the final logistic regression.
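Continuing the illustrative fit above, the same nested-model comparison could be produced with:

# Likelihood ratio test of the nested models: the original model drops
# the lottery-by-coin interaction from the full model fit earlier.
m_orig <- update(m_int, . ~ . - lottery:coin)
anova(m_orig, m_int, test = "Chisq")   # analysis-of-deviance table above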
Appendix C – Data Representativeness Subgroup Analysis

CCHS and BCHS subgroups were collapsed in order to test for significant differences between specifically chosen subgroup categories within the socio-demographic variables. The analysis was conducted using the chi-square test of independence (a code sketch follows Table C5).

Table C1 – Analysis of differences in the percentage of persons ≤ 29 years of age between the CCHS and the 7 BCHS sampling groups

Survey   N      ≤ 29 (%)   > 29 (%)   P value
CCHS     7102   20.3       79.7
A        171    10.1       89.9       0.0012**
B        198    9.1        90.9       0.0001***
C        208    10.7       89.3       0.0008***
D        282    8.5        91.4       <0.0001***
E        301    4.7        95.3       <0.0001***
F        337    14.8       85.2       0.0174*
G        434    5.9        93.4       <0.0001***

df = 1
Note: data exclude "not stated" responses
*    = p value < 0.05
**   = p value < 0.01
***  = p value < 0.001

Table C2 – Analysis of differences in the percentage of married individuals between the CCHS and the 7 BCHS sampling groups

Survey   N      Married (%)   Other (%)   P value
CCHS     7102   55.2          44.8
A        171    54.8          41.7        0.6985
B        198    53.5          44.5        0.9203
C        208    58.0          38.5        0.1797
D        282    51.2          44.7        0.5967
E        301    70.4          26.7        <0.0001***
F        337    54.2          44.3        1.0000
G        434    51.9          34.5        0.0750

df = 1
Note: data exclude "not stated" responses
*    = p value < 0.05
**   = p value < 0.01
***  = p value < 0.001

Table C3 – Analysis of differences in the percentage of single/never married individuals between the CCHS and the 7 BCHS sampling groups

Survey   N      Single/Never Married (%)   Other (%)   P value
CCHS     7102   22.9                       76.8
A        171    16.4                       80.1        0.0853
B        198    17.2                       80.8        0.0902
C        208    18.0                       78.5        0.1615
D        282    17.3                       78.6        0.0706
E        301    8.8                        88.3        <0.0001***
F        337    16.9                       81.6        0.0165*
G        434    11.0                       75.4        <0.0001***

df = 1
Note: data exclude "not stated" responses
*    = p value < 0.05
**   = p value < 0.01
***  = p value < 0.001

Table C4 – Analysis of differences in the percentage of persons reporting excellent health between the CCHS and the 7 BCHS sampling groups

Survey   N      Excellent (%)   Other (%)   P value
CCHS     7097   22.0            77.9
A        171    15.5            84.5        0.0600
B        198    13.6            85.8        0.0071**
C        208    13.7            85.9        0.0047**
D        282    19.2            80.8        0.2857
E        301    14.5            84.9        0.0032**
F        337    16.3            83.4        0.0175*
G        434    11.2            79.5        <0.0001***

df = 1
Note: data exclude "not stated" responses
*    = p value < 0.05
**   = p value < 0.01
***  = p value < 0.001

Table C5 – Analysis of differences in the percentage of persons reporting total annual income ≥ $80,000 between the CCHS and the 7 BCHS sampling groups

Survey   N      ≥ $80,000 (%)   Other (%)   P value
CCHS     7102   30.5            49.0
A        171    29.2            57.7        0.2674
B        198    38.4            51.5        0.2753
C        208    32.7            53.2        1.0000
D        282    34.5            52.0        0.7784
E        301    40.5            47.1        0.0128*
F        337    33.1            58.7        0.4624
G        434    28.4            50.3        0.4274

df = 1
Note: data exclude "not stated" responses; percentages may therefore not sum to 100
*    = p value < 0.05
**   = p value < 0.01
***  = p value < 0.001
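As a sketch of one such comparison, the group A row of Table C1 (10.1% of 171 BCHS respondents aged ≤ 29 versus 20.3% of 7,102 CCHS respondents) can be approximately reproduced by back-calculating counts from the reported percentages; the counts below are therefore rounded estimates, not the original data:

# Two-sample test of equal proportions (chi-square with df = 1),
# comparing BCHS group A with the CCHS benchmark for age <= 29.
x <- c(groupA = round(0.101 * 171), cchs = round(0.203 * 7102))  # approx. counts aged <= 29
n <- c(171, 7102)                                                # group sizes
prop.test(x, n)   # applies a continuity correction by default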
