BACKGROUND Both Malaysia and Singapore inherited the British-style National Health Service and continued to finance their healthcare systems from government revenue. 1 Over the years, the dissimilarity in the rate of socioeconomic development has significantly impacted the way resources are channelled to public services, particularly the health service. The Malaysian government funds medical and public health services through revenues derived from taxes, other government revenues and income earned from government corporatised enterprises. 2 Singapore, in contrast, adopts a hybrid system to finance its health system.

--- Strengths and limitations of this study

- The unprecedented cross-national comparison of patients with hypertension sharing a similar sociocultural background but different economic and health financing environments contributes to this study's strength.
- The study examined the sociodemographic and disease characteristic factors which influenced Hypertension Self-Care Profile (HTN-SCP) domain scores in Malaysia and Singapore among people aged 40 years and above.
- The large overall sample size enabled in-depth analyses of the individual domains of the HTN-SCP.
- The cross-sectional study design implies associations but not causation and does not allow conclusions about changes in behaviour during the course of illness.
- The sample size and recruitment methods differ between the countries, suggesting that the results may not be generalisable to the national populations; the comparison of HTN-SCP domain mean scores between the two countries must be interpreted with caution.

BMJ Open: first published as 10.1136/bmjopen-2020-044192 on 14 June 2021.

--- Open access

Under this hybrid system, the cost of care is funded jointly by the government and the individual through insurance, revenue from taxes and personal medical savings accounts.
3 The intent to cultivate personal responsibility towards taking charge of one's health underpins the shared healthcare financing concept. Such an approach is postulated to shape an individual's overall perceptions of and attitudes towards self-efficacy and self-care, 4 which are pivotal for the successful management of long-term, non-communicable diseases such as hypertension. Hypertension is a significant cause of morbidity and mortality arising from cardiovascular and kidney disease. 5 The prevalence of hypertension among adults is 30% in Malaysia and 21.5% in Singapore. 6 7 However, a significant proportion of the affected population has yet to attain treatment goals. In Singapore, 49.7% of patients treated for hypertension in primary care were reported to have good blood pressure (BP) control, 7 while in Malaysia the proportion of patients with good hypertension control is lower, at 37.4%. 8 The difference in the proportion with good BP control warrants a comparison of health behaviour profiles in the two countries, as they have similar multi-ethnic backgrounds and culture. The management of hypertension does not only encompass pharmacological treatment prescribed by the attending physician; patients themselves are required to perform self-care measures to improve their BP control. 9 Self-care includes maintaining a healthy diet, performing regular physical activity, achieving ideal body weight and avoiding unhealthy lifestyle habits such as smoking. 10 The success of these activities requires behaviour change, motivation and self-efficacy. In Malaysia, hypertension self-care profiles among adults with hypertension aged 18 years and above were found to be moderate. 11 To date, such data in Singapore are relatively lacking.
This study aimed to compare the sociodemography, disease characteristics and HTN-SCP scores between study populations in Malaysia and Singapore, and to determine the factors influencing hypertension self-care among people aged 40 years and above. --- METHODOLOGY These were cross-national surveys of the self-care profiles of patients with hypertension conducted among two study populations in two countries between October 2016 and June 2017. The study was conducted in three urban primary care clinics in Selangor, Malaysia, and in a polyclinic in Bukit Merah, Singapore. The inclusion criteria were adults aged 40 years and above with underlying hypertension diagnosed by a physician. Pregnant women and those with underlying psychiatric illness or cognitive impairment were excluded. The estimated sample size in Malaysia was 720, based on the mean score of self-care management among patients with hypertension aged <60 years and ≥60 years 12 using the formula of Lemeshow et al 13 with a 95% confidence level, 90% power and a 20% non-response rate. However, in comparison with our previous reporting, this study includes only people aged 40 years and above, which gave a total of 702 participants in Malaysia. As the Singapore study had no prior literature on the percentage of patients with high self-care, 50% was adopted to obtain the maximum sample size. With a 95% CI estimate and 5% precision, the sample size required was 385. The sample size was increased to 450 to account for a 15% incomplete data and non-response rate. --- Study instrument In this study, the Hypertension Self-Care Profile (HTN-SCP) was used to assess hypertension self-care. Two theories underpinned this tool's development: Orem's self-care model and motivational interviewing (MI). 14 15 Orem's model describes how people enable self-care by performing specific actions to manage their illness.
14 Understanding the reasons behind these actions was crucial to self-care. MI facilitates the self-care process by promoting commitment and developing confidence for behaviour change. 15 Thus, the HTN-SCP tool uses the domains of behaviour, motivation and self-efficacy to assess self-care among patients with hypertension. 16 The HTN-SCP is a reliable tool with good internal consistency. 16 It consists of 60 items across three domains: behaviour, motivation and self-efficacy. It has been validated in Singapore, [17][18][19] and the Cronbach's alpha for the subdomains ranges from 0.857 to 0.948. For the Malay version, the Cronbach's alpha for the subdomains ranges from 0.851 to 0.945, whereas for the Mandarin version it ranges from 0.838 to 0.929. 18 19 There are 20 items in each domain, and each domain score ranges from 0 to 80, as each item uses a 4-point Likert scale. Higher scores indicate higher levels of self-care behaviour, motivation and self-efficacy. A pretest of the questionnaire involving 30 participants was conducted in Malaysia to determine the questionnaire's feasibility; following the pretest, minor changes were made. The questionnaire was available in three languages (English, Malay and Mandarin) for participants to select their preferred version. [17][18][19] It consists of three sections: sociodemographic characteristics, medical profiles on hypertension and the HTN-SCP tool. BP readings were taken from the patients' medical records. The definition of body mass index (BMI) was based on the WHO recommendation for Asian populations. 20 Underweight is defined as BMI <18.5 kg/m2, normal weight as BMI 18.5-22.9 kg/m2, overweight as BMI 23-27.4 kg/m2 and obese as BMI ≥27.5 kg/m2. The definition of controlled BP was based on the Eighth Joint National Committee (JNC 8) guidelines.
21 The BP of patients with underlying hypertension without diabetes was considered controlled if their BP was <140/90 mm Hg, regardless of age. --- Data collection Malaysia In Malaysia, participants were recruited using a systematic random sampling method. A sampling interval of two was used as a constant difference between participants. The first patient (the reference point) was chosen by drawing lots. Subsequently, every alternate patient was approached for study participation. --- Singapore In Singapore, potential participants were screened for eligibility at the waiting area outside the clinic consultation rooms and were invited to participate in the study. Patients gave written informed consent to join the study. We obtained their sociodemographic data via a self-administered proforma. The HTN-SCP questionnaire was administered through a face-to-face interview. We verified the patients' clinical information against their latest medical records. --- Data analysis We used SPSS V.22.0 for the data analysis. We used descriptive statistics to describe the demographic and disease profiles of the patients: percentages and frequencies for categorical variables, and mean and SD for continuous variables if they were normally distributed. The normality of continuous data was assessed based on the z-scores of skewness and kurtosis, the Kolmogorov-Smirnov test, histograms and Q-Q plots. We used the independent t-test or one-way ANOVA (analysis of variance) to determine associations for numerical data, and the χ2 or Fisher's exact test for categorical data. The significance level was set at p<0.05. A multiple linear regression (MLR) model was used to determine the predictors of hypertension self-care. Variables with p<0.25 from the univariate analysis were included in the MLR model. The MLR results are reported as beta coefficients, SEs and 95% CIs.
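Several of the quantitative steps described in the methods — the single-proportion sample-size calculation used for the Singapore arm, the WHO Asian BMI categories, and the regression reporting of adjusted beta, SE and 95% CI — can be sketched as follows. This is an illustrative reconstruction on synthetic data with hypothetical variable names, not the authors' SPSS code, and it uses a normal approximation for the CI rather than SPSS's t-based interval.

```python
import math
import numpy as np

def sample_size_proportion(p=0.5, precision=0.05, z=1.96):
    """Minimum n to estimate a proportion p to the given absolute precision
    at 95% confidence; p = 0.5 maximises p*(1 - p), giving the largest n."""
    return math.ceil(z**2 * p * (1 - p) / precision**2)

def bmi_category_asian(bmi):
    """WHO cut-offs for Asian populations cited in the text:
    <18.5 underweight, 18.5-22.9 normal, 23-27.4 overweight, >=27.5 obese."""
    if bmi < 18.5:
        return "underweight"
    if bmi < 23.0:
        return "normal"
    if bmi < 27.5:
        return "overweight"
    return "obese"

# Multiple linear regression on synthetic data, reporting the same
# quantities as the paper (adjusted beta, SE, 95% CI). The predictors
# and coefficient values here are hypothetical.
rng = np.random.default_rng(0)
n = 200
age60 = rng.integers(0, 2, n)        # 1 if aged 60 years and above
tertiary = rng.integers(0, 2, n)     # 1 if tertiary-educated
score = 60 + 2.0 * age60 + 4.3 * tertiary + rng.normal(0, 8, n)

X = np.column_stack([np.ones(n), age60, tertiary])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)   # adjusted betas
resid = score - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])          # residual variance
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
ci_low, ci_high = beta - 1.96 * se, beta + 1.96 * se  # normal approx.
```

With the defaults, `sample_size_proportion()` returns 385, matching the figure reported for Singapore; one common way of inflating for a 15% incomplete-data and non-response rate, `math.ceil(385 / 0.85)`, gives 453, close to the 450 the study adopted (the exact rounding convention used is not stated in the text).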
--- Patient and public involvement This research was done with the involvement of patients as research participants. They were not involved in the study design, recruitment, interpretation of results or writing of the report. --- RESULTS A total of 1123 adults with hypertension participated in this study, of whom 702 were Malaysians and 421 were Singaporeans. The response rate in Malaysia was 93.8% (761/811); non-participation was due to language barriers and time constraints. The response rate was not reported in Singapore. The proportion of participants aged 60 years and above was higher in Singapore (63.4%) than in Malaysia (54.6%). The proportion of female participants was 50.6% among the Singaporeans and 49.4% among the Malaysians. Malays formed the largest ethnic group in Malaysia (42.9%), whereas Chinese constituted the highest proportion in Singapore (69.6%). More participants were married in Malaysia (81.6%) than in Singapore (67%). More patients had tertiary education in Singapore (22.3%) than in Malaysia (13.0%). Table 1 summarises the characteristics of the study participants. A higher proportion of participants from Malaysia were on three or more antihypertensive medications (20.9%) compared with Singapore (14.3%) (p=0.023). Regarding BP control to target, Malaysia (33.8%) had a significantly lower proportion attaining the treatment goal compared with Singapore (74.6%). Significantly more Malaysian participants had diabetes mellitus (65.8%) compared with Singaporean participants (46.8%) (p<0.001) (table 2). Table 3 shows the HTN-SCP scores of the participants. The mean total HTN-SCP score was significantly higher among Singaporean participants (mean 189.9, SD 27.6) than among Malaysian participants (mean 184.1, SD 22.8) (p<0.001).
Similarly, the mean scores of all the subdomains, motivation (p<0.001), self-efficacy (p<0.001) and behaviour (p<0.001), were significantly higher among the Singaporean participants than among the Malaysian participants. Detailed results of the associations between HTN-SCP behaviour, motivation and self-efficacy scores and sociodemographic factors and disease characteristics are in online supplemental files 1-3. As shown in table 4, in both countries the factors significantly associated with HTN-SCP behaviour mean scores were age 60 years and above (Malaysia: adjusted beta=2.047, 95% CI 0.728 to 3.365, p=0.002; Singapore: adjusted beta=2.473, 95% CI 0.671 to 4.275, p=0.007), Indian ethnicity (Malaysia: adjusted beta=4.389, 95% CI 2.614 to 6.164, p<0.001; Singapore: adjusted beta=3.271, 95% CI 1.09 to 5.452, p=0.003) and tertiary education (Malaysia: adjusted beta=4.274, 95% CI 2.175 to 6.373, p<0.001; Singapore: adjusted beta=4.243, 95% CI 1.857 to 6.629, p<0.001). For Malaysia, Malay ethnicity (adjusted beta=3.192, 95% CI 1.719 to 4.665, p<0.001) was also associated with higher HTN-SCP behaviour mean scores. For Singapore, other factors associated with HTN-SCP behaviour were being a woman (adjusted beta=1.864, 95% CI 0.133 to 3.595, p=0.035), other ethnicities (adjusted beta=9.25, 95% CI 2.714 to 15.786, p=0.006) and secondary education (adjusted beta=3.184, 95% CI 1.28 to 5.09, p=0.001). Table 4 also summarises the association between HTN-SCP motivation mean scores and sociodemographic and disease characteristics among the Malaysian and Singaporean participants. In both countries, the HTN-SCP motivation mean scores were significantly associated with Indian ethnicity (Malaysia: adjusted beta=5.099, 95% CI 3.359 to 6.838, p<0.001; Singapore: adjusted beta=3.374, p<0.001) and with tertiary education (Singapore: adjusted beta=5.528, 95% CI 2.863 to 8.193, p<0.001).
For Malaysia, Malay ethnicity (adjusted beta=4.339, 95% CI 2.857 to 5.82, p<0.001) was also associated with higher HTN-SCP motivation mean scores. Table 4 further summarises the association between HTN-SCP self-efficacy mean scores and the participants' sociodemographic factors and disease characteristics. In both countries, the factors significantly associated with HTN-SCP self-efficacy mean scores were Indian ethnicity (Malaysia: adjusted beta=6.174, 95% CI 4.433 to 7.914, p<0.001; Singapore: adjusted beta=3.706, 95% CI 1.163 to 6.25, p=0.004) and tertiary education (Malaysia: adjusted beta=4.752, 95% CI 2.687 to 6.818, p<0.001; Singapore: adjusted beta=4.179, 95% CI 1.51 to 6.847, p<0.001). For Malaysia, Malay ethnicity (adjusted beta=4.003, 95% CI 2.537 to 5.468, p<0.001) and being a woman (adjusted beta=1.747, 95% CI 0.475 to 3.02, p=0.007) were associated with higher HTN-SCP self-efficacy mean scores. For Singapore, other factors associated with HTN-SCP self-efficacy were other ethnicities (adjusted beta=8.4, 95% CI 0.696 to 16.104, p=0.033) and secondary education (adjusted beta=3.921, 95% CI 1.698 to 6.145, p=0.001). --- DISCUSSION --- Summary of findings Consistent with the national population compositions, more Malays and more Chinese were present in the Malaysian and Singaporean study populations, respectively. Demographically, the Singaporean study population comprised higher proportions of those aged 60 years and older (63.4% vs 54.6%) and of those educated up to secondary level (87.0% vs 77.7%). Nearly half of the participants from both countries were treated with at least one antihypertensive medication, with a significantly higher proportion of the Malaysian study population on three or more such medications (p=0.023). In terms of control, more than half of the Singaporean participants attained the BP control goal based on the JNC 8 guidelines, with fewer in Malaysia attaining the mark.
21 Singaporean participants in this study had significantly higher mean total HTN-SCP scores. In both countries, HTN-SCP behaviour, motivation and self-efficacy were associated with Indian ethnicity and tertiary education. The HTN-SCP behaviour score was associated with age 60 years and above in both countries. The HTN-SCP motivation mean scores were associated with secondary education level in both countries. For Malaysia, Malay ethnicity was associated with higher HTN-SCP behaviour, motivation and self-efficacy scores. For Singapore, other factors associated with HTN-SCP behaviour and self-efficacy mean scores were other ethnicities and secondary education. Being a woman was associated with higher HTN-SCP behaviour mean scores in Singapore and higher HTN-SCP self-efficacy scores in Malaysia. --- Hypertension: the impact of self-care on health outcomes In terms of control, three-quarters of the Singaporean participants in this study attained the BP control goal based on the JNC 8 guidelines, with significantly fewer in Malaysia reaching the mark (73.6% vs 33.8%). A possible explanation could be the higher proportion with a tertiary education background (22.3% vs 13%, p=0.001) and the smaller proportion of patients with diabetes (46.8% vs 65.8%, p<0.001) in the Singaporean study population. In this study, Singaporean participants attained significantly higher mean total HTN-SCP scores than their counterparts in Malaysia, and these results apply to all the domains of the tool: behaviour, motivation and self-efficacy. Nearly two-thirds (62.2%) of Singaporean participants achieved the BP goal versus one-third of those from Malaysia (34.5%). These findings are similar to the prevalence of BP controlled to target in population-based studies. 17 Higher HTN-SCP scores reflect higher levels of self-efficacy and self-care; self-efficacy empowers patients to take on daily self-care measures that control their BP and reduce cardiovascular risks.
22 The higher total HTN-SCP scores among Singaporean patients may be partly due to Singapore's healthcare system and policy, 23 which is designed to enable the population to take greater responsibility for managing their health through co-shared healthcare financing, comprehensive individual and community empowerment, and self-management programmes. [24][25][26] Nevertheless, the implementation of these programmes 27 28 remains challenging, with hindrances in reaching all patients, particularly those with lower health literacy and motivation. 29 30 --- Comparing self-care profiles In this study, Indian ethnicity was associated with better self-care scores in all HTN-SCP domains in both countries compared with Chinese and Malay ethnicities. Despite good progress in healthcare accessibility, ethnic health disparity is still a challenge in both countries. While good self-care will result in better health outcomes, studies have shown that the incidence of metabolic syndrome, including raised BP, is high among those of Indian ethnicity, with significant mortality risk in both Indian and Malay ethnicities. [31][32][33] Although there may be a potential cultural influence towards reporting desirable outcomes among patients of Indian ethnicity, further exploration may be of value to examine other factors, including the role of genetics in cardiovascular outcomes. We also found that participants of Malay ethnicity in the Malaysian study population had better self-care scores in all HTN-SCP domains than those in the Singaporean study population. These findings may be related to ethnicity and the medium of language used by healthcare staff in the primary care setting. The majority of Malaysian healthcare staff are of Malay ethnicity, with the Malay language as the primary medium of communication. The shared medium of language may have eased access to, and understanding of, the health education provided by the system across all aspects of self-care.
34 Behaviour mean scores were significantly associated with age 60 years and above in both countries. Older patients with hypertension have been reported to be more compliant with their BP monitoring and more motivated to maintain their weight. 35 36 A longer duration since diagnosis increases their engagement with the health system over time, which might improve their knowledge about hypertension and their coping skills for managing a chronic condition. Studies have shown that a higher education level is associated with adherence to self-care activities, 30 37 38 as reflected in this study's results. Being more educated allows people with hypertension to access and understand health information and resources to manage their health better. 11 39 They may be mindful of higher healthcare expenditure if they are hospitalised for hypertension-related complications such as stroke, and may be more conscious of the cost of maintaining their health. 4 Thus, those with better education may be more motivated to adopt self-care practices to avoid such complications. In this study, women had significantly higher mean behaviour domain scores in Singapore and higher self-efficacy scores in Malaysia. Studies have found that women are more likely to adopt behaviours leading to favourable lifestyle changes and to have the self-efficacy to monitor their BP. 38 40 It has been shown that monitoring BP alone is not enough to improve cardiovascular outcomes. 9 Self-care is not just an individual responsibility to care for one's own health. 41 Based on this study, self-care may be supported by education policy and the healthcare system 23 through better access to education and by reducing gaps in health inequalities, that is, those of ethnicity and gender. Holistic management of hypertension is multi-faceted, including a behaviour change approach and raising motivation levels.
Enhancing self-efficacy to actualise self-care is one prerequisite for cost-effective and optimal long-term control of an individual's BP. --- Strengths and limitations The unprecedented cross-national comparison of patients with hypertension who share a similar sociocultural background but different economic and health financing environments contributes to this study's strength. This is a comparison between two study populations from two different countries, not a comparison between the two countries themselves. It adds to the literature on the association between self-efficacy, self-care and BP treatment goal achievement. The large sample size enabled in-depth analyses of the individual domains of the HTN-SCP. This study is not without limitations. The cross-sectional study design does not allow causal relationships to be determined. Differences in assumptions (ie, how conservative they were) resulted in different sample size calculations in the two countries, and the non-response rates adopted affected the sample size in each study. The smaller sample size in one country may have inadequately powered the study. As for recruitment, selection bias is inherent in the convenience sampling used in one study centre, suggesting that the results may not be generalisable to the national population. For these reasons, the comparison of HTN-SCP domain mean scores between the two countries must be interpreted with caution. We excluded an essential social variable, household income, owing to differences in how socioeconomic categories are determined in each country: Malaysia has a national standard that categorises actual household income into three levels (ie, low-income, middle-income and upper-income), whereas Singapore uses tax payment or housing type to ascertain this. --- CONCLUSION Patients with hypertension in the Singaporean study population have a better overall self-care profile across behaviour, motivation and self-efficacy.
In both study populations, Indian ethnicity and tertiary education were predictors of higher self-care scores. Self-efficacy and skills in self-care are potentially modifiable. Future interventions to improve self-care among people with hypertension may need to be tailored to their behaviour, motivation and self-efficacy levels. This study's findings may be of interest for public health measures to tackle health inequality in multi-ethnic settings globally. --- Competing interests None declared. Patient consent for publication Not required. --- Ethics approval We obtained ethical approval from the Medical Research & Ethics Committee of the Ministry of Health Malaysia (NMRR-17-1508-36071). The study was approved by the SingHealth Centralised Institutional Review Board (CIRB reference number: 2017/2197). Provenance and peer review Not commissioned; externally peer reviewed. Data availability statement Data are available upon reasonable request. The data used in this study are available from Universiti Putra Malaysia and SingHealth Polyclinics. These data are not publicly available; therefore, restrictions apply as to their availability. However, the datasets used and/or analysed during the current study are available from the corresponding author on reasonable request and with permission from Universiti Putra Malaysia and SingHealth Polyclinics. Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content.
Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.
Comparing and determining factors associated with hypertension self-care profiles of patients in two multi-ethnic Asian countries: cross-sectional studies between two study populations.
Introduction Studies in both developed and developing countries have shown that education is an effective strategy for escaping poverty, as better-educated individuals earn higher wages, experience less unemployment, and work in better occupations than their less-educated counterparts [1][2][3]. Though studies on the economic value of education for people with disabilities, particularly in developing countries, are rare, some studies have found education to be crucial not only in increasing the employability of this group but also in improving their occupational options, for example by providing the opportunity to obtain white-collar or full-time jobs [4]. However, the value of education for people with disabilities is not widely recognized, especially in many developing countries. The prevailing belief is still that even if people with disabilities are educated, they will be less likely to make use of their education, or will not be useful in the workforce [5]. One of the major obstacles to challenging this notion is the limited number of empirical studies on disability and the nexus between education and labor-market participation, resulting mainly from a lack of credible data. This is particularly the case for low- and middle-income countries (LMICs), which have significantly limited information on the socioeconomic status of people with disabilities [6,7]. On the other hand, returns to investment in education have been quantified for non-disabled people since the late 1950s [8][9][10]. There have also been numerous studies showing the link between education and employment for females. Several studies observed that, compared to their male counterparts, female participation in the labor market appears to depend much on the social environment in developing countries [11][12][13].
This implies that for disadvantaged or marginalized groups, such as people with disabilities, ethnic minorities, females, or even migrants, labor-force participation is not only determined by levels of education, but is also influenced by discrimination and the support they receive in their direct environment. Furthermore, there are some studies on the labor-market participation of people with disabilities in LMICs. For example, Filmer [14] stated that young people with disabilities are less likely to start school and, in some countries, have lower transition rates resulting in lower attainment. This study went on to observe that disability status has a stronger effect on school enrolment and participation than do gender and other socio-economic statuses. Likewise, Mitra and Sambamoorthi [15] compared wage disparities between males with and without disabilities in Tamil Nadu in India. Their study suggested that differences in education across disability statuses and labor-market discrimination were among the factors accounting for the employment gap between males with and without disabilities. They also examined the magnitude and determinants of wage differentials by disability status in the context of an agrarian labor market in India [16]. As a cross-country analysis, Mizunoya and Mitra [17] examined differences in employment rates between persons with and without disabilities in 15 developing countries and showed that people with disabilities have lower employment rates than persons without disabilities in nine countries. There have been some studies examining the returns to investment in education for people with disabilities. Hollenbeck and Kimmel [18] performed studies in the US; Lamichhane and Sawada [19] for Nepal; Albert et al. [20] for the Philippines; Pinilla-Roncancio et al. [21] for Latin American countries; and Tiwari [22] for Sub-Saharan Africa.
Stern [23] examined the problems of measurement and endogeneity when creating a definition of disability for census-taking purposes, while DeLeire [24,25] and Hotchkiss [26] investigated employer discrimination in the labor market. The above study in Nepal [19] provides evidence on the return of education for persons with disabilities by over 19.3 percent. Their estimated return is significantly higher than that of their non-disabled counterparts (See Figure 1 for the return to investment in education in different regions) [27], but none of the studies have considered gender and disability in estimating the return of education. Although the literature has shed light on many aspects of disability, education, and employment in the developing world, studies examining the labor-force-participation gap between males and females with disabilities are rare. Labormarket participation of females with disabilities is challenging due to the possible double disadvantage they face, first as females and then as females with disabilities. Merits 2023, 3, FOR PEER REVIEW 2 or even migrants, labor-force participation is not only determined by levels of education, but is also influenced by discrimination and the support they receive in their direct environment. Furthermore, there are some studies on the labor-market participation of people with disabilities in LMICs. For example, Filmer [14] stated that young people with disabilities are less likely to start school, and, in some countries, have lower transition rates resulting in lower attainment. This study went on to observe that disability status has a stronger effect on school enrolment and participation than do gender and other socio-economic statuses. Likewise, Mitra and Sambamoorthi [15] compared wage disparities between males with and without disabilities in Tamil Nadu in India. 
Their study suggested that differences in education across disability statuses or labor-market discrimination were among the factors accounting for the employment gap between males with and without disabilities. They also examined the magnitude and determinants of wage differentials by disability status in the context of an agrarian labor market in India [16]. As a cross-country analysis, Mizunoya and Mitra [17] examined differences in employment rates between persons with and without disabilities in 15 developing countries and showed that people with disabilities have lower employment rates than persons without disabilities in nine countries. There have been some studies examining the return to investment in education for people with disabilities. Hollenbeck and Kimmel [18] performed studies in the US; Lamichhane and Sawada [19] for Nepal; Albert et al. [20] for the Philippines; Pinilla-Roncancio et al. [21] for Latin American countries; and Tiwari [22] for Sub-Saharan Africa. Stern [23] examined the problems of measurement and endogeneity when creating a definition of disability for census-taking purposes, while DeLeire [24,25] and Hotchkiss [26] investigated employer discrimination in the labor market. The above study in Nepal [19] provides evidence on the return of education for persons with disabilities by over 19.3 percent. Their estimated return is significantly higher than that of their non-disabled counterparts (See Figure 1 for the return to investment in education in different regions) [27], but none of the studies have considered gender and disability in estimating the return of education. Although the literature has shed light on many aspects of disability, education, and employment in the developing world, studies examining the labor-force-participation gap between males and females with disabilities are rare. 
Labor-market participation of females with disabilities is challenging due to the possible double disadvantage they face, first as females and then as females with disabilities. Therefore, this paper aims to at least partially fill this gap in existing knowledge by comparing estimates of the wage return to education for males and females with disabilities in the Philippines. The central research question posed in this paper is thus empirical: does gender have any effect on the return to investment in education for persons with disabilities? We believe that the empirical work in this paper will help governments and other concerned authorities design policies to mitigate poverty among females with disabilities, who are regarded as one of the most underserved groups. There are some important features of this study. To begin with, we use data on persons with hearing, physical, or visual difficulties living in Metro Manila in the Philippines. The dataset was jointly collected by the Institute of Developing Economies (IDE) and the Philippine Institute for Development Studies (PIDS), using carefully structured questionnaires. The Philippines was ranked 116th in human development in 2021 [28] and lags in many human-development indices, but, surprisingly, the situation of females in general is favorable, even compared to developed countries. According to The Global Gender Gap Report 2022 by the World Economic Forum [29], the Philippines is among the top five countries in the world for female participation in economic activities, female educational attainment, political empowerment, and access to other opportunities. The Philippines has maintained a higher rate of female labor-market participation than other countries in Asia.
Due to this unique situation for females in this country, we are interested in seeing whether labor-market participation is the same for females with disabilities as for their male counterparts. Furthermore, to carefully check the dual effects of gender and disability on the returns to investment in education, various estimation methods are utilized. First, estimations are completed with (i) standard ordinary least squares (OLS) and (ii) Type-1 Tobit, where education is defined as a continuous variable and interaction variables of sex with each disability are included. Second, redefining education as a discrete variable, we examine the role of the signaling effect in the returns to education. Third, we employ quantile regression for each conditional quantile of the wage distribution rather than mean regression alone, which enables us to examine the relationship between schooling and wages in more detail, and in particular to check whether schooling has an impact across levels of wage inequality. Beyond these methodological aspects, the topic of this study itself can be regarded as important. As stated above, to the best of our knowledge, there are no studies examining whether gender affects the wage returns to investment in education among persons with disabilities participating in the labor market. While it is generally accepted that females with disabilities face a double disadvantage, first as females and then as persons with disabilities, the higher rate of female participation in the Philippines's labor market indicates that there is no negative effect of gender for the general population. In this context, it is therefore important to examine whether disability has any negative effect on females' labor-market outcomes. We hypothesize that the higher rate of labor-market participation by Filipino females is not mirrored by females with disabilities.
It is plausible that, due to the impairment, parents put less faith in their children with disabilities and thus give them lower priority for education compared to their children without disabilities. Consequently, females with disabilities suffer from having fewer years of schooling, which may result in lower wage returns to education compared to their male counterparts with disabilities. Studies such as Lamichhane and Takeda [30] have shown that parents' positive understanding of their children's disability is correlated with further years of education. As we are in the midst of implementing the Sustainable Development Goals (SDGs), building human capital for females with disabilities should be given equal footing with other central development goals. From this perspective, this study is relevant in providing important new insights regarding the role of education in the labor-market participation of females with disabilities. The structure of this paper is as follows: Section 2 presents the dataset from the Philippines; Section 3 describes the empirical strategy; the results and findings are discussed in Section 4; and finally, Section 5 presents the concluding remarks.
--- Dataset from Metro Manila in the Philippines
We use a dataset on disability collected jointly by IDE in Japan and PIDS. The field survey was conducted in Metro Manila in the Philippines in August 2008. Metro Manila, the capital region of the Philippines, is composed of seventeen cities known as Local Government Units (LGUs). Among the seventeen LGUs, Makati, Pasay, Quezon, and Valenzuela were selected for this survey; they represent a spectrum of Metro Manila. The sample was randomly selected. In Metro Manila, each city comprises many villages/towns, called barangays in the Philippines. A barangay is the smallest political unit in the Philippine government system, and the population of each barangay varies.
Importantly, each of the cities had a sufficient number of people with disabilities (PWDs), including those with hearing, physical, and visual impairments. After forming survey units of equal population size from these barangays, the survey team randomly selected the units. The IDE and PIDS collected the data on disability, acknowledging that the Philippines does not have complete registers of PWDs [31]. Considering possible flaws in the potential sampling frames, the survey-management team of IDE utilized the verified National Statistics Office (NSO) list, supplemented by the LGU lists. The initial list of PWDs prepared by the NSO based on the 2000 Census of Population and Housing (CPH) results was verified by the LGU partners with the help of research staff from the PIDS. Lists of PWDs from LGUs are basically administrative registers recently developed by the local social-welfare units, which take the lead in providing services to PWDs within the locality. A total of 360 PWDs were targeted to be sampled with the assistance of the NSO, with 120 PWDs representing each of the three types of impairment. In this survey, physical impairment refers to the loss of one or both legs/feet or of one or both arms/hands. Visual impairment refers to total or partial blindness or low vision. Hearing impairment refers to total or partial deafness or being hard of hearing. For the sampling operations, neighboring barangays (i.e., villages) in each of the four cities were formed into groups in such a way that each group of barangays would have at least 300 of the targeted PWDs residing in those areas. These comprised the primary sampling units (PSUs). At least five PSUs were designed to be selected within each city, with probability proportional to the total number of PWDs. Ten to fifteen PWDs were to be selected within each selected PSU.
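The probability-proportional-to-size selection of PSUs described above can be sketched as follows. This is only an illustration: the PWD counts per PSU and the random seed are hypothetical, not taken from the survey.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical PWD counts per primary sampling unit (PSU) in one city;
# each PSU was formed so that it holds at least 300 targeted PWDs
psu_counts = np.array([310, 450, 520, 305, 390, 610])

# Selection probability proportional to the number of PWDs in each PSU
probs = psu_counts / psu_counts.sum()

# Draw 5 PSUs without replacement, probability proportional to size
selected = rng.choice(len(psu_counts), size=5, replace=False, p=probs)
print(sorted(selected.tolist()))
```

Within each selected PSU, 10 to 15 PWDs would then be drawn in a second stage.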
Taking into account the expected non-response and migration of PWDs, as well as the proposed sampling design, the NSO was tasked to assist in drawing a list of 900 possible respondents in total, and the targeted 300 PWDs for each type of impairment were distributed across the four cities. Based on the structured questionnaires, the survey team interviewed a total of 403 respondents: 125 in Makati City, 122 in Quezon City, 84 in Pasay City, and 72 in Valenzuela City. The survey covers a wide variety of questions, including demographic, education, and labor-market-related information together with information on disability. Questions were also included to capture information on other members of the respondents' households. Prior to the implementation of the final survey, the questionnaire underwent scrutiny by the Statistical Survey Review Clearance System (SSRCS), undertaken by the Technical Committee on Survey Design of the National Statistical Coordination Board (NSCB). The SSRCS is the mechanism through which all surveys and censuses conducted by or for government units in the Philippines are reviewed. Out of 403 participants, we used information on 365 respondents with hearing, physical, or visual impairments to clearly investigate the gender effect within disability in the Philippine labor market. We excluded participants with missing information, as well as those with multiple impairments. Table 1 shows the descriptive statistics. The 365 participants ranged in age from 15 to 67 years, with an average age of 37.8 years. The majority of the respondents (62 percent) were male, and the remaining 38 percent were female. The disabilities of the participants were classified into three categories: hearing, physical, and visual impairments. Among the respondents, 29, 38, and 33 percent had hearing, physical, and visual impairments, respectively.
Participants completed an average of 8.43 years of schooling, and males with disabilities on average completed more schooling than females: 8.77 and 7.87 years, respectively. Irrespective of the type of impairment, only a small percentage of participants (9 percent) completed college. Our results highlight the difference between females with and without disabilities: females with disabilities are less likely to achieve more education than those without disabilities, who are shown to have equal levels of education to their male counterparts in the Philippines [32]. Moreover, we also observe a wage difference: the average wage for females is PHP 50,216, while it is PHP 67,167 for males. Figure 2 presents these figures in a bar chart that shows the magnitude of the gender gap. The data also include information on the age of onset for the three types of impairment, revealing that the average age of onset of physical and visual impairments is 23.1 and 26.2 years, respectively. Hearing impairments were categorized according to the linguistic approach, i.e., born deaf (57%), onset before 3 years old (23%), or after 3 years old (14%). The survey also reveals that 33, 30, 19, and 18 percent of the respondents were from Makati, Quezon, Valenzuela, and Pasay, respectively.
--- Empirical Strategies
--- Mincerian Wage Equation with Continuous Education
To establish the empirical setting, we estimate the return to education, first defining education as a continuous variable (grades of schooling completed) and regressing log earnings on years of schooling:

log W_i = α + β S_i + γ X_i + Σ_k δ_k Y_ik + ε_i  (1)

Equation (1) is the standard Mincerian wage equation used by existing studies [3,8,9], with the underlying assumption that the return to schooling is the same across attainment levels. Starting with the OLS model of earning functions for male and female respondents, a linear relationship is specified in Equation (1), where log W_i is the log of individual i's earnings and α is the intercept. S_i denotes years of schooling, and β represents the return to education, i.e., how much the wage rate increases in response to an additional year of schooling. X_i is a set of covariates for each person, γ is its coefficient vector, and ε_i is an error term. Using these specifications, we obtain baseline estimates. However, one potential econometric problem is that the cross-sectional correlation between education and earnings may differ from the causal effect of education, owing to correlation between years of education and the error term, which involves unobserved factors such as ability.
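As a sketch, the baseline OLS fit of Equation (1) can be illustrated on simulated data. Everything numeric here is hypothetical: the assumed 0.25 return, the covariate effects, and the dummy coefficients are invented for the simulation and are not the survey estimates.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 365  # sample size matching the survey

# Simulated data: schooling years S_i, a covariate (age), and five
# impairment-by-sex dummies Y_ik (baseline: males with visual impairments)
S = rng.integers(0, 17, n).astype(float)
age = rng.integers(15, 68, n).astype(float)
Y = rng.multinomial(1, [1 / 6] * 6, size=n)[:, 1:]  # drop baseline column

true_beta = 0.25  # hypothetical return to education used to generate wages
logW = (8.0 + true_beta * S + 0.01 * age
        + Y @ np.array([-0.1, -0.2, -0.3, -0.5, -0.6])
        + rng.normal(0, 0.5, n))

# OLS: regress log wage on an intercept, schooling, age, and the dummies
X = np.column_stack([np.ones(n), S, age, Y])
coef, *_ = np.linalg.lstsq(X, logW, rcond=None)
print(f"estimated return to schooling: {coef[1]:.3f}")
```

With this design, the coefficient on schooling recovers the value used in the simulation, since here the error term is independent of schooling; the endogeneity problem discussed next arises precisely when it is not.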
In order to mitigate endogeneity concerns in this context of gender and disability, we take the following steps. First, disability-specific dummy variables for each gender (visual, hearing, and physical impairments) are included to carefully identify disparities between males and females with disabilities. Y_ik (k = 1...5) is a set of dummy variables with males with visual impairments as the baseline; Y_i1 and Y_i2 represent hearing and physical impairments in males, respectively, while Y_i3, Y_i4, and Y_i5 represent visual, hearing, and physical impairments in females, respectively. Second, since schooling years may be endogenous and existing studies have shown the possibility of inconsistent parameter estimates, the use of instrumental variables is preferable for credibility [8,33,34]. In examining the return to education, there are several candidate instruments. Family-background variables are among the credible instruments; Trostel et al. and Söderbom et al. used parents' educational levels [35,36]. For disability and the return to education, the age at which the individual became impaired can also be utilized as an instrument; Lamichhane and Sawada controlled for endogenous bias arising from schooling decisions by employing this novel instrument [19]. We use parents' years of education as the family-background instrument in our IV estimation. We did not use the age at which the individual became impaired, because in this dataset the onset for those with hearing impairments is classified only as at birth, before 3 years old, or after 3 years old, so the exact age of onset is not available, making this classification unsuitable for our analysis. Another econometric consideration is sample-selection bias.
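The IV strategy with parents' education as the instrument can be sketched as a manual two-stage least squares on simulated data. All coefficients and the data-generating process are hypothetical, chosen only so that unobserved ability drives both schooling and wages and thus biases OLS.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 365

# Simulated endogeneity: unobserved ability raises both schooling and wages
ability = rng.normal(0, 1, n)
parents_edu = rng.integers(0, 13, n).astype(float)  # instrument: parents' schooling
S = 4 + 0.5 * parents_edu + 1.5 * ability + rng.normal(0, 1, n)
logW = 8.0 + 0.25 * S + 0.4 * ability + rng.normal(0, 0.5, n)

ones = np.ones(n)

# First stage: schooling on the instrument (plus intercept)
Z = np.column_stack([ones, parents_edu])
pi, *_ = np.linalg.lstsq(Z, S, rcond=None)
S_hat = Z @ pi

# Second stage: log wage on fitted schooling
X2 = np.column_stack([ones, S_hat])
beta_iv, *_ = np.linalg.lstsq(X2, logW, rcond=None)

# Naive OLS for comparison (biased upward by ability)
X_ols = np.column_stack([ones, S])
beta_ols, *_ = np.linalg.lstsq(X_ols, logW, rcond=None)

print(f"OLS: {beta_ols[1]:.3f}, 2SLS: {beta_iv[1]:.3f}")
```

Note that valid 2SLS standard errors must be computed from the structural residuals, not from this naive second-stage regression; in practice a dedicated IV routine would be used.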
Since many people with disabilities in the Philippines are unemployed, we cannot ignore the endogeneity problem arising from labor-market participation decisions. In order to control for sample-selection bias, we employed Amemiya's Type 1 Tobit model with endogenous regressors [37]. However, we acknowledge that we do not control for general health conditions (e.g., nutritional status, height, chronic disease), which may affect females more than males.
--- Discontinuous Wage Earnings and the Signaling Effect
The return to education does not necessarily increase in a continuous, linear fashion; wages may increase or decrease discontinuously. We therefore define another equation, different from the Mincerian earning function described in Section 3.1, that relaxes the assumption of a linear return to additional years of education. In this analysis, we also check for a signaling effect in the return to education, determining whether obtaining a diploma serves as a signal of productivity. The signaling model [38][39][40] suggests that being certified as having completed an educational course is likely to reveal more to an employer about a worker's ability and productivity than a record of how many years the person attended classes. The studies mentioned above have clarified the signaling effect, and a smaller body of literature compares the signaling effect for persons under double disadvantage, some of it focusing on socially disadvantaged people. However, to the best of the authors' knowledge, no paper has examined the signaling effect focusing on the double disadvantage arising from gender and disability, especially in developing countries. The analysis in this section therefore provides an insight distinct from that of Section 3.1.
If the general model of signaling is reasonably linked with differences in finding jobs, earning wages, or gaining promotions, it is expected that the return for females with disabilities becomes higher if they obtain diplomas rather than drop out, while males with disabilities, who are still viewed more favorably by the labor market, may enjoy a constant level of return even if they drop out. Thus, we also check whether obtaining diplomas has a different effect on wages for males and females with disabilities. Hungerford and Solon [41] proposed two earning functions to capture possible signaling effects. The first is a spline function, which assumes that log earnings grow linearly with schooling, while the slope depends on the level of education completed, i.e., elementary school, high school, or university. The second is a step function that treats log earnings as a function of years of education, with a separate step for each year, without specifying a particular functional form. For greater flexibility, we adopt a step function for our analysis: we first classify each educational level to check the possibility of nonlinear schooling returns, then form 10 groups according to educational attainment for each gender to assess the extent to which the return changes discontinuously, based on the categories below:
1. individuals with no education;
2. individuals who do not complete either elementary or high school (this indicator represents a lower educational certificate dummy);
3. individuals who graduate from elementary or high school and obtain the corresponding diploma;
4. individuals who do not complete higher education, such as college, university, or graduate school; and
5. individuals who graduate from college, university, or graduate school.
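The 10-group dummy coding that follows from this classification (5 attainment categories for each gender, with females with no education as the dropped baseline) can be sketched as follows; the example observations are hypothetical.

```python
import numpy as np

# Hypothetical attainment categories (1-5, as in the list above) and sex (1 = male)
categories = np.array([1, 2, 3, 4, 5, 1, 2, 3, 4, 5])
male = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

# 10 groups: group 0 = females with no education (baseline, dropped)
group = (categories - 1) + 5 * male   # values 0..9
D = np.eye(10)[group][:, 1:]          # 9 dummy columns D_i1 .. D_i9

print(D.shape)
```

The baseline observation gets an all-zero dummy row, so its wage level is absorbed by the intercept, and each remaining coefficient measures a group's wage gap relative to females with no education.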
Using the above classification, we add the new specification D_is (s = 1...9), designating females with no education as the baseline and using educational-level dummy variables to measure the effect of both higher and lower levels of education in Equation (2) below:

log W_i = α + Σ_s θ_s D_is + γ X_i + Σ_k δ_k I_ik + ε_i  (2)

where the dependent variable is the natural logarithm of annual earnings, and the data on highest qualifications enable the dummy variables to be defined for both males and females. Of particular interest is whether there exists any difference between the effects of the lower/higher certificate dummies and the not-completed dummies. Unlike most studies, the signaling effect in our analysis is not estimated indirectly from nonlinear wage returns to years of schooling that correspond to the usual time taken to complete a qualification, as such methods are likely to be biased by measurement error [42]. The carefully structured questionnaires used in this paper directly ask respondents whether or not they completed school, and if so the level from which they graduated, which enables us to search directly for the signaling effect. The θ_s coefficients estimate the marginal effect of each level of education relative to the excluded group with no school qualifications. The effects of disabilities are captured by δ_k (k = 1...3): δ_1 for visual, δ_2 for hearing, and δ_3 for physical impairments. If the empirical findings show a signaling effect, we may conclude that imperfect information possibly exists between employers and employees with disabilities.
--- Quantile Regression
Finally, the last part of our analysis deals with wage inequality separately among males and females with disabilities.
While mean regression yields straightforward interpretations of average effects, this study investigates wage dispersion by employing the quantile regression approach. Since quantile regression analyzes the relationship between the conditional distribution of the response variable and the set of covariates, it offers more detailed insights than the mean regression model; it could be the case that dispersion varies across educational levels, so that schooling affects the wage distribution through this channel. Following Martins and Pereira [43], the quantile regression model is written as Equation (3):

log W_i = β_θ' X_i + u_θi, with Quant_θ(log W_i | X_i) = β_θ' X_i  (3)

where X_i is the vector of exogenous variables and β_θ is the vector of parameters. Quant_θ(log W_i | X_i) denotes the θth conditional quantile of log W_i given X_i. The θth regression quantile, 0 < θ < 1, is defined as a solution to the following problem:

min_{β ∈ R^k} Σ_i ρ_θ(log W_i − β_θ' X_i)  (4)

where ρ_θ(ε) is the check function defined as ρ_θ(ε) = θε if ε ≥ 0 and ρ_θ(ε) = (θ − 1)ε if ε < 0. This can be solved using linear programming, and standard errors are calculated using bootstrap methods [44]. We obtain estimates for different quantiles by setting θ to 0.25, 0.5, and 0.75. The empirical results are obtained by replacing the coefficients in Equations (1) and (2) with those defined in Equation (3); e.g., the standard Mincerian wage equation is replaced by:

log W_i = α_θ + β_θ S_i + γ_θ X_i + Σ_k δ_θk Y_ik + ε_i  (5)

where θ = 0.25, 0.5, 0.75 are the quantiles for our analysis.
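The linear-programming solution of the check-function problem in Equation (4) can be sketched directly: minimize θ·u + (1 − θ)·v subject to y = Xβ + u − v with u, v ≥ 0. The simulated wage data and the 0.25 return are hypothetical; in practice one would use a packaged quantile-regression routine with bootstrap standard errors.

```python
import numpy as np
from scipy.optimize import linprog

def quantile_reg(X, y, theta):
    """Quantile regression via its LP formulation:
    variables are [beta (free), u >= 0, v >= 0] with y = X beta + u - v,
    and the objective theta * sum(u) + (1 - theta) * sum(v) equals the
    sum of check-function losses rho_theta(y - X beta)."""
    n, k = X.shape
    c = np.concatenate([np.zeros(k), theta * np.ones(n), (1 - theta) * np.ones(n)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * k + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:k]

# Simulated wages with a hypothetical "true" return to schooling of 0.25
rng = np.random.default_rng(3)
n = 200
S = rng.integers(0, 17, n).astype(float)
logW = 8.0 + 0.25 * S + rng.normal(0, 0.5, n)
X = np.column_stack([np.ones(n), S])

betas = {theta: quantile_reg(X, logW, theta) for theta in (0.25, 0.5, 0.75)}
for theta, b in betas.items():
    print(f"theta={theta}: return to schooling = {b[1]:.3f}")
```

With homoskedastic simulated errors the slope is similar across quantiles and only the intercept shifts; heterogeneous returns across the conditional wage distribution, as reported below, would instead show up as different slopes at different θ.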
Unlike OLS, the quantile regression model allows for a full characterization of the conditional distribution of the dependent variable.
--- Results and Findings
--- The Results for the Mincerian Wage Equation on Continuous Education
Table 2 summarizes the estimated results of the wage equations modeled in Section 3.1, with the first specification based on OLS estimates. Columns 1 and 6 of Table 2 (specifications (1) and (6)) show a 24.9 percent rate of return to education, which is relatively higher than that for persons without disabilities, as explained in previous studies such as Psacharopoulos and Patrinos [27]. However, these returns are consistent with the returns for persons with disabilities reported in developing countries [19] and developed countries [18]. When controlling for endogenous sample-selection bias using the Tobit model (specifications (2) and (7)), the estimated returns to education become slightly higher. In addition, the returns for IV OLS and IV Tobit become slightly higher still (specifications (3), (4), (8), and (9)). In the test of endogeneity, a Durbin-Wu-Hausman test suggests that schooling years may be endogenous. The Sargan test has been used for over-identification, and we thus do not reject the over-identifying restrictions; although the partial R-squared is around 0.08, which casts some concern over the strength of the instruments, an F statistic over 10 suggests the instruments are strong. Next, to estimate the effect of double disadvantage (i.e., gender and disability), we categorize each impairment type for males and females and define a gender dummy variable (gender-level effect) and interaction dummies for male and female respondents with visual, hearing, or physical impairments. We provide the point estimates of these dummy coefficients in Table 2, with visual impairment and males' visual impairment used as the base outcomes.
A comparison of the coefficients of the different dummy variables across the OLS estimations implies that females have more negative coefficients, and that females with physical impairments are the most seriously and negatively affected in the labor market. The second most severely affected are females with hearing impairments, while the coefficients for both males and females with visual impairments are not statistically significant. This is consistent with the casual observation that many barriers exist in labor markets in the developing world. Lamichhane [45] showed that students with disabilities face problems such as inadequate availability of materials in sign language or Braille or, in the case of those with physical impairments, inaccessible buildings. Lamichhane and Okubo [4] further discussed the labor-market participation of people with disabilities in Nepal and the role of education; they found that people with physical impairments have lower levels of labor-market participation than their visually impaired counterparts, and argued that disabling barriers were the most serious constraints for these people. Our findings in the Philippines suggest that the situation is likely the same. As long as disabling barriers are not removed, through the provision of facilities for communication including sign language and other supports based on the reasonable accommodations outlined in the Convention on the Rights of Persons with Disabilities, education alone may not be sufficient, particularly for those with severe impairments. On the other hand, our findings indicate a decreased likelihood of persons with visual impairments getting a job regardless of gender. This finding differs from those reported by Lamichhane and Okubo [4] and Lamichhane [46] in Nepal, where teaching has been promoted by the government's affirmative action plans as a main job for educated individuals with visual impairments.
This study indicates that some kinds of jobs promoted by the government's affirmative policies may not be available to this group in the Philippines. From the survey questionnaires on the job distribution of respondents with impairments, we find that a large portion of participants with visual impairments work in the massage and acupuncture sectors: around 65 percent of persons with visual impairments work as masseurs, while no comparable concentration in particular jobs was found for persons with hearing or physical impairments. A similar situation is reported in the Country Report of the Philippines, which identifies massage as a dominant source of employment for people with visual impairments [47]. The figures in parentheses are robust standard errors. The coefficients with ***, **, and * are statistically significant at the 0.01, 0.05, and 0.10 levels of probability, respectively (*** p < 0.01, ** p < 0.05, * p < 0.1). Specifications (4) and (9) are based on the first-stage regressions (5) and (10). The default categories: Dummy = 1 if visually impaired in (1)-(5), Dummy = 1 if visually impaired*male in (6)-(10), and Dummy = 1 if in the Pasay area in specifications (1)-(10). * in the variable names represents interactions.
--- The Results for Discontinuous Wage Earnings and the Signaling Effect
The findings on discontinuous wage earnings in the return to education are shown in Table 3. As defined in Section 3.2, we relax the assumption of linear educational returns and categorize each educational level in order to check the possibility of nonlinear schooling returns for all respondents. Subsequently, we use the lower and higher educational-diploma dummy variables, for those who graduated and obtained diplomas, and the not-completed dummy variables, e.g., for a person who left school during the lower or higher educational stage before obtaining a diploma, for both males and females (specifications (3) and (4)).
Table 3 compares the different educational-level specifications; several points emphasize the results. In each educational-level setting, we observe a clear difference in educational returns (specifications (1) and (2)). Next, we check the differences using the educational-diploma and not-completed dummy variables (specification (3)). First, relative to females with no education, the coefficients on levels of education for females are positive and statistically significant only when their educational attainment includes a lower or higher diploma (2.39 and 3.56, respectively). If they do not complete these educational levels, the same result is not obtained. Second, the coefficients on levels of education for males are always positive and statistically significant, even if they drop out before obtaining a diploma (2.08, 1.87, 2.95, and 4.55 for lower not completed, lower diploma, higher dropout, and higher diploma, respectively). Third, the increase in educational returns indicates a convex relationship between education and wages. Moreover, coefficients at all education levels are significantly higher for males than for females, except at the lower-diploma level. When the Tobit model is employed (specification (4)), equivalent results are obtained. Considering all of these findings, it can be argued that the disadvantage might be profound for females with disabilities: obtaining a diploma may reduce asymmetric information, while not completing school reduces earnings only for females, which may be a barrier that excludes females with disabilities from participating in the labor market.
Furthermore, the result for males with disabilities in our analysis (i.e., the increasing convexity of the earning function) is consistent with the existing literature, as Schady [48] found convexity and a signaling effect in the earning function for Filipino males without disabilities. These findings lead to further questions about the possibilities for Filipino females without disabilities; this is important to address, because the Philippines represents a unique case in which females receive more schooling than males.
--- The Results for Quantile Regression
We present the results of the quantile regression for the models of Sections 3.1 and 3.2, i.e., the model of continuous educational returns and the model of educational attainment levels, which relaxes the assumption of a linear increase in wages. In Table 4, we show the regression results for the specified quantiles, i.e., 0.25, 0.50, and 0.75. Our analysis reveals several characteristics of the returns to education and of the effect of gender and disability on the conditional wage distribution that appear in the quantile regression.
We first show the quantile regression estimates with gender and each disability dummy variable, to check for the possibility of inequality within levels. In specification (1), the estimated return to schooling varies from 42 percent at the 0.25 quantile to 10.06 percent at the 0.75 quantile. We then present the quantile regression coefficients corresponding to Equation (1) in Section 3.1. As reported in Section 3.1, the average estimated educational return is 20.4 percent, whereas in specification (2) the return reaches 29.7 percent at the 0.25 quantile and 10.4 percent at the 0.75 quantile. Returns to education are thus higher at lower points of the conditional wage distribution. This suggests heterogeneity in the return to education, which is larger for individuals in the lower quantiles of the conditional wage distribution. This result is not yet well explained by the existing literature, most of which reports that schooling returns are higher for more educated and more skilled individuals [42]. Put differently, lower-wage workers obtain larger educational returns. Another important finding concerns the disparity in the coefficients on the disability dummies for males and females. Using OLS as the baseline, we see large differences across quantiles at different points of the wage distribution. At the lower end of the distribution, the most severe case is that of females with physical impairments, for whom the coefficient is statistically significant and falls below the average estimated return in Section 4.1, while the least severity is observed at the top of the conditional distribution. Similar patterns hold for the other impairment groups, regardless of gender. Likewise, our analysis reports the quantile regression coefficients corresponding to Equation (2) in Section 3.2, which models discontinuous wage earnings.
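The mechanism behind these quantile-specific returns can be illustrated in miniature: quantile regression minimizes the pinball (check) loss, and in the intercept-only case that minimization recovers the empirical quantile itself. A sketch on synthetic data (not the survey sample):

```python
import numpy as np

# Minimal illustration of the pinball (check) loss behind quantile regression:
# minimizing it over a constant recovers the empirical quantile, the mechanism
# that quantile-regression estimators generalize to covariates.
def pinball(y, c, q):
    r = y - c
    return np.mean(np.where(r >= 0, q * r, (q - 1) * r))

rng = np.random.default_rng(1)
y = rng.normal(10.0, 2.0, size=5000)  # synthetic "log wages"

# Grid-search the constant that minimizes the loss at each target quantile.
grid = np.linspace(y.min(), y.max(), 1001)
best = {q: grid[np.argmin([pinball(y, c, q) for c in grid])]
        for q in (0.25, 0.5, 0.75)}
```

The three minimizers line up with the 0.25, 0.50, and 0.75 sample quantiles, mirroring how Table 4 reports separate coefficient vectors at each quantile of the conditional wage distribution.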
A remarkable finding is the coefficient of each educational level at the 0.25 quantile. For males, the coefficients at each education level are relatively larger and more significant in the bottom tail of the distribution than the estimates from Section 3.2; for females, the coefficients are larger and statistically significant only when a lower or higher diploma is obtained. An implication of our results is that the signaling effect may appear in the lower part of the conditional distribution, suggesting that the effects of asymmetric information tend to increase there.
--- Concluding Remarks
In this paper, we estimate the gender effect on the return to investment in education among individuals with hearing, physical, and visual difficulties in the Philippines. After adjusting for sample selection to address endogenous labor participation and accounting for the endogeneity of schooling decisions, our estimations indicate a remarkably high rate of return to education, ranging from 24.9 to 38.4 percent. However, upon introducing disability dummy variables for each gender, we observe a compounded effect of double disadvantage (gender and disability) in the labor-market participation of females with disabilities. Furthermore, our examination of potential nonlinear schooling returns suggests that the impact of disability is more pronounced for females than for their male counterparts. These findings point to the existence of a double disadvantage and a signaling effect for females with disabilities. Moreover, the wage disadvantage associated with disability and gender is disproportionately distributed within the population: while the return to education is higher at lower points of the distribution, the coefficients on the disability dummies for females are most severe at the lower end.
The sizable gender gap in labor-force participation for females with disabilities after education indicates that education alone does not translate into labor-market returns for them in the same way as it does for their male counterparts. Our research therefore underscores the importance of not only enhancing educational opportunities but also significantly improving employment prospects, particularly for females with disabilities. This necessitates the implementation of equal-opportunity provisions, such as antidiscrimination measures, an expansion of the quota system to enhance employment prospects, addressing accessibility issues, and subsidizing private-sector employment. Additionally, our study highlights the importance of adopting an intersectionality framework. As posited by Brown and Moloney [49], females with disabilities face greater workplace disadvantages than males with disabilities and than those without disabilities, irrespective of gender. In recent years, there has been increased awareness of intersectionality, emphasizing the urgent need to understand and address the multiple forms of inequality and discrimination arising from both disability and gender. It is crucial not to overlook these aspects in the formulation and implementation of policies aimed at increasing labor-market participation for females with disabilities [50]. Finally, our study is limited to an urban area of the Philippines, with a dataset comprising only persons with disabilities. We therefore suggest further research in this area utilizing a nationally representative dataset, considering the Washington Group on Disability Statistics instruments, including the recently developed Washington Group/ILO Disability Module. As Lamichhane et al.
[51] emphasized, efforts should be made to design surveys allowing data disaggregation by disability status and a range of context-relevant demographic characteristics and equity dimensions (such as employment among youth with disabilities disaggregated by age, sex, gender identity, ethnicity, race, disability type, socio-economic status, and sexual orientation) to gain a deeper understanding of labor-market gaps by disability and gender.
--- Data Availability Statement: The data presented in this study are available on request from the corresponding author.
--- Author Contributions: Conceptualization, K.L.; methodology, T.W.; software, T.W.; validation, K.L. and T.W.; formal analysis, T.W.; investigation, K.L. and T.W.; data curation, K.L. and T.W.; writing-original draft preparation, K.L.; writing-review and editing, K.L. and T.W.; visualization, T.W.; supervision, K.L.; and project administration, K.L. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
--- Institutional Review Board Statement: This survey was approved by the National Statistical Coordination Board (NSCB) of the Philippines under "NSCB Approval No. PIDS-0815-01". It is important to note that the NSCB later merged into the Philippine Statistics Authority, establishing the legitimacy of the survey as recognized by the Philippine statistical authority. Further details on this organizational change can be found here: https://openstat.psa.gov.ph/Metadata/PSA-Structure-and-Organization (accessed on 20 August 2023). Furthermore, the survey adheres to ethical principles to safeguard the privacy and rights of the participants. Participants were informed that their involvement in the survey is entirely voluntary, and they may choose to participate or withdraw without facing any negative consequences.
Furthermore, it is essential to emphasize that any information provided by the respondents will be treated with the utmost confidentiality. The collected data will be used exclusively for research purposes, and respondents' identities will not be disclosed in any document resulting from this survey.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
--- Conflicts of Interest: The authors declare no conflict of interest.
Utilizing a dataset from Metro Manila in the Philippines, we estimate the impact of gender on the return to education for individuals with disabilities, specifically focusing on visual, hearing, and walking difficulties. Controlling for sample selection to address endogenous labor participation and accounting for the endogeneity of schooling decisions, our estimations reveal a significant rate of return to education, ranging from 25.7% to 38.1%. Importantly, examining the potential for nonlinear schooling returns, we observe a more pronounced effect of disability for females than for their male counterparts, suggesting the presence of dual discrimination and signaling effects for females. Our research emphasizes the urgency for the Philippine government not only to improve educational opportunities but also to enhance employment prospects, particularly for females with disabilities. Policy recommendations include equal-opportunity measures such as antidiscrimination policies, an expanded quota system to boost employment opportunities, efforts to address accessibility issues, and subsidies for private-sector employment, all of which are necessary for the economic empowerment of females with disabilities.
--- Introduction
Our understanding of how disease, knowledge, and many other phenomena spread through a population can often be improved by investigating the population's social or other contact structure, which can be naturally conceptualized as a network (Newman, 2002; Pastor-Satorras et al., 2015). In the case of human populations, this contact structure is often gathered through the use of questionnaires or surveys that typically ask respondents to name some of their contacts (Burt, 1984; Holland & Leinhardt, 1973). Generating population-level network structures from such data requires one of two possible approaches (Marsden, 2005). One approach is to delineate a population of interest, interview every person in the population, and collect unique identifiers for each respondent's contacts; this allows the mapping of the true sociocentric network within that population. The alternative is to sample the population of interest and collect information about each respondent and his or her contacts; this results in a collection of egocentric networks from that population. Either approach enables the extraction of network features that can be used to fit a graph model, such as one of the models in the family of exponential random graphs (ERGMs) (Lusher et al., 2012), which allows the subsequent generation of network graphs consistent with the fitted features of the observed networks. The features that may be extracted from egocentric networks are however quite limited, making sociocentric networks the preferred design, resources allowing. Both egocentric and sociocentric approaches can place a considerable burden on the respondent to recall numerous contacts and describe each in detail (McCarty et al., 2007).
As a result, most sample survey questionnaires, in both egocentric and sociocentric designs, limit the contacts sought from a respondent, for example by the content, intimacy level, geographic location, or time frame of the relationship elucidated (Campbell & Lee, 1991). A common method is to limit the number of contacts a respondent describes. This may be done directly, e.g. by asking "who are your five closest friends with whom you regularly socialize?" It may also be done indirectly, e.g. by asking "who are the friends with whom you socialize" but then only asking follow-up questions about the first five named (Burt, 1984;Kogovsek et al., 2010). A less-common variant of the second approach is for the interviewer to ask follow-up questions on a random subset of named contacts. All of the above approaches potentially lead to truncation of the number of observed contacts. There is longstanding concern within the sociological literature that such truncation might affect estimates of network properties, including various forms of centrality (Holland & Leinhardt, 1973). However, there are countervailing resource and data quality benefits to avoiding respondent and interviewer fatigue via truncation (McCarty et al., 2007). While investigating the effect of degree truncation on observed structural properties of networks is an important problem, substantive interest often lies in making inferences about how a dynamical process on the network, such as the spread of an infectious disease, might be affected by truncation. Surprisingly, while both the impact of degree truncation on structural properties of networks and the impact of structural properties on the spread of a dynamic process through a networked population have been investigated, the joint implications of the two processes have not yet been elucidated. 
To integrate key ideas from the two corpora, we review first the literature on the impact of truncating reported contacts on structural network properties, and second the literature on the impact of structural network properties on spread dynamics, to arrive at hypotheses regarding how truncation might change expected spreading process outcomes. While our work is motivated by epidemic disease processes, our analysis should be applicable to any process that can be modeled using compartmental models of a spreading process. We test the predictions of our hypotheses with simulation models using both synthetic, structured networks, and empirically observed networks. Spreading processes on networks can be modeled on ensembles of networks (Jenness et al., 2015), using ERGMs or in a Bayesian framework (Goyal et al., 2014). However, using this modeling approach to explore the impact of truncation would conflate two processes: the truncation process and the network generation process. In order to focus on the former, we generate multiple realizations of synthetic full-network datasets with specific network properties, and additionally utilize a collection of empirically observed sociocentric networks that can be interpreted as multiple network realizations from a larger meta-population. As a result, we are able to isolate the effect of degree truncation and explore its impact on predictions of spreading processes on networks with very different structural properties. --- The impact of contact truncation on structural network properties Limiting the number of connections (alters) reported by a respondent (ego) is known as a fixed choice design (FCD) (Holland & Leinhardt, 1973). This limitation right-censors (imposes an upper bound on) an ego's out-degree (the number of alters nominated by an ego). In sociocentric studies out-degree truncation may in turn reduce the in-degree of others, because some true incoming ties may end up unreported due to the constraints on out-degree. 
Sociocentric networks are commonly analyzed as undirected networks in which an edge (or tie) exists between two nodes, i and j, if either node reports it (not least to minimize the impact of underreporting of edges). In such an undirected network, each node's total degree consists of the union of all incoming and outgoing nominations. FCD lowers this total degree in some circumstances, specifically when both i and j fail to report the edge e_ij between them. This can occur only when k_i and k_j are both larger than k_fc, the FCD truncation value, so that both may fail to report e_ij. If k_i and k_j are both larger than k_fc, then whether e_ij is observed depends on how FCD is carried out. FCD can be conducted in two ways, as outlined above. The more common approach of focusing on the first k_fc or fewer names reported (weighted truncation) is likely to bias observation towards stronger contacts, since stronger ties are likely to be more salient to a respondent. Here, e_ij is more likely to be reported if it has higher weight. This approach should thus maximize the proportion of a respondent's social interactions that is captured. The less common approach of drawing a random subset of all named contacts (unweighted truncation) will provide a broader picture of the types of contacts a respondent has, notably increasing the chance of observing weak ties, at the cost of observing a smaller proportion of the respondent's total social interaction. Here, whether e_ij is observed depends on chance. A body of research has highlighted the substantial impact of sampling on network structural properties (Frank, 2011; Granovetter, 1976). For example, a recent study of nine different sampling methods found substantial variability in their ability to recover four structural network characteristics (Ebbes et al., 2015).
FCD is known to affect several network characteristics, but its effects depend on the structure of the complete network graph (Kossinets, 2006); we consider next some key properties (discussed in more depth in Supplementary Content 1).
--- Degree distribution and assortativity
FCD's impact on the network degree distribution is almost always to reduce its mean (insofar as edges are dropped) and its variance (insofar as higher-degree nodes will be forced to underreport outgoing edges, flattening the distribution). This latter effect will be strongest in degree-assortative networks, where both ends of an edge may be unable to report the connection; in contrast, in degree-disassortative networks, edges that might be censored by the high-degree end are likely to be maintained by the low-degree end (Kossinets, 2006; Vázquez & Moreno, 2003). FCD may therefore significantly affect human contact networks, which are typically somewhat degree-assortative (Newman, 2003a). Degree-assortativity itself is not systematically affected by FCD (Kossinets, 2006; Lee et al., 2006), unless individuals preferentially report stronger connections and ties between individuals of similar degree are more likely to be strong (Louch, 2000; Marsden, 1987), in which case FCD may raise degree-assortativity.
https://doi.org/10.1017/nws.2017.30 Published online by Cambridge University Press
--- Clustering
Local clustering can be measured in at least two different ways: (i) triadic clustering: the mean of the local clustering coefficient C_i, where C_i is the proportion of all possible edges between neighbors of node i that are present (Watts & Strogatz, 1998); (ii) focal clustering: the level of global triadic closure, that is, the ratio of triangles to paths of length two (Newman, 2010).
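The two clustering measures can be made concrete on a toy graph (a triangle with one pendant node); the graph below is purely illustrative, not drawn from the study data.

```python
from itertools import combinations

# Toy illustration (hypothetical 4-node graph: a triangle 1-2-3 with pendant
# node 4) of the two clustering measures defined above: the mean local
# clustering coefficient C_i and global transitivity (3 x triangles / paths
# of length two).
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}

def local_clustering(i):
    nbrs = adj[i]
    k = len(nbrs)
    if k < 2:
        return 0.0  # convention: C_i = 0 for degree-one nodes
    links = sum(1 for u, v in combinations(nbrs, 2) if v in adj[u])
    return links / (k * (k - 1) / 2)

# (i) triadic clustering: mean of the local coefficients
mean_local = sum(local_clustering(i) for i in adj) / len(adj)

# (ii) focal clustering: closed triples over all triples (paths of length two)
closed_triples = sum(
    sum(1 for u, v in combinations(adj[i], 2) if v in adj[u]) for i in adj
)
all_triples = sum(len(adj[i]) * (len(adj[i]) - 1) // 2 for i in adj)
transitivity = closed_triples / all_triples
```

On this graph the two measures already diverge (mean local coefficient 7/12 versus transitivity 3/5), which is why the paper tracks them separately.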
Clustering can also occur at higher levels of aggregation, for example in the presence of network communities, where, loosely speaking, the density of edges within a set of nodes belonging to a community is higher than the average edge density across the whole graph (Fortunato, 2010; Porter et al., 2009). Unweighted FCD truncation should reduce clustering at both the triadic and community levels, as it effectively results in random edge removal. When truncation is weighted, however, FCD might lead to an increase in clustering: if within-cluster edges are stronger than others, they are more likely to be preserved.
--- Path lengths
In removing ties, unweighted FCD will reduce the fractional size of the largest connected component (LCC), S_LCC, and will often increase the average path length between nodes of the LCC, ℓ_LCC, insofar as the increased length between some pairs of nodes due to loss of edges is not offset by reductions in length due to peripheral nodes being dropped from the LCC altogether. These results hold asymptotically for random and power-law graphs (Fernholz & Ramachandran, 2007), and via simulation of edge removal on empirical networks (Onnela et al., 2007a). If FCD is weighted, this second factor will be stronger, as peripherally (weakly) connected nodes are preferentially dropped from the LCC.
--- The impact of structural network properties on spreading processes
There is a burgeoning literature on the effect of various network properties on spreading process outcomes (Barrat et al., 2008; Newman, 2002; Pastor-Satorras et al., 2015). We consider three key spreading process quantities, focusing on two aspects of an epidemic: the early stage and the final state.
To simplify our analysis, we follow the tradition in this literature and focus on models that assume degree infectivity, where an infectious individual can infect all of their neighbors in a single time step, rather than unit infectivity, where they can infect only one neighbor per time step (Staples et al., 2015). Quantity one is the basic reproduction number, R_0, the number of new incident cases (newly infected individuals) arising from each currently infected individual in a fully susceptible population. R_0 is defined as a function of β, the product of the probability of infection per period and the number of contacts per period, and γ, the rate at which individuals recover. In a homogeneous mass-action (i.e., fully mixed) model of an infection where recovery leads to immunity, i.e., a Susceptible-Infected-Recovered (SIR) model, R_0 = β/γ, where R_0 > 1 ensures a large epidemic with non-zero probability (Hethcote, 2000). Quantity two is the initial exponential (or faster) growth rate of an epidemic, r_0. This growth rate is conceptually equal to β in the first period, but thereafter is not well defined analytically, even in homogeneous models; it is typically measured empirically as the second moment of the epidemic curve in its initial growth phase (Vynnycky & White, 2010). Quantity three is the attack rate A, the proportion of the population ever infected. Under assumptions of population homogeneity, relatively simple solutions can be found for key network properties; however, these results rarely hold with non-trivial network structure (Keeling & Eames, 2005). We consider next how key structural network properties affect the above spreading process quantities (discussed in more depth in Supplementary Content 1).
--- Degree distribution and assortativity
R_0 can be viewed as the average number of edges through which an individual infects their neighbors across the whole period of their infectiousness, if all their neighbors are susceptible. The probability of infection for each node can, conversely, be conceptualized in terms of their degree and their neighbors' infection statuses. The more degree-heterogeneous a network is, the higher the likelihood of a large epidemic, since R_0 is a function of the first and second moments of the degree distribution (Pastor-Satorras & Vespignani, 2002). Similarly, higher degree-assortativity increases the expected epidemic size, since the probabilistic threshold for epidemic take-off has a lower bound of the average degree of nearest neighbors (Boguñá et al., 2003). This is intuitive, since the number of one's neighbors bounds the number of infections one can generate. Conditional on the number of nodes and edges in a network, degree-assortative networks will have a faster initial growth rate, occurring within a dense core of high-degree nodes, but a lower attack rate, due to having longer paths to peripheral, low-degree nodes where chains of infection are more likely to die out (Gupta et al., 1989).
--- Clustering
For any given degree distribution, triadic clustering reduces the average number of infections each infected person causes, R_e. This reduction is due to newly infected individuals having fewer susceptible neighbors: the contact who infected you is likely also to have had the opportunity to infect your other contacts (Keeling, 2005; Miller, 2009; Molina & Stone, 2012). This will slow the epidemic growth rate r_0, since newly infected individuals in clustered networks have fewer susceptible alters (Eames, 2008), and while clustering does not lower R_0, it will raise the epidemic threshold in the same manner that a fall in R_0 would (Molina & Stone, 2012).
In many networks, for a given network density, increased clustering also leads to a smaller S_LCC, which necessarily reduces the maximum possible attack rate (Newman, 2003b), although this result appears to be a by-product of clustering leading to increased degree-assortativity (Miller, 2009). Overall, cliques alone appear to have marginal effects on epidemic dynamics. However, the processes that drive clique formation, such as homophily by nodal attributes or geographic proximity, mean that networks displaying clustering also often contain topological features, such as degree-assortativity or heterogeneity, that do significantly affect epidemic dynamics. As a result, processes on clustered networks can look very different from those on non-clustered ones (Badham & Stocker, 2010; Molina & Stone, 2012; Volz et al., 2011). Broader community structure acts in much the same fashion as cliques, reducing r_0 due to the limited capacity to pass infection from one community to the next (Salathé & Jones, 2010), although epidemics are unhindered, or even sped up, by inter-community ties when communities overlap (Reid & Hurley, 2011).
--- Path lengths
Although networks with increased ℓ_LCC will often have lower r_0, much of this effect is due simply to lower network density. For LCCs of equal density, a high ℓ_LCC is likely to be due to a dense core with long peripheral arms; in such a scenario, r_0 will be high once the epidemic reaches the core, but the epidemic will take longer to reach all parts of the LCC (Moore & Newman, 2000). However, since random spreading processes rarely follow the shortest paths between any two nodes, the shortest path typically underestimates the length of the path taken by a spreading process. Since truncation inflates the length of observed shortest paths, the shortest paths seen in truncated networks may paradoxically reflect the path lengths actually taken more closely than those observed in fully observed networks (Onnela & Christakis, 2012).
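The basic mechanics of path-length inflation under edge removal can be sketched on toy graphs: a six-node cycle versus the open path left after deleting a single edge. These graphs are illustrative only, not the study's networks.

```python
from collections import deque

# Toy sketch of the path-length effects discussed above: deleting one edge
# from a cycle keeps the LCC intact but inflates the mean shortest-path
# length, measured here by BFS from every node.
def mean_path_length(adj):
    total, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())   # distance from s to each reachable node
        pairs += len(dist) - 1        # ordered reachable pairs from s
    return total / pairs

cycle = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}          # 6-cycle
path = {i: {j for j in (i - 1, i + 1) if 0 <= j <= 5} for i in range(6)}

l_cycle = mean_path_length(cycle)  # 1.8
l_path = mean_path_length(path)    # 7/3: longer, despite one fewer edge
```

Here the component stays whole (S_LCC = 1 in both cases) while ℓ_LCC rises from 1.8 to 7/3, isolating the second of the two effects described above.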
As a result, the lower r_0 predicted from truncated networks may in fact be more accurate.
--- Potential impact of degree truncation on spreading processes
Based on the above results, we formulate some initial hypotheses about the likely impact of out-degree truncation on predictions of the behavior of spreading processes on the resulting network. First and foremost, truncation will reduce the number of edges in the network, since some edges are not observed. This leads to a reduction in mean degree and is likely to increase average path lengths and reduce S_LCC; as a result, both r_0 and A will be reduced. The reduction in r_0 may, however, be offset by reduced variance in degree, since out-degree variance is strictly reduced by truncation and in-degree variance is likely to drop too. Second, degree truncation by tie strength may inflate degree-assortativity, if assortative ties are stronger on average and thus more likely to be preserved. This should lead to smaller, faster-ending epidemics, especially if assortativity is created by preferentially dropping core-periphery links. Finally, degree truncation by tie strength will have an unpredictable effect on clustering, depending on the relationship between tie strength and community structure. Notably, if the two are strongly positively correlated, truncation may increase community structure as weak ties are preferentially dropped. If clustering is increased, both r_0 and A are likely to fall.
Fig. 1. Schematic of study methodology. (1) For synthetic networks, 100 degree sequences were generated. For the Karnataka village data, 75 empirical village datasets were used, and step 2 skipped. (2) Each degree sequence was converted into a network graph using the configuration model, and then each synthetic graph was calibrated based on target network values. (3) All networks were truncated at twice mean, mean, and half mean degree. (4) 100 spreading processes were run across each full and truncated network. (Color online)
--- Methods
To test the above hypotheses about the impact of degree truncation on predicted spreading process outcomes, we: (1) simulated a tie-strength truncation process on a range of networks; (2) simulated a spreading process on the original (fully observed, or full) and truncated networks a large number of times; and (3) compared spreading process outcome values for the full and truncated networks (Figure 1). In the following, we describe in detail: (A) the network generation process; (B) the truncation process; and (C) the spreading process.
--- Network structures
We considered four types of synthetic networks, which we call degree-assortative, triadic clustering, focal clustering, and Power-Law networks, and in addition we considered networks based on empirical data (details below). The empirical social networks were collected from a stratified random sample of 46% of households in each of 75 villages in Karnataka, India, surveyed as part of a microfinance intervention study in 2006 (Banerjee et al., 2013a, 2013b). We defined an edge between two individuals in the sample to exist if either person reported any of the 12 types of social interaction asked about in the study. We began synthetic network construction by generating a collection of degree sequences, where a degree sequence is a list of the node degrees of a network. To generate 100 degree-assortative, triadic clustering, and focal clustering networks, each consisting of N = 1000 nodes, we drew 100 degree sequences of length N from a Poisson distribution P(λ) with λ = 8, as an approximation to a binomial distribution for large N.
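The degree-sequence step above can be sketched as follows. Note that the configuration model requires an even total degree so that every edge stub can be paired; the parity fix shown is our assumption, as the paper does not detail how ungraphable draws of this kind were handled.

```python
import numpy as np

# Sketch of the degree-sequence step: draw a length-N Poisson degree sequence
# (lambda = 8) and patch parity, since the configuration model needs an even
# degree sum. The parity fix is an assumption, not the paper's procedure.
rng = np.random.default_rng(7)
N = 1000
seq = rng.poisson(lam=8, size=N)
if seq.sum() % 2 == 1:
    seq[0] += 1  # make the total degree even so every stub can be paired
mean_degree = seq.mean()
```

Repeating this draw 100 times yields the collection of sequences that the configuration model then turns into initial graph realizations.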
We used the configuration model to generate an initial graph realization for each degree sequence (Molloy & Reed, 1995), and then rewired the networks, edge by edge, to obtain a collection of calibrated networks such that each network closely matched a target value of a chosen characteristic, specifically:
1. Degree-assortative. This was achieved by: (i) selecting two disjoint edges (u, v) and (x, y) uniformly at random; (ii) computing whether removing the two edges and replacing them with edges (u, y) and (x, v) would increase network assortativity; and if so (iii) making this change.
2. Triadic clustering. This was achieved by: (i) choosing an ego i and two of its alters, j and k, who were not connected to one another; (ii) adding the edge (j, k) to the network, thus forming a triangle; and (iii) removing an edge selected uniformly at random, conditional on that edge not being part of a triangle, thus ensuring increased triadic clustering.
3. Focal clustering. This was achieved by: (i) selecting three nodes i, j, and k uniformly at random; (ii) adding edges (i, j), (i, k), and (j, k) if they did not already exist; (iii) choosing uniformly at random in the network the same number of edges as were just added (excluding edges (i, j), (i, k), and (j, k) from the selection); (iv) computing whether removing this second set of edges would result in a net increase in focal clustering; if so, removing them; if not, repeating steps (iii) and (iv).
We generated three versions of each type of synthetic network by calibrating assortativity, triadic clustering, and focal clustering to the minimum, median, and maximum values of these quantities observed in the 75 Karnataka villages (Table 1, column 1). To generate Power-Law networks, the fourth type of synthetic network, we drew degree sequences from a power-law distribution P(k) ∝ k^(−α), using the values 3, 2.5, and 2 for the degree exponent α. We discarded any ungraphable sequences, i.e.
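The degree-assortative rewiring step can be sketched in pure Python. For a fixed degree sequence, degree-assortativity is monotone in the sum of k_u·k_v over edges, so accepting only swaps that raise this sum increases assortativity while preserving every node's degree. This is a simplified stand-in for the procedure described above, run on a small random graph rather than the calibrated N = 1000 networks.

```python
import random

# Sketch of calibration step 1: repeatedly pick two disjoint edges and swap
# endpoints only if the swap raises sum(deg[u] * deg[v]) over edges, a proxy
# that is monotone in degree-assortativity for a fixed degree sequence.
random.seed(0)
n = 60
edges = set()
while len(edges) < 120:  # small random graph, no self-loops or multi-edges
    u, v = random.sample(range(n), 2)
    edges.add((min(u, v), max(u, v)))

def degrees(edges):
    deg = {i: 0 for i in range(n)}
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return deg

def score(edges, deg):
    return sum(deg[u] * deg[v] for u, v in edges)

deg = degrees(edges)
s0 = score(edges, deg)
edge_list = list(edges)
for _ in range(5000):
    (u, v), (x, y) = random.sample(edge_list, 2)
    if len({u, v, x, y}) < 4:
        continue  # the two edges must be disjoint
    a, b = (min(u, y), max(u, y)), (min(x, v), max(x, v))
    if a in edges or b in edges:
        continue  # avoid creating multi-edges
    if deg[u] * deg[y] + deg[x] * deg[v] > deg[u] * deg[v] + deg[x] * deg[y]:
        edges.discard((min(u, v), max(u, v)))
        edges.discard((min(x, y), max(x, y)))
        edges.add(a)
        edges.add(b)
        edge_list = list(edges)
s1 = score(edges, deg)  # never below s0; degrees are unchanged
```

Each accepted swap removes (u, v) and (x, y) and adds (u, y) and (x, v), so every node keeps its degree, exactly as in the paper's edge-by-edge rewiring.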
those where any value greater than N − 1 = 999 was drawn. We again used the configuration model to generate an initial graph realization for each degree sequence. Note that lower values of γ are associated with degree distributions that have increasingly fat tails. For each of the four types of synthetic network, and for each level of calibration, we generated 100 independent representative networks using the above methods, for a total of 1,200 networks. Mean values for a range of network characteristics for each set of 100 networks are shown in Table 1.

--- Truncation

We simulated degree truncation of the form typically seen in surveys by placing a ceiling, k_fc, on the number of contacts that can be reported by a respondent, and then reconstructed the contact graph created from all sampled contacts. To do this, we first converted the network into a directed graph. We then selectively removed (k_i − k_fc) directed edges starting from each individual i, beginning with the edge having the smallest edge overlap value. We used edge overlap as a proxy for tie strength, defined as the fraction of shared network neighbors of a connected dyad: O_ij = n_ij / [(k_i − 1) + (k_j − 1) − n_ij], where n_ij is the number of neighbors that i and j have in common, and k_i and k_j are their degrees (Onnela et al., 2007b). Overlap has previously been shown to be strongly correlated with tie strength, as conjectured by the weak ties hypothesis several decades earlier (Granovetter, 1973). We were thus conducting truncation by tie strength. We truncated at k_fc = qk, taking values of q = 0.5, 1, 2, so that the maximum out-degree of individuals was half the mean degree of the full network, the same as its mean degree, or twice its mean degree. After truncating each individual's out-degree, we collapsed the directed graph into an undirected one based on all remaining ties. Examples of this truncation process on 20-node networks are shown in Figure 2.
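A minimal sketch of this overlap-based fixed-choice truncation, using the overlap formula above (helper names are ours; networkx assumed):

```python
import networkx as nx

def edge_overlap(g, i, j):
    """Neighborhood overlap O_ij = n_ij / ((k_i - 1) + (k_j - 1) - n_ij)."""
    n_ij = len(set(g[i]) & set(g[j]))
    denom = (g.degree(i) - 1) + (g.degree(j) - 1) - n_ij
    return n_ij / denom if denom > 0 else 0.0

def truncate_by_overlap(g, k_fc):
    """Fixed-choice truncation: each node nominates at most k_fc of its
    highest-overlap (strongest) ties; the union of nominations is then
    collapsed back into an undirected graph."""
    kept = nx.Graph()
    kept.add_nodes_from(g)
    for i in g:
        ranked = sorted(g[i], key=lambda j: edge_overlap(g, i, j), reverse=True)
        kept.add_edges_from((i, j) for j in ranked[:k_fc])
    return kept

# Example: truncating a small clique at one nomination per node.
demo = truncate_by_overlap(nx.complete_graph(5), 1)
```

Because the kept out-edges are unioned, a node can end up with degree above k_fc if it is nominated by many others, matching the collapse-to-undirected step described above.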
We measured a range of network properties for each full and truncated network, including mean degree, degree-assortativity, triadic and focal clustering, the size of the largest connected component (s_LCC), and a measure of community clustering, normalized modularity Q_n (Newman, 2010); the last was based on a graph partition obtained for each network using the Louvain method (Blondel et al., 2008).

--- Spreading process

We ran an SIR model with degree infectivity on the networks, defined by the per-period (per time step) probabilities β = 0.03 (the probability of an infectious individual infecting each susceptible contact) and μ = 0.05 (the probability of an infectious individual recovering). These values were not selected to mimic any particular disease, but were chosen to give a high probability of epidemic take-off in untruncated networks without regularly hitting the ceiling of 100% cumulative incidence. In our networks, with a mean degree of eight, these values give a mean infectious period of 14 time steps and an R_0 of approximately 2.8. Each spreading process began with five initial infections, chosen uniformly at random among the nodes of a network, and each SIR model was run 100 times on the full and degree-truncated variants of each of the 100 networks. We measured two outcomes across all of the 10,000 runs (100 runs per network for 100 networks) of each synthetic network type (7,500 for the Karnataka village data), restricting to those runs in which at least 10% of individuals were ever infected: first, the time to infection of the 10th percentile of the population (the epidemic growth rate r_0: mean and 95% range); and second, the proportion of nodes ever infected (the attack rate A: mean and 95% range).

--- Results

Summary statistics for all networks at all levels of truncation are shown in Table S1. In all networks, both synthetic and empirical, out-degree truncation consistently reduced mean degree, as expected, most strongly in Power-Law and focal clustering networks.
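For reference, the discrete-time SIR process described in the Methods can be sketched as follows (a simplified illustration; the symbols beta and mu for the per-step infection and recovery probabilities, and all names, are our choices, not the study's code):

```python
import random
import networkx as nx

def run_sir(g, beta=0.03, mu=0.05, n_seeds=5, seed=0):
    """Discrete-time SIR on graph g: each infectious node infects each
    susceptible neighbor with probability beta per step and recovers with
    probability mu per step. Returns the attack rate and the cumulative
    incidence (ever-infected count) at each time step."""
    rng = random.Random(seed)
    infectious = set(rng.sample(list(g), n_seeds))
    recovered = set()
    cumulative = [len(infectious)]
    while infectious:
        new_inf, new_rec = set(), set()
        for i in infectious:
            for j in g[i]:
                if j not in infectious and j not in recovered and rng.random() < beta:
                    new_inf.add(j)
            if rng.random() < mu:
                new_rec.add(i)
        infectious = (infectious | new_inf) - new_rec
        recovered |= new_rec
        cumulative.append(len(infectious) + len(recovered))
    return len(recovered) / g.number_of_nodes(), cumulative

net = nx.gnp_random_graph(500, 8 / 499, seed=3)  # mean degree approximately 8
attack_rate, cumulative = run_sir(net, seed=3)
```

The attack rate A is the fraction of nodes ever infected by the end of the run, and the per-step cumulative incidence allows the time to any given infection threshold to be read off.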
Truncation strongly reduced degree-assortativity in all cases except for Power-Law networks, which were already degree-disassortative, overwhelming any differences originally seen across levels of calibration; this effect was weaker for the Karnataka networks than for synthetic networks other than Power-Law. Modularity increased with truncation in all networks except degree-assortative ones (which had very high initial modularity). With the exception of Power-Law and Karnataka networks, where modularity rose smoothly with increasing truncation, most of the increase occurred only once networks were truncated at half mean degree. Both triadic and focal clustering fell, and the size of the LCC rose, consistently with increasing truncation for all networks in which clustering was initially present.

When spreading processes were simulated on the full networks, at least 10% of the network became infected (attack rate A > 10%) in almost every simulation (over 97.5%), with the exception of degree-assortative networks, where only around 90% of simulations reached A > 10% (Table S2). Truncating networks at 2k had almost no impact on the proportion of epidemics with A > 10% for any network, but further truncation led to a sharp fall-off. At 0.5k truncation none of the clustered network epidemics reached A > 10%, and only the Power-Law networks, the degree-assortative networks calibrated to the lowest level of assortativity, and the Karnataka networks had more than 2% of their epidemics reach the A > 10% threshold.

Fig. 3. Epidemic outcomes for simulation runs infecting at least 10% of the population across six network structures. (A) Proportion of all nodes ever infectious; (B) time to infection of 10% of all nodes. Figures show means and 95% ranges for all runs from 10,000 simulations (7,500 for Karnataka villages) in which at least 10% of individuals were ever infected. Simulation types are defined by out-degree truncation (circles: no truncation; hexagons: truncation at twice mean degree; squares: truncation at mean degree; triangles: truncation at half mean degree). All network structures are those with the highest network properties in each category (see Methods and Table 1; full results for each network structure are available in Figure S1 and Figure S2). Empty lines represent simulation types where no runs reached the 10% threshold.

Without truncation, 10% of all nodes were infected within 20 time steps on all networks except the degree-assortative ones, which also showed the greatest range of initial epidemic growth rates (r_0) (Table 2). Truncation at 2k increased r_0 in all cases, but not by large amounts; however, truncation at k raised both the mean of r_0 and its variance, notably in the cases of degree-assortative and triadic clustering networks (Figure 3(a)). For those networks in which any runs reached A > 10% at 0.5k truncation, both the mean and variance of r_0 increased as networks became highly fractured. Network structure had a greater impact on A than on r_0, with clear differences even on full networks (Figure 3(b)). Truncation at 2k had almost no impact on A except in the cases of Power-Law and, to a lesser extent, degree-assortative networks. However, truncation at k led to mean A roughly halving in all cases except the Karnataka networks, where A fell by only about a quarter. Once truncation reached 0.5k, no network type averaged A > 16%.

--- Discussion

Simulating a generic spreading process on a range of networks containing different structures, we find that truncating the number of contacts that each person can report via a FCD (out-degree truncation) has a substantial impact on both initial growth rates (r_0) and attack rates (A), even at the commonly used level of k (the mean degree of the network).
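The two outcome summaries used throughout, the mean and 95% range of the attack rate A and of r_0 (time to infection of 10% of the population), restricted to runs crossing the 10% threshold, can be computed from repeated simulation outputs with a sketch like this (function and key names are illustrative, not from the original study):

```python
import numpy as np

def summarize_runs(results, n_nodes, threshold=0.10):
    """Summarize (attack_rate, cumulative_incidence) pairs from repeated
    runs, keeping only runs that ever infect at least `threshold` of nodes.
    r_0 is taken as the first time step at which cumulative incidence
    reaches the threshold."""
    cut = threshold * n_nodes
    kept = [(a, cum) for a, cum in results if a >= threshold]
    attack = np.array([a for a, _ in kept])
    r0 = np.array([next(t for t, c in enumerate(cum) if c >= cut)
                   for _, cum in kept])
    pct = lambda x: np.percentile(x, [2.5, 97.5]).tolist()
    return {"A_mean": float(attack.mean()), "A_95": pct(attack),
            "r0_mean": float(r0.mean()), "r0_95": pct(r0)}

# Toy example: three runs on a 1,000-node network; the second run dies out
# below the 10% threshold and is excluded from the summaries.
summary = summarize_runs(
    [(0.5, [10, 40, 60, 120]), (0.05, [5, 5, 5]), (0.8, [50, 90, 200, 400])],
    n_nodes=1000)
```

Filtering on the threshold before averaging is what produces the "runs for which at least 10% of individuals were ever infected" restriction reported in the figures and tables.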
Our investigations show that the level of inaccuracy introduced into predicted epidemic outcomes by a given level of truncation varied with the structure of the network under consideration, partly due to the impact of truncation on network properties, and partly due to the impact of network properties on process outcomes. Truncation on all network types eventually led to under-prediction of both r_0 and A; however, the level of under-prediction at each truncation level, and the level of truncation at which such under-prediction became substantial, varied across network types. Notably, our ability to predict process outcomes was degraded more rapidly on stylized synthetic networks than on a set of empirical social contact networks from villages in Karnataka state, India.

Central to understanding the effect of out-degree truncation on predictions of spreading-process outcomes is the transition at which the network becomes fragmented and the size of the LCC rapidly decreases. In our analyses, the Power-Law and degree-assortative networks showed slow declines in predicted process outcomes as truncation increased, whereas the triadic clustering and focal clustering networks lost fidelity early on, and the Karnataka networks maintained fidelity for longer (Figure 3). The speed of initial growth was notably more variable for degree-assortative networks than for all other network types under both no truncation and truncation at 2k, reflecting the importance of the initial infection sites when networks contain both highly and sparsely connected regions. This variation in findings suggests that knowledge of the structure of the network for which one wishes to predict process spread is crucial in determining the level of resources that should be placed into measuring the full extent of the network itself: locally clustered networks may require measurement of more contacts, while those with fat-tailed degree distributions may require fewer.
Of course, knowing the mean out-degree of a network is a prerequisite to determining the level of truncation that can be tolerated. Contrary to our conjectures, in no case did truncation increase the speed of process spread. The impact of truncation in reducing the number of observed ties appeared to overwhelm all other processes, not least by affecting the network characteristics of the truncated networks: truncation at k left the degree-assortative networks entirely non-assortative, and the triadic clustering and focal clustering networks displaying very limited clustering; only modularity appeared to be maintained, or even increased, as the FCD threshold was lowered, potentially because of the breakup of the network into increasing numbers of unconnected components. Further investigation might find levels of truncation at which epidemic severity is over-estimated, but in practical terms our findings point to a consistent underestimate of speed and attack rate when using data truncated by tie strength.

In addition to network-level outcomes, it is instructive to consider variability in outcomes at the individual level. While it is clear that individuals with higher out-degree are more likely to become infected, it is also likely that those with more-connected neighbors will become infected more often, since these connected neighbors are more likely to be infected in the first place. This association can be seen in Figure 4 for the Karnataka networks (and Figure S3 for synthetic networks). Low-degree individuals are unlikely to be infected regardless of how well-connected their neighbors are, but for our exemplar infection neighbor degree has little impact on those with own degree greater than 10 (Figure 4(b)). As truncation increases, and disproportionately removes ties to higher-degree neighbors, individuals with higher mean neighbor degree are predicted to be infected less often than those with the same degree but lower mean neighbor degree (Figure 4(c) and (d)). This effect is particularly visible at the common FCD value of k. These findings highlight that truncation not only affects population-level predictions of infection risk, but may also differentially affect individual-level predictions.

There are several ways in which this analysis could be extended. First, it might be informative to consider unweighted, rather than weighted, truncation.
Weighted truncation is likely to minimize mis-estimation of local spreading processes, since close-knit groups are likely to be maintained at the expense of a realistic picture of cross-community connections. Unweighted truncation, in contrast, is likely to reduce the speed of process spread generally, but to maintain weak ties that span structural holes in the network (Burt, 2004). Second, one could investigate spreading processes based on edge weights, or using unit infectivity. Third, it might be worthwhile to run these analyses over a wide range of truncation levels, in order to evaluate which networks have more or less rapid transitions from relatively accurate spreading-process predictions to relatively inaccurate ones, and at what level of truncation these transitions occur. Such an analysis would be particularly useful in the context of a specific empirical network and spreading process, rather than the theoretical cases presented in this paper, as a precursor to data collection in a survey. While we have used a range of network structures and a standard spreading process, our results are limited to the cases we have considered, and notably to a single level of network density; investigation of other structures and processes might therefore be worthwhile. Finally, we used only one set of transmission parameters, and thus the absolute impact of truncation may well differ for other infection processes. Nevertheless, we would not expect different transmission rates to change our central finding that network structure is an important determinant of the impact of truncation on predicted epidemic outcomes. The ultimate goal of our analysis is to arrive at more accurate predictions of process outcomes in the context of truncated contact data, the type of data that are common in the study of infectious diseases and public health interventions.
In addition to our simulation approach, there is potential for analytic work to evaluate the level of mis-prediction likely to arise under a given level of degree truncation, for given network structures. Ultimately, this should allow us to adjust predictions for truncation. Such an approach might use statistical or mechanistic network models to simulate full networks congruent with both the estimated rate of truncation and the observed characteristics of the truncated network; simulations could then be run on these simulated networks to predict process outcomes.

Fig. 4. Mean neighbor degree vs. own degree for full and truncated Karnataka village contact networks. All plots are heatmaps, i.e. depth of color represents frequency of occurrence at the given location. (a) Density of ties in the full graph (log scale); (b-d) mean proportion of all runs in which the node was infected (linear scale). The black diagonal line shows points of equal node and mean neighbor degree. In the full graph, most nodes are infected most of the time, except those with either very low degree or very low mean neighbor degree. When truncated at mean degree, those with middling degree and mean neighbor degree are infected less often. When truncated at half mean degree, almost no nodes are ever infected.

As noted above, although we have framed out-degree truncation here as resulting from the adoption of FCD, our methods are agnostic to the cause of truncation. Consequently, our results may generalize to settings where some other mechanism, such as social stigma in the case of self-reported sexual networks, leads to out-degree truncation. Additionally, we have focused this work on sociocentric network data collection. Truncation and edge non-reporting may also arise within egocentric data collection, requiring the use of ERGMs or other methods to infer global network structure.
While beyond the scope of this paper, investigation of the impact of degree truncation within egocentric data collection on epidemic prediction may also be of interest. Similarly, empirical networks (both sociocentric and egocentric) often suffer missingness due to other mechanisms, such as missing nodes, reporting of non-existent alters, and edges linking population members to non-members; future investigation of the impact of these mechanisms, both alone and in concert with truncation, may be an important avenue for evaluating possible errors in predictions of spreading processes. Finally, while our focus here has been on degree truncation in sociocentric studies resulting from study design, effective truncation may occur in sociocentric networks for other reasons. For example, there has been increasing research activity in recent years into digitally mediated social networks, such as those resulting from mobile phone call and communication patterns (Blondel et al., 2015; Onnela et al., 2007a; Onnela et al., 2007b). Social networks are typically constructed from these data by aggregating longitudinal interactions over a time window of fixed length, and the features of the resulting networks are fairly sensitive to the width of the aggregation window (Krings et al., 2012). This leads to effective network degree truncation that is not a consequence of study design per se but rather is induced by the network construction process. It seems plausible that some of the insights we have obtained here, as well as some of our methods, could be translated to this research context.

--- Conclusion

We have shown via simulation that truncation of a network via FCD has a systematic impact on how processes are predicted to spread across the network, reducing the predicted speed of epidemic take-off and the final attack rate relative to values obtained from a fully observed network.
However, the degree of impact varies strongly with the level of truncation, and we find that the transition level, at which the impact on predicted process outcomes shifts from small to considerable, varies by network structure. Supplementary information on the structure of the full network, potentially estimated from past egocentric or sociocentric studies in the same or similar populations, will thus often be crucial for increasing the accuracy of predictions of process spread from truncated network data.

Figures show means and 95% ranges for all runs from 10,000 simulations (7,500 for Karnataka villages) in which at least 10% of individuals were ever infected. Note that the proportion of retained networks falls as the level of truncation rises (see Table S2 for details); empty cells represent simulation types where no runs reached the 10% threshold. All network structures are those with the highest network properties in each category (see Methods and Table 1).

--- Supplementary Material

To view supplementary material for this article, please visit https://doi.org/10.1017/nws.2017.30.
Understanding how person-to-person contagious processes spread through a population requires accurate information on connections between population members. However, such connectivity data, when collected via interview, is often incomplete due to partial recall, respondent fatigue, or study design, e.g. fixed choice designs (FCD) truncate out-degree by limiting the number of contacts each respondent can report. Research has shown how FCD affects network properties, but its implications for predicted speed and size of spreading processes remain largely unexplored. To study the impact of degree truncation on predictions of spreading process outcomes, we generated collections of synthetic networks containing specific properties (degree distribution, degree-assortativity, clustering), and used empirical social network data from 75 villages in Karnataka, India. We simulated FCD using various truncation thresholds and ran a susceptible-infectious-recovered (SIR) process on each network. We found that spreading processes on truncated networks resulted in slower and smaller epidemics, with a sudden decrease in prediction accuracy at a level of truncation that varied by network type. Our results have implications beyond FCD to truncation due to any limited sampling from a larger network. We conclude that knowledge of network structure is important for understanding the accuracy of predictions of process spread on degree truncated networks.
Introduction

Studies of money management and control would have greater cross-cultural relevance if they considered the family context of money across generations. Much previous research on money management has focused on the married couple at a single point in time. However, for a fuller understanding of the variation in money management and control across diverse family structures and practices, we need to look at the wider family and take a cross-generational perspective. In some Western countries, men and women use money as a means by which they 'construct themselves as a couple' (Nyman and Reinikainen, 2007). In many countries in the Asia-Pacific and Africa, it is essential to go beyond the couple in the household to understand the allocation of household money, for money can be one of the ways people present themselves as a family. We illustrate the usefulness of this broader family approach to money through a focus on urban, middle-income, patrilineal nuclear and joint family households in North India. Unlike in middle-income Anglo-Celtic families in Australia and the United Kingdom, in these households there is a two-way flow of money and information between parents, children and other kin. The generational perspective is important in extended family households, for it distinguishes between money management and control at the level of the component couples and at the level of the household. We argue that a focus on generation and gender in money management and control will better address the complexities of money and power in transnational families, as well as in diverse versions of the extended family across cultures.

--- Money management and control in the literature

The current typology is built around the distinction between money management and control.
Money management is widely interpreted as organising money in the household on a day-to-day basis, whereas money control is linked to the power to make major financial decisions or to prevent discussion of those decisions (Pahl, 1989; Vogler, 1998; Vogler and Pahl, 1993; Vogler et al., 2008; Lukes, 1974). The initial thrust of the study of money management and control, particularly in the United Kingdom, was to go beyond the household as a 'black box' to study the allocation of money between the marital couple. Jan Pahl, writing in 1989, distinguished between the whole wage system, where the husband gave most of his wages to his wife to manage, and the housekeeping allowance system, where the man gave his wife an agreed amount to cover household expenses, sometimes including a margin for personal spending money. About half the couples used pooled systems, sharing overall management and control (Pahl, 1989). This collective approach to money differed from the more individualistic systems of partial pooling, where some money was shared and the rest kept separate, and of independent management and control (Vogler et al., 2008; Pahl, 1989; Vogler and Pahl, 1993). More recently the focus has moved to charting the more individualistic management and control of money in intimate relationships, particularly among younger and more affluent couples (Pahl, 2008). Cohabiting couples prefer separate money as a reflection of equality, even if it does not make for equity in relationships (Vogler et al., 2008; Vogler, 2005; Elizabeth, 2001; Singh and Lindsay, 1996). This tension between equality and equity was seen particularly among childless and post-marital cohabiting unions (Vogler, 2009). The conflict between equality and equity is also found in Sweden, where couples present their relationships as equal despite inequalities in male and female earning power (Nyman, 2003; Nyman and Reinikainen, 2007).
Part of this tension arises from the perceived ownership of money (Burgoyne and Sonnenberg, 2009). Remarried couples also preferred a more individualised approach to money as they addressed financial obligations arising from previous relationships (Pahl, 2008; Vogler et al., 2008; Burgoyne, 2004; Lown and Dolan, 1988; Burgoyne and Morison, 1997). There is some discussion that the individualisation thesis paints too 'monochrome' a picture, as it does not capture money relationships in transnational families, where family and money relationships continue across national borders (Smart and Shipman, 2004). Pahl says that 'Assumptions about family finances developed in Europe and North America may not apply in other parts of the world' (Pahl, 2008, p. 558). She continues, 'We need to move from seeing the household as a bounded unit towards a view that stresses its permeability and its links with wider social and economic structures' (pp. 586-587). It is particularly important in Asia, the Pacific and Africa to recognise that the boundaries of domestic money can be broader than the couple and the nuclear household unit. To adequately study money within the household, it is important to study the money flows between the household and the wider family. A 1991 study of the Aboriginal Ngukurr community in south-east Arnhem Land (Senior et al., 2002) showed that money was distributed within the fluid household cluster rather than the household. Senior et al. note that this cluster may 'vary in composition from a couple, nuclear family, extended family through to one based on a set of siblings or other close relatives' (p. 5). Gifts, mainly of money, comprised an average of 16 per cent of the income of the household cluster. To understand Maori money, it is also important to take into account the money that goes from households to the whānau, a group of kin descended from a common ancestor, or an extended family group.
Money is gifted up and down the generations, with younger people giving to 'parents, grandparents or others in their parent's generation as well as to brothers, sisters or cousins' (Taiapa, 1994). Money is gifted to the whānau for ritual gatherings that mark crises in the lives of whānau members. The obligation to gift money for the funeral meeting at the whānau takes priority over everyday household expenses. There is such a strong moral imperative to share money with extended kin and clan networks that migrants from many countries in Africa are subject to intense pressure. Somali refugees in London (Lindley, 2009) remit money not only to parents and siblings but also to 'uncles, aunts, in-laws, nephews, nieces, grandparents, cousins and others' (p. 1324). A study of Dinka migrants in the United States (Akuei, 2005) shows how a Dinka man is expected to contribute to bride price for three immediate generations on his father's side; he also has obligations to his wife's kin. Akuei, speaking of one of the participants, says, 'Within the first two years of resettlement to San Diego, Joseph became directly responsible for 24 male and female extended family members and indirectly 62 persons displaced across a number of locations' (p. 7). Not meeting these obligations means that he is not a 'good moral person' (p. 4). In Fiji and Tonga, remittances also go to non-migrant households: nearly 20 and 80 per cent of households, respectively (The World Bank, 2006).

--- The Indian joint family

In this paper we focus on the Indian patrilineal joint family household as one example of the generational complexities of an extended family household and the broader boundaries of domestic money. The most common form of the Indian joint family household is a three-generation household marked by male descent, comprising parents, sons and their children.
It is this patrilineal joint family household which stands for the Indian family celebrated in popular culture (Uberoi, 2004: 297; Uberoi, 2006; Uberoi, 1998). The matrilineal joint family, marked by female descent, is more narrowly distributed in India, among castes such as the Nayars in the south and the Khasis in the north-east. The tarawad among the Nayars is the most celebrated form of the traditional matrilineal joint family household (Patel, 2005). It used to have 20-30 members or more, owned property in common, and consisted of 'all the matrilineal descendents of a common female ancestress' (pp. 42-43). Women had greater rights and entitlements to property than in the patrilineal joint family, though it was the senior male, a woman's brother, who controlled the affairs of the tarawad. The sister-brother tie was the central one, with the husband living in his own tarawad rather than with his wife. The importance of the tarawad has been declining as the bond between father and child gains more importance (Puthenkalam, 2005). Joint family households have always been outnumbered by nuclear households, though it is likely that most people in India live in the bigger joint family households, particularly in rural areas and in North India (Uberoi, 2004). The joint family's importance is greater than its actual prevalence at any one point in time, for most individuals spend some part of their lives in a joint family household. Women most often start their married life in the patrilineal joint family household. When the joint family household disperses over time, it gives rise to various combinations of joint and/or nuclear family households, while ties of property and norms of joint family etiquette often remain (Shah, 2005; Uberoi, 2004; Das, 1976). The sociological literature has confirmed the popular picture of male control of money and property in the family.
Money in the Indian family has been studied with reference to women and paid work in middle-income households (Bhandari, 2005, Ramu, 1989, Sekaran, 1992, Sharma, 1986, Bhandari, 2004, Indira Devi M., 1987); gifts and presentations related to life cycle events (Madan, 1993); women and property (Basu, 2005b, Basu, 2005a, Kishwar, 2005, Agarwal, 1994, Panda and Agarwal, 2005, Misra and Thukral, 2005, Palriwala and Uberoi, 2005); and the impact of remittances on women's money management roles in the household (Zachariah and Rajan, 2001, Kurien, 2002). In the following sections we describe how our qualitative study of money and information in urban Indian family households helped us recognise the wider importance of the familial context of money and of generational complexity in joint family households. We then detail the implications of these dimensions for the analysis of the management and control of money. In the concluding section, we propose a broader cross-cultural typology for studying money management and control in diverse forms of family across cultures. --- The qualitative study of money in the Indian family We conducted open-ended interviews between November 2007 and January 2008, in English, Hindi and Punjabi, with 40 predominantly middle-income and upper middle income persons from 27 households. The interview sample includes 25 people living in metropolitan Delhi, which in 2001 had a population of 12.9 million (Census of India Office of the Registrar General India, 2001); seven in a peri-urban area in the Delhi region that is being developed into urban housing; and eight people from Dharamshala, a small Himalayan town with some 20,000 people in 2001 (the latest data available). These three sites broadened the study to include metropolitan, peri-urban and small town family households.
We chose these three sites because both of us had personal, family and professional connections in these places, and so were confident we could gain access to suitable participants. We did not interview members of our immediate family or close friends, but sought references from family, friends and colleagues to direct us to their networks. Their reference assured the participants about the nature of our study and gave us access to examine the private nature of money in families. Having these connections with the participants meant that we did not ask about the quantum of money earned, spent or saved, but talked of broad ranges of household income, how information about money was shared, and how they perceived money was managed and controlled in their household. The interviews, usually conducted in the person's home, ranged from just over half an hour to 3.5 hours. Most lasted an hour to an hour and a half, with the formal interview flowing into a social visit. In 13 of the 27 households we interviewed more than one person - 12 married couples (husbands and wives separately for three couples), and in one household a mother-in-law and daughter-in-law. When husbands and wives were interviewed together, we most likely got a different picture of household money than if we had spoken to each of them individually. We also recognise we would have heard different representations of control and management if we had talked to all the adults in the household or had been able to observe the management and control of money in the household. Our focus on representations of money management and control in the urban patrilineal joint family household in North India arose from an initial interest in the privacy of money in urban, middle-income family households. Hence our initial questions related to the way information about money was or was not shared across gender and generations, within the household and the wider kin group.
We asked about bank accounts and what happened to money earned by members of different generations in the household. These questions led to the family context of money and further probing of the generation and gender divide in information and access to money. Money and information flows within the two, three and four generation joint family households were particularly complex. In our study, 14 of our 40 participants lived in joint family households and 17 had experience of them. --- The participants The characteristics of our participants are set out in Table 1. Thirty-five of the 40 people in our study defined themselves as upper middle, middle or lower middle class. We used the National Council of Applied Economic Research's (NCAER) definition of the middle class in 2000-2001 as having an annual household income of between INR 2 and 10 lakhs (INR 200,000 to INR 1,000,000) (Shukla et al., August 9, 2005). We also used our participants' perception of where they fit, based on a mixture of income and the capacity to spend. --------Table 1 here -------- There are more women in our study, reflecting that as female researchers we had easier access to women. Men were often unavailable at the time of the interview. In two cases, the women said their husbands would be uncomfortable talking about money. Our participants are predominantly from urban North India. The sample thus excludes the rural agrarian patrilineal joint family households and the matrilineal joint family households found in South and North-East India, and is not representative of Indian joint family households. However, it is diverse enough to cover varied dimensions of money management and control in the urban Indian patrilineal joint family. --- Coding and analysis in the grounded study This is a grounded study (Charmaz, 2000, Strauss and Corbin, 1990), in that the emphasis is on a transparent fit between data and theory, rather than a testing of hypotheses.
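For readers unfamiliar with Indian numbering units, the NCAER income band and the lakh amounts cited throughout can be restated in absolute rupees. The sketch below is illustrative only and not part of the study; the helper name is ours.

```python
# Illustrative only: converting Indian "lakh" amounts (e.g. the NCAER
# middle-class band of "INR 2-10 lakhs") to absolute rupee figures.
# 1 lakh = 100,000 rupees; 1 crore = 100 lakhs = 10,000,000 rupees.

LAKH = 100_000
CRORE = 100 * LAKH


def lakhs_to_inr(lakhs: float) -> int:
    """Convert an amount expressed in lakhs to rupees (hypothetical helper)."""
    return int(lakhs * LAKH)


# NCAER's 2000-2001 middle-class band of INR 2-10 lakhs per year:
lower, upper = lakhs_to_inr(2), lakhs_to_inr(10)
print(f"INR {lower:,} - INR {upper:,}")  # INR 200,000 - INR 1,000,000
```

On this scale, the "over INR 30 lakhs" household incomes mentioned later correspond to more than INR 3,000,000 a year.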
We recorded and selectively transcribed the interviews, noting aspects of the interview that were more like a social visit. We used the computer program NVivo 7 for the analysis of the qualitative data. This involved a broad coding of the data, analysing the coded data for the main themes, and transparently fitting the data to theory. The program allowed us to identify not only what was said about money management and control, but also what was not said. It was in this process of coding the flow of information and money outside the household, in both nuclear and joint family households, that we realised representations of money control at the household level needed to be distinguished from those pertinent to the couple and the individual. The focus had to be on generation as well as gender. Management of the household was also not unitary: managing the kitchen did not necessarily translate into managing household or couple money. --- The family context of money People in our study speak of the importance of financial obligations to family. This is true for our metropolitan, peri-urban and small town participants. Money is shared between parents and adult children, and in some cases between siblings. This is a key family practice, irrespective of whether it is a nuclear or joint family household. This sense of filial obligation and mutual help is an important factor behind the estimated US$55 billion in international remittances that flowed primarily to families in India in 2010 (Ratha et al., 2011). A two-way flow of money between parents and children is central to family money. Parents acknowledge an obligation to help their adult children. Adult working children, particularly sons, including those not living at home, recognise an obligation to help their parents, even when their parents can do without. This obligation is couched in terms of 'duty' (dharma) on the children's part and a 'right' (haq) on the parents' part.
In our study, there are five instances where a parent has a joint account with his or her adult children, making it easier for money to flow between parents and children. The two-way flow is accepted by parents and children. Our participants tell of money that is offered, rather than requested. We have two cases of upper middle-income parents who have the ability and the wish to be financially independent, and who say their adult children keep offering them money. The family context of money means information about money is shared beyond the couple and the household with wider kin. Unlike middle-income Anglo-Celtic couples, information about money is not private to the marital couple (Singh, 1997). This family context may mean that information is shared across generations, between father and son and between brothers. On the other hand, information about money may not be shared between husband and wife, leading to a greater gender divide in the management and control of money. Urmila (all the names from the qualitative study are pseudonyms), 55, in a Dharamshala middle-income nuclear family household, says she and her husband have substantially helped with building her parents-in-law's house. They have also helped with the marriages of her two sisters-in-law and a brother-in-law. Urmila is one of the few persons in our study who has not previously lived in a joint family household, as her husband had a transferable job. Deepak, in his early 30s, a high earning professional in Delhi, living in an upper middle-income joint family with an annual household income of more than INR 30 lakhs, says, 'In my house, they expect me to give and I also want to give.... We have to take care of our elders.' Deepak and his wife had recently taken money out of their savings to put towards another house his father bought. Deepak says, 'We bought the entire house together in Dad's name. He will give it back to me in a couple of years. It is all in the same family'.
Money also flows from parents to children, as Jagdish's story shows. Jagdish, 74, retired from a senior government position and now working as a consultant, says his father-in-law helped them with money for building their house. He later returned the money. His daughters do not give them money, and Jagdish and his wife do not ask. But one daughter makes her car and driver available to them, and also paid INR 30,000 for Jagdish's recent extra hospital expenses. Mutual financial help between male and female siblings - on the husband's and wife's sides - is shaped by relationships of reciprocity, need and capacity. It is usual to discuss money with kin from whom a person can expect help and advice. Ritu, 45, a school teacher in the Delhi metropolitan region, relates how her brother and father helped substantially with money when Ritu and her husband were building their house. So she shares information about money with her brother and father. Money conversations do not take place with her husband's brothers, as they were neither asked nor did they offer to help with the house. Tara, in her early 50s, in Dharamshala, also received help from her husband's brother and sister when Tara's husband had to be hospitalised. She says together (ral mil kai), they were able to pay the INR 70,000 that was needed. This expense represented most of her household's annual income. Her husband's siblings and her sister also helped with Tara's son's first-year expenses at engineering college. The norms did not work out as expected in Santokh's family and for Avinash and Asha. Santokh, 81, is from Dharamshala, with an annual household income of below INR 90,000. Loans to family members had not been returned, and adult children remained financially dependent on their parents. He therefore talks of money only with his wife and not with his children and their families.
Avinash and Asha's story is one where the family context of money led to intrigue, lost business and inheritance, adoption and return, brother against brother, and parents fearing the loss of their home. Even as the wider family unravelled, Avinash, 62, and his wife Asha, 60, received money from their parents for their business, the building of their house and the wedding expenses of their daughters. Now that their daughters are married, Avinash and Asha watch whether their children are doing without, so that they can help. --- Management and control of money in nuclear and joint family households In our study we linked money control with the power to make major financial decisions or to prevent discussion about these decisions (Pahl, 1989, Vogler, 1998, Vogler and Pahl, 1993, Vogler et al., 2008, Lukes, 1974). Having information about money was a necessary condition for the control of money (Singh, 1997). We asked our participants about recent decisions about savings and investments. We also asked them how they saw the control of money in the household. Based on these factors, the households were seen to have male, female, joint or independent control. We initially approached money management as organising money in the household on a day-to-day basis. We saw women as managing the money if they had regular access to money, either through the whole wage, the housekeeping allowance or a banking account (Pahl, 1989). Access to money for personal expenditure was a key factor in women's representations of money management. We then discovered that in the Indian family there was another category - the 'irregular dole'. The irregular dole differs from the whole wage and housekeeping allowance systems in that money is given as a gift, rather than an entitlement. It is similar to what happened in the 19th century in the United States (Zelizer, 1994). The woman has to ask for money and justify the need.
In our joint family households in peri-urban Delhi and Dharamshala, women have the keys to the kitchen. But the management of the kitchen does not translate into managing the money of the household or the couple, that is, organising the routine grocery and household purchases, paying bills and doing the banking. Women may have individual or joint bank accounts, but they do not regularly conduct transactions or receive information from these accounts. Where women managed the kitchen only and had no regular access to money, we have classified it as male management. Women have least influence on money in the household when male control is combined with male management. --- The generational dimension of control and management in the joint family household The generational dimension is important for probing the characteristics of money management and control in the joint family household. Money control and management at the household level may follow the same pattern for the junior and senior couples, as happens in the joint family households with male control and male management. However, in our study, we also have two instances where the management and control of money for the joint family household is different from that followed by the junior couple. The junior couple contributes a part of their money towards household expenses, which means the couple continues to control a substantial part of their money. In one case, at the household and senior couple level, control is joint with female management, but with the junior couple, though the control is joint, money management is independent. In the second case, there is female control at the household and senior couple level, but the senior woman thinks her son and daughter-in-law control their money jointly.
We also have two cases where the widowed mother-in-law has her own income and controls and manages her own money, whereas the junior couple jointly or independently control their money and that of the household. Being aware of the generational dimension of money management and control in joint family households ensures that variations within the household are taken into account. --- Patterns of money control and management Male control of money is the dominant pattern, found in about half the households we studied (see Table 2). Joint control was found in a third of our households. Independent control and female control were of lesser importance, with two and three households respectively. Female management was found in nearly three-fourths of our households. When accompanied by male control, it predominantly took the form of the housekeeping allowance, with one household following the whole wage system. When found with joint, independent or female control, women accessed money through separate or joint bank accounts. --------Table 2 here -------- As noted above, we also found male management of money in our non-metropolitan joint family households, ranging from the middle class to those that categorised themselves as 'struggling/deprived'. The women received money through the irregular dole. Male management was always accompanied by male control. There were no upper middle class households with male control and male management of money. The incidence of male management of money among our households was greater than that of joint management, which was found in only three of the 27 households. --- Male control and male management in non-metropolitan joint family households In the peri-urban and small town joint family households, male control is accompanied by male household management. Male control and management are found at the level of the household and of the component couples.
This pattern differs from Western studies of money, which show that male control is often accompanied by female management via the whole wage system or the housekeeping allowance (Pahl, 1989). Male control in our study has a greater spread across household income than in the West. It is found among the middle class, lower middle class and 'struggling' households, rather than just in the lowest income group. Male management and control is not found among households that categorise themselves as upper middle class or global. Only one metropolitan upper middle class nuclear household has male control, though accompanied by female management. The woman in this case sees it as her choice, not wanting to know more about household money, which is complicated because her husband is in business with his father. They previously lived in a joint family household. In joint family households with male control and male management, all or a major part of the money from the other members of the family is given to the male who controls the joint household's expenditure, savings and most of its investments. The male who does not control the household income controls whatever money is left for the couple. Women in joint family households with male control and male management have little information about money in the household or money that belongs to the couple. Information about money flows between father and son or among brothers. Women also do not have assured access to money. Despite the mother-in-law being the archetypal figure representing power in the Indian patrilineal joint family household, our interviews show that in the non-metropolitan joint family households with male management and control, both the mother-in-law and the daughter-in-law depend on the younger and/or older generation male for the 'irregular dole'.
Both the mother-in-law and the daughter-in-law have minimal access to money or information, as shown in the stories of Amar and Amrit and Rana and Rina below. Amar is over 65 years old, and lives in Dharamshala with her son, daughter-in-law and a grandson in a middle class household. Amar says when her husband was alive, he '...used to keep the money. He used to buy the rations. If I wanted to spend, I would ask for what was needed.' Amar only discovered she and her husband had a joint account after he died. Now, her son looks after the money in their joint account. Her daughter-in-law Amrit, a graduate in her late 40s or early 50s, says, The pattern is the same with me even in this generation... I take from my husband what I need... It is not that I get a certain amount every month. If I need to buy a shawl, I ask for money. If he says no, then there is no money. Amrit says she knows about the major investment decisions in the household, such as the purchase of land, but only after the fact. Though she helps out occasionally in the family business, the information about business money is shared between the father and son. Amrit says, He speaks with his son, not with me. I am also not interested in finding out. Even if I did take an interest, he will say, "It is not your concern. Why do you want to know? What will you get if you know?" Rina is similarly excluded from information about money. She is 24, also a graduate, and lives with her husband Rana, 28, in a three generation household which includes her parents-in-law, Rana's younger brother and sister, and Rina and Rana's two sons - one and three years old. Rana controls the household income and is responsible for all the expenditure. His father gives his salary to Rana, according to the custom of his village, but keeps the tips from his government job and the revenue from land in the village for himself and his ill wife. Rana's father manages and controls the money that is with him.
Rana discusses his income and investments partially with his father but is not sure about his father's money. The gender divide in information about money is impermeable. Rana does not talk of money with his mother or his wife. Rina does not have a bank account. She also does not ask him questions about money in his account. She says, 'If I ask, he would feel that I am trying to know his innermost secrets (dil ki baat). All I have to do is cook and feed the family.' --- Joint and independent control in metropolitan Delhi Joint and independent control is found mainly in upper middle class households in metropolitan Delhi; one of the 11 households is in the small town of Dharamshala. Only one of the 11 households in this category sees itself as lower middle class. The women in 10 of the 11 households are graduates. In the joint family households this is true of women of both the senior and junior generations. The woman in the 11th household has an advanced diploma. The three instances of female control belonged to middle-class households. Of these three households, one woman was a single parent. The other two women were in salaried paid work, with the husband either not wanting to manage and control the money and/or recognising that the wife was more expert at it. Most of the women in the households with joint or independent control - 8 of 11 - are in paid work. In households with joint control and joint management, and in those with independent control, all the women are in highly paid jobs outside the home. The women have access to money through personal and/or joint bank accounts. Information is shared between the husband and wife. This picture is the one that is closest to that found among middle-income and affluent couples in the West (Pahl, 1989; Vogler, 2009).
--- Analysing patterns of money management and control These patterns of money management and control are primarily explained by the same two factors that explain the 'allocative systems' of the West, but in different measure. As Vogler (2009) says, these systems ...are largely the result of two inter-related factors: first, the relative economic resources each partner contributes to the household (as measured by household employment status rather than income, because not all couples are economically active) and, secondly, cultural ideologies/discourses of gender, particularly those of male breadwinning versus newly emerging discourses of co-provisioning (pp 66-67). In our study, the role of the ideology of male dominance looms large, and can lead to women not being permitted to work outside the home. This ideology of male dominance is central to male control across a broad spectrum of household income, ranging from 'struggling' households to middle class households. This goes against the pattern in the United Kingdom, where male control and the whole wage system are found only with the lowest household income (Vogler, 2009). The ideology of male dominance often prevents women from earning an income outside the home. A woman's income contribution to the household becomes possible not only with education, but also with an ideology that values women's work and sees marriage along the lines of a partnership. This is found mainly in metropolitan middle and upper middle class households with female, joint and independent control. --- Linking the ideology of male dominance and male control The ideology of male dominance expresses itself in the ownership of property; male dominance of the public sphere, particularly that of money; and the man as the breadwinner with the woman looking after the family and home (see Basu, 2005b, Basu, 2005a, Kishwar, 2005, Agarwal, 1994, Panda and Agarwal, 2005, Misra and Thukral, 2005, Palriwala and Uberoi, 2005).
Being a good son and brother can take priority over being a good husband, particularly in the context of the joint family. Unlike the pattern in the West (Pahl, 1989), male control is not confined to the lowest income group. Among our households, the ideology of male dominance is most prevalent among the middle, lower middle and 'struggling' households, particularly in non-metropolitan Delhi. Women with a bachelor's degree are prevented from going into paid work, for their primary role is to look after the family. Women in nuclear families may engage in small scale business activities at home, but even this is not permitted in the joint family households we studied. When male dominance is linked to male management, a woman's access to money and information about money is minimal. An ideology of male control was important in 10 of the 13 households with male control. In three of the eight nuclear households, it was the women who said the husband knew more about money and handled it in the interests of the whole household. In the fourth - Jagdish and Jaya's household - Jagdish, 74, assumed that women were not interested in controlling money. The fifth case is that of Peu, 24, in a lower-middle class household in peri-urban Delhi, who wants a career as a beautician. Her parents-in-law told her 'our daughters-in-law have to observe purdah and cannot venture out on their own'. She says her husband does not object. For the present, Peu continues to be a housewife, saying her son is still too young. They are still linked financially with the parents-in-law, as they have only recently separated from the joint family household. The remaining three households differ as to the reasons for male control. In two households in Dharamshala, with an annual household income of less than INR 2 lakhs, there was little money to control.
In the third household, Navin, 29, newly married, hopes that his wife Naina, 24, will begin to be independent with money once she finishes her law degree and gets familiar with metropolitan Delhi. In the five joint family households, male control was accompanied by male management. The ideology of male dominance and the woman's place being in the home is the common factor. None of the junior or senior women are in paid work, even if they have skilled qualifications and would like to be. Balbir and Bina's story below illustrates male control across generations and the role of ideology. --- Balbir and Bina's story Balbir, 28, and Bina, 23, live in a three-generational patrilineal lower middle income joint family household in a peri-urban area near Delhi, in a house that Balbir's grandfather built. Balbir and Bina have been married for a year and live with Balbir's parents, two sisters who are still going to university and two brothers still in school. Balbir continues to give all his monthly salary of INR 5,000-6,000 to his father and then gets back from him some money for himself and his wife. He is also doing his MBA. Balbir's father controls the household money and the money for himself and his wife. He makes all the decisions on major expenditure items. --- An ideology of partnership and women's paid work It is mainly in the upper middle class families that we heard stories of women's work being valued. Women were seen and saw themselves as partners in marriage and the household. Women in these families were highly educated and earned salaries that allowed them to manage their money independently if they so chose. The ideology of the partnership of marriage does not focus solely on the togetherness of the couple, but is placed within the context of a harmonious extended family. In the West, the move is from an ideology of marriage as an equal partnership to the growing importance of independence in relationships. Unlike the discussions about equality and equity, and partnership and individuation, among couples in the West (Nyman, 2003; Pahl, 2008; Vogler, 2009), in our study we heard more about the couple's place in the wider family.
This is true of nuclear and joint family households. It is important to remember that 11 of the 18 nuclear households we studied used to be part of joint family households. The conversations then are of managing and controlling money so that the couple can help parents on the one hand and adult children on the other. The story of Deepak's family is very different from that of Amar and Amrit, illustrating the difference between the ideology of male dominance and that of partnership. Deepak is in his early 30s and works with a multinational company. He belongs to a two-generation upper middle class joint family household. When Deepak began working, he used to give his whole salary to his mother. He got married seven months ago. He and his wife, who is in financial services, jointly control their money, while his parents jointly control the money of the household. Deepak and his wife have separate accounts where their individual salaries are deposited. Neither controls the other's personal spending. The separate accounts earmark separate salaries for taxation purposes. They also have a joint account where they are saving for a future home and children. Deepak and his wife discuss their future in the context of the joint family's welfare. As noted above, they jointly decided to contribute to the house that Deepak's father was buying. They also openly discuss the amount of money they need to contribute for the running of the household. Deepak's story is replicated in Preeta's story below, where Preeta and her husband jointly control their money and Preeta manages the household that includes her mother-in-law. --- Generational change in the management and control of money People often describe their present money management and control in the context of generational change. People in the 11 nuclear households that were previously joint usually begin their story with money management and control in the joint family.
Though some joint families have continued with the tradition of male control and male management, in some joint families this pattern has changed over the generations to one of joint control and female management of the joint family household. This is illustrated in Preeta's story below. --- Preeta's story Preeta, in her late 40s or early 50s, is part of an upper-middle-class three-generational joint family with an annual household income of over INR 30 lakhs. She is married to the only son of the family. Her mother-in-law lives on the first floor. Preeta, her husband and their two boys, still in school, live on the ground floor. They have all their meals together on the first floor. Preeta and her husband jointly control the money for the household and for themselves. Preeta manages the money for the household, and manages the couple's money. Preeta's mother-in-law controls and manages her own money. Preeta's management role has emerged over time as her mother-in-law has withdrawn from the role because of old age, illness and the death of her husband. When Preeta's mother-in-law (a graduate) got married, her husband, a professional, would give most of his money to his mother. He gave his wife a small amount when he wanted to. If she wanted anything above that amount, 'she had to go to her mother-in-law and ask her. Then the mother-in-law would give her money if she was in a mood to.' It was only after his mother died that Preeta's father-in-law began giving his wife the money he earned. But even then, he discussed his investments with his son, but not with his wife. Preeta's father-in-law and mother-in-law had a joint bank account, but the family only discovered it after the father-in-law's death. However, her mother-in-law at that stage was placing money in fixed deposits for herself or together with her daughters. Preeta says her father-in-law felt 'it was very important to be a good son. He forgot it was important to be a good husband as well.'
Unlike her mother-in-law, Preeta has had no problem with personal spending money. Preeta's husband gave Preeta the money to give to his mother. Preeta has access to and transacts via the joint account she and her husband have together. She says she knows 'absolutely what my husband has.' She keeps herself well informed so that she can be part of the decision making about money. The key to this generational change is neither income nor education, for both Preeta and her mother-in-law had a BA degree and were part of high-income households. Preeta says the shift happened because her husband thinks it is equally important to be a good son and a good husband. --- Conclusion Our study of money in urban Indian middle-income patrilineal households has added two additional ways of examining money management and control: the family context and the generational dimensions of money. These two factors are important for the cross-cultural study of families and particularly for extended family households. The family context of money recognizes that across cultures, the couple is not an isolated economic unit. Men controlled the money in nearly half the households we studied. In our joint family households in peri-urban Delhi and Dharamshala, male control was accompanied by male management, where the woman received money by 'irregular dole'. The male control of money was found in households with the ideology of male dominance, with the woman's place being in the home. This ideology was found across a broad spectrum of household income and actively prevented educated women from paid work. Patterns of joint and independent money control and management were found in a smaller proportion of our households. This pattern is most often found in higher income households where the women are in paid work and earn an independent income. It mimics the demographics for independent management and control in the West.
The difference is that the ideology behind a couple's joint control and management of their money is the welfare of the wider family, rather than only signifying the couple's togetherness, independence and/or equality. A broader framework of money that includes the family and the generational context of money will help us understand the control and management of money across cultures. It will also connect the literature on money management and control with that on the transnational family, where money flows across generations and borders. Once the frameworks are in place, it will be possible to undertake generalisable studies of the management and control of money in Asia, the Pacific and Africa. We will then be able to examine patterns of money management and control against the role of ideology, household income, women's education, paid work, active bank accounts, and information about household money. We may find that the meanings of jointness and separateness in relationships are different within and between cultures. These studies will help us understand the relative influence and demographic spread of ideologies and world views, thus placing Western literature in a more global context.
Studies of money management and control will have more cross-cultural relevance if the family context of money across generations is taken into account. The study of money management and control in middle-income nuclear and joint family households in urban India illustrates the importance of examining money flows within the wider family context, because there is a two-way flow of money beyond the married couple: between parents and adult children, siblings and other members of the extended family. In the three- or four-generational joint family, control and management at the household level is not necessarily duplicated for the constituent couples. We draw on open-ended interviews of 40 persons from 27 urban middle-income households in North India, conducted between November 2007 and January 2008, to show that the male control of money is the dominant pattern. This pattern is linked to the ideology of male dominance that is found among the middle, lower-middle and struggling households, particularly non-metropolitan ones. The upper-middle-class households, predominantly metropolitan, show a pattern of joint or independent control. The focus is on the couple's money decisions within the context of the wider family.
Background In Brazil, an increase in life expectancy and a decrease in the fertility rate have led to significant population aging. In South America, the proportion of older people is increasing at a more rapid rate than in most developed countries [1,2]. Aging is a complex phenomenon that requires increasing numbers of multidisciplinary studies. The term "active aging", which was adopted by the World Health Organization (WHO), involves optimizing the opportunities for health, participation and security to improve the quality of life (QOL) as individuals age [3]. The challenge for aging studies is to understand the conditions associated with aging as a positive process and old age as a stage of life in which health, well-being, pleasure and QOL can be increased [4][5][6]. The QOL of older adults could be good, or at least preserved, provided they have autonomy, independence and good physical health and provided they fulfill social roles, remain active and enjoy a sense of personal meaning [7]. Epidemiological population-based studies are important for identifying the determinants and etiological factors associated with aging. To investigate the determinants of aging, questions must be answered using longitudinal surveys [8]. Longitudinal studies specifically designed to assess health, QOL and associated risk factors are not abundant in the literature, particularly those performed in underdeveloped countries, in which poverty and a low educational level might lead to a different set of variables that affect the aging process [9]. In Brazil, a country that is rapidly aging and that suffers from large inequalities, the study of QOL among aged people is important for future health planning. This study sought to examine the association between QOL, gender and physical and psychosocial health among older Brazilian community-dwelling adults, with the aim of identifying potential factors associated with better QOL.
--- Methods The Aging, Gender and Quality of Life (AGEQOL) study is an observational cohort study of a community-dwelling population aged 60 years and older. The sample is representative of the city of Sete Lagoas in the state of Minas Gerais, Brazil, which has a population of approximately 21,000 older adults (10.2% of the population) [10]. This city is divided into 17 administrative regions, one district and four rural areas [11]. --- Sample A complex sampling design was adopted for this study and consisted of a combination of probabilistic sampling methods for selecting a representative sample of the population [12]. For this sampling, the following two calculations were performed: an estimation of the number of older adults and an estimation of the number of households to be visited. The sample size calculation was performed to compare genders by considering the prevalence of functional impairment in instrumental activities for males (86.6%) and females (72.9%) [13]. The estimated error was up to 5%, with a power of 80% at a 95% confidence interval (95% CI), considering a design effect of two. An estimated additional 20% was added to the sample size to compensate for refusals. The samples from each group (men and women) were stratified by age in relation to the population and were corrected based on the probability of dying. Of the total potential participants living in the selected dwellings, 25 (1.2%) were excluded because they could not answer the questionnaire or because of cognitive impairment/dementia or difficulty speaking. One hundred and twenty-five subjects (5.8%) refused to participate in the study, and 100 (4.8%) could not be located or had died. The final sample consisted of 2,052 individuals, of whom 59.7% were female. The sampling process was conducted in two stages. The census tracts were first selected, and the households within each sector were then selected [10].
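The sample size reasoning above can be sketched numerically. The snippet below is a minimal illustration, not the authors' actual calculation: it applies the standard two-proportion formula to the reported prevalences (86.6% vs 72.9%) at alpha = 0.05 and 80% power, then inflates the result by the stated design effect of two and the 20% allowance for refusals. The function and variable names are ours.

```python
from math import ceil, sqrt
from statistics import NormalDist  # stdlib normal quantiles

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Sample size per group for comparing two proportions (normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided alpha
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

base = n_per_group(0.866, 0.729)    # reported prevalences for men vs women
design = base * 2                   # design effect of two (complex sampling)
final = ceil(design * 1.2)          # +20% for anticipated refusals
print(base, design, final)
```

Under these inputs the formula gives roughly 134 participants per group before adjustment; the study's final sample (2,052) is much larger, reflecting the age stratification and two-stage household design described in the text.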
In each household, all residents aged 60 years or older were interviewed, regardless of marital status or kinship. --- Data collection A pilot study including 107 older adults (approximately 10% of the sample) was conducted prior to data collection. All of the instruments were validated for Portuguese in Brazil, and the test/retest method was used to assess reliability and concordance. Coefficients greater than 0.80 were obtained (p < 0.001), including a weighted kappa of 0.81 (95% CI 0.71 to 0.91) and an adjusted kappa of 0.86. The data collection was conducted in the homes of the older adults between January and July 2012 and involved household interviews and examinations conducted by three examiners and three annotators. All persons aged 60+ years in the selected households were informed of the study and were asked to sign an informed consent form that had been previously approved by the Ethical Committee of the Federal University of Minas Gerais. The interviews lasted 40 to 60 minutes. At the end of the interviews, each subject received guidance regarding health care and activity options as well as the personal contact information of the researcher responsible for the questionnaire. --- Measures The socioeconomic and demographic data included age, gender, marital status, income categorized by the median value, years of education, residence and occupation. Most independent variables were dichotomized to enhance the interpretability of the logistic regression coefficients. Physical activity and social participation were measured using a single question with a dichotomous answer (yes or no). The health-related component included self-reported health conditions, which were assessed using a Likert scale, and access to and utilization of health services. For this study, the categories were grouped into poor (very poor and poor), regular and good (good and very good).
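Test/retest agreement of the kind reported above is commonly quantified with Cohen's kappa, which corrects raw agreement for the agreement expected by chance. The snippet below is an illustrative pure-Python computation of the unweighted statistic on a made-up 2x2 test/retest table; it is not the study's data, nor the weighted variant the authors used.

```python
def cohens_kappa(table):
    """Cohen's kappa for a square agreement table (rows: test, cols: retest)."""
    n = sum(sum(row) for row in table)
    # Observed agreement: proportion of ratings on the diagonal.
    p_o = sum(table[i][i] for i in range(len(table))) / n
    # Expected agreement: products of marginal proportions, summed over categories.
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    p_e = sum(r * c for r, c in zip(row_tot, col_tot)) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical table: 50 participants rated twice on a yes/no item.
table = [[20, 5],
         [10, 15]]
print(round(cohens_kappa(table), 3))  # 0.4
```

Here raw agreement is 70%, but chance agreement is 50%, so kappa is only 0.4; the study's coefficients above 0.80 indicate substantially stronger reliability.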
With regard to the chronic diseases previously reported to be most relevant to the loss of functionality in aging subjects (hypertension, diabetes, cardiovascular disease, musculoskeletal disorders and respiratory diseases), the number of diseases was recorded as 0, 1 or ≥2. Functional limitations were evaluated by combining the participants' responses to questions about six basic activities of daily living (eating, dressing and undressing, grooming, walking, getting in and out of bed, bathing and continence) [14] and seven instrumental activities (using the telephone, travel, shopping, meal preparation, housework, taking medicine and management of finances) [15]. To evaluate the cognitive status of the older people, we used the Mini Mental State Examination, which has been validated in Brazil [16] and has a cut-off of 21/22 points [17]. A score ≤21 indicated cognitive impairment. The presence or absence of a functional limitation was determined depending on the type of daily living activity and cognitive status, as adapted from Albala [18]. The subjects were classified as restricted if they had one or more limitations in basic or instrumental activities or if they had cognitive impairment. The presence of depressive symptoms was assessed using the short version of the Geriatric Depression Scale (GDS-15) [19], with a cutoff of 5/6; a score ≥6 indicated suspected depression. Family functioning was assessed using the five-item Family Adaptability, Partnership, Growth, Affection, and Resolve (APGAR) scale, which measures the satisfaction of older adults in relation to various aspects of family life [20]. The responses consist of values between 1 (hardly) and 3 (almost always), and the total score ranges from 5 to 15. A score ≥10 indicates family satisfaction [21].
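The scale cut-offs above translate directly into the dichotomous study variables. The following sketch shows one plausible coding step, with hypothetical field names; the cut-offs themselves (MMSE ≤21, GDS-15 ≥6, APGAR ≥10, any basic/instrumental limitation) come from the text, while everything else is our illustration.

```python
def derive_flags(mmse, gds15, apgar, adl_limits, iadl_limits):
    """Return the dichotomous indicators described in the Measures section
    (field names are hypothetical, cut-offs are from the text)."""
    cognitive_impairment = mmse <= 21       # MMSE cut-off 21/22
    suspected_depression = gds15 >= 6       # GDS-15 cut-off 5/6
    family_satisfaction = apgar >= 10       # APGAR total ranges from 5 to 15
    # 'Restricted' if any basic/instrumental limitation or cognitive impairment.
    restricted = adl_limits > 0 or iadl_limits > 0 or cognitive_impairment
    return {
        "cognitive_impairment": cognitive_impairment,
        "suspected_depression": suspected_depression,
        "family_satisfaction": family_satisfaction,
        "restricted": restricted,
    }

# Example participant: intact cognition, 7 depressive symptoms, one IADL limitation.
flags = derive_flags(mmse=23, gds15=7, apgar=12, adl_limits=0, iadl_limits=1)
print(flags)
```

A participant like this would count as restricted (one instrumental limitation) and as having suspected depression, but not as cognitively impaired.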
--- QOL We used the World Health Organization Quality of Life Assessment-Brief Instrument (WHOQOL-BREF) [22] and the World Health Organization Quality of Life Instrument-Older Adults Module (WHOQOL-Old) to evaluate QOL [23]. The first instrument is composed of 24 facets that are grouped into four domains focusing on physical, psychological, social and environmental aspects. There is no total score for this instrument, and each item contains five Likert response options that are recorded as scores of 1-5. The WHOQOL-Old module consists of 24 items that are divided into the following six domains: sensory abilities (SAB); autonomy (AUT); past, present and future activities (PPF); social participation (SOP); death and dying (DAD); and intimacy (INT). The scores of all domains are combined to produce an overall score for QOL in older adults, with higher scores indicating good QOL. The instruments were previously validated by Fleck et al. [24,25] and showed good reliability and validity in the assessment of QOL of Brazilian older adults (Cronbach's alpha ranged from 0.7 to 0.8 for the WHOQOL-BREF and from 0.7 to 0.9 for the WHOQOL-Old). --- Statistical analysis SPSS software (SPSS Institute, Chicago, IL, USA) version 19.0 was used for the analysis, which included χ² tests and ordinal logistic regression. K-means cluster analysis was used to obtain three groups based on the distances between the mean scores of the four WHOQOL-BREF domains and the mean total WHOQOL-Old score (Figure 1). The F test was used to analyze the differences and characterize the groups, with a significance level of 5%. This type of analysis is an analytical statistical tool that is used to define mutually exclusive, meaningful subgroups based on the similarities among individuals, without prior knowledge of the allocation of individuals to groups.
In cases in which the grouping of the data is successful, the groups are internally homogeneous but have high external heterogeneity [26]. Canonical discriminant analysis, described by two functions, was subsequently used to validate the cluster analysis. The objective of the discrimination is to maximize the between-group variance relative to the within-group variance and to verify the efficiency of the overall correct classification of the model [26]. The QOL level among the clusters was adapted from Oliveira et al. [27]; for all the WHOQOL domains, there was a group with good QOL scores, a group with intermediate QOL scores and a group with worse QOL scores. Ordinal logistic regression was used to test the association between QOL and physical and psychosocial health after controlling for age and socioeconomic status. All analyses were performed separately for each gender. In this study, we applied the Polytomous Universal Model (PLUM), which incorporates the ordinal nature of the dependent variable in the analysis; thus, a proportional-odds logistic regression model with a logit link [28] was fitted. The odds between the categories of the dependent variable were compared by calculating the crude and adjusted odds ratios (OR), and tests evaluating the homogeneity of slopes and multicollinearity were conducted using Pearson's adjustment to analyze the validity of the model. To ascertain the possible interference of a small number of observations, we used residual analysis for ordinal data, as proposed by McCullagh [29]. All of these tests showed that the model satisfied all of the assumptions, and the effect of the complex sample design was considered in all of the analyses. --- Results The age of the total sample at baseline ranged from 60 to 106 years, and the mean age of all participants at baseline was 70.89 ± 8.14 years (71.03 ± 8.35 for women and 70.69 ± 7.83 for men).
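To make the clustering step concrete, the sketch below runs a minimal pure-Python k-means on synthetic five-dimensional QOL scores (four WHOQOL-BREF domain means plus the WHOQOL-Old total, all invented) and then orders the three resulting clusters by their centroid means to obtain poor/fair/good labels. It illustrates the general technique, not the study's SPSS analysis, and all data and names here are ours.

```python
import random

def kmeans(points, k=3, iters=50):
    """Lloyd's k-means with a deterministic spread-based initialisation."""
    # Initialise centroids with the lowest-, median- and highest-scoring participants.
    ranked = sorted(points, key=lambda p: sum(p) / len(p))
    centroids = [ranked[0], ranked[len(ranked) // 2], ranked[-1]]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:  # assign each point to its nearest centroid
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[d.index(min(d))].append(p)
        # Recompute centroids as the per-dimension means of each cluster.
        new = [tuple(sum(col) / len(col) for col in zip(*cl)) if cl else centroids[i]
               for i, cl in enumerate(clusters)]
        if new == centroids:  # assignments stable -> converged
            break
        centroids = new
    return centroids, clusters

random.seed(42)
def person(mu):  # five noisy domain scores around a common level
    return tuple(random.gauss(mu, 5) for _ in range(5))

data = ([person(40) for _ in range(30)]    # low-scoring participants
        + [person(60) for _ in range(50)]  # mid-scoring
        + [person(80) for _ in range(30)]) # high-scoring
cents, clusters = kmeans(data)
# Order clusters by centroid mean to label them poor / fair / good QOL.
order = sorted(range(3), key=lambda i: sum(cents[i]) / 5)
labels = dict(zip(order, ["poor", "fair", "good"]))
sizes = {labels[i]: len(clusters[i]) for i in range(3)}
print(sizes)
```

With well-separated synthetic levels the three recovered groups match the generating groups almost exactly, mirroring the paper's point that a successful clustering is internally homogeneous and externally heterogeneous.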
Table 1 shows the descriptive statistics of the socioeconomic and health conditions of the participants according to gender. Thirty percent (625) of the participants were more than 74 years old, and 317 (15.4%) older adults were octogenarians. Most men (70.8%) and women (68.7%) were between 60 and 74 years old, and there was no difference in age distribution between genders. Forty-eight types of living arrangements were identified among older adults in the city under study. When taking the three groups of living arrangements that were established in this study into account, it was observed that the majority of older adults who lived alone were women (71.5%), whereas 75.5% of men lived with their partners (p < 0.001). There were no differences in the years of education between the genders; however, 10.4% of men and 8.6% of women had completed over 4 years of study (Table 1). Additionally, there were significant differences related to marital status, income, retirement and living arrangement between genders. The majority of men in the sample were married (74.5%), while 61.7% of women were single, separated or widowed. Most older adults had a low monthly income (66.1%), and this percentage was higher for females (71.5%) compared with males (58.1%) (Table 1). The self-perceived health status differed between men and women (p < 0.001). While 50.8% of men rated their health as good, most women rated their health as fair (37.8%) or good (41.8%). Only 15.9% of the older adults did not have chronic diseases; however, the percentage of women (59.6%) with more than two diseases was statistically higher (p < 0.001) than that of men (44.6%). The prevalence of cognitive impairment was 35.3%, with a slightly larger proportion of women (36.0%) than men (34.3%) reporting this condition.
In relation to depression, there was a 30.2% prevalence of depressive symptoms and a statistically significant (p < 0.001) difference between genders (23.8% for men and 34.4% for women). There was a high prevalence of functional limitations (36.7%) and a significant difference (p = 0.001) in functional limitations between men (32.6%) and women (39.6%) (Table 1). Cluster analysis (k-means) resulted in the formation of the following three groups of older adults in relation to QOL (Table 2): subjects with poor, fair and good QOL. The majority of the older adults were included in the fair QOL group (51.4%), which corresponded to the average level of scores on the WHOQOL. The group with worse QOL included 371 people (18.1%), whereas the good QOL group included 627 subjects (30.6%). The results of the test for equality of the group means were significant, indicating that the groups differed in all QOL domains. The overall correct classification of the canonical discriminant functions was 97.9%, with a correlation coefficient of 0.89. Differences were observed in all of the QOL variables, except for retirement. The socioeconomic distribution between the genders was reversed, with 47.7% of the older adults in the higher QOL group being male and 67.7% of those in the lower QOL group being female. Additionally, there was a gradient association between low QOL and worse health perception, cognitive impairment, depressive symptoms, family dysfunction and functional limitation. Most of the older adults who reported two or more chronic diseases (70.4%) were allocated to the low QOL group (Table 3). The results of the ordinal regression model, which estimates the OR of good QOL by gender, are shown in Table 4. Age, marital status, income and cognitive impairment did not remain associated with QOL in the final model. There was an education gradient for the QOL of men.
Men with 1-4 and ≥5 years of education were 2.2 and 4.2 times more likely, respectively, to have a better QOL than illiterate men. Similarly, five or more years of education was associated with good QOL in women (OR = 2.2; p < 0.001) (Table 4). Retired men had better QOL than non-retired men (OR = 2.2; 95% CI = 1.4-3.2), but this association was not observed in females. Men living in mixed arrangements (OR = 0.5; p = 0.033) and women who did not practice physical activity (OR = 0.7; p = 0.022) tended to have a poorer QOL (Table 4). As shown in Table 4, there was an increase in the OR for the association between QOL and self-rated health for both genders once the model was adjusted for demographic variables and psychosocial health. In men, fair health (OR = 3.0; 95% CI = 2.2-4.3) and, in particular, good health (OR = 5.0; 95% CI = 3.5-9.4) were associated with good QOL. Women with good and fair health were 4.2 (OR = 4.2; 95% CI = 2.8-6.2) and 3.0 (OR = 3.0; 95% CI = 2.3-4.0) times more likely to have a good QOL, respectively. For both genders, there was a robust association between QOL and all psychosocial variables, except cognitive impairment. Men without depressive symptoms and women without family dysfunction were 3.6 (OR = 3.6; 95% CI = 2.5-5.2) and 3.0 (OR = 3.0; 95% CI = 2.3-4.0) times more likely to have good QOL, respectively (Table 4). --- Discussion The physical and psychosocial health and sociodemographic variables examined in this study were evaluated using ordinal logistic regression, which resulted in the following five variables being associated with good QOL for both genders: self-rated health, depressive symptoms, years of education, chronic diseases and family dysfunction. Additionally, good QOL for men was associated with retirement, mixed living arrangements and physical activity, whereas good QOL for women was associated with physical activity; these results are similar to those of other studies [27,30,31].
These factors represent targets for policy action because they have the potential to affect the health of older individuals in the general population. A number of studies have been performed on QOL in older adults. This study is original and innovative because it used a representative sample to provide information regarding an ordinal positive relationship between QOL and self-rated health. Furthermore, our results indicate that the most important factors for a good QOL for both genders are a good health perception and a lack of depression, even when the model was adjusted for socioeconomic conditions. We observed a significant difference of 4.4% when comparing good self-rated health between the low and high QOL groups. In the ordinal regression, the men and women who reported having good health were 5.7 and 4.2 times more likely to have good QOL, respectively. Previous studies on QOL in older adults have also shown a direct relationship with self-rated health [27,32,33]. In particular, older adults who evaluated themselves as having good health tended to have good QOL [27]. The perception of health in older adults was generally positive, because most of the older adults in this sample rated their health as good (52.1%), including 58.0% of the men and 48.2% of the women. However, the percentage of poorer self-rated health was higher in women than in men. This study provides further evidence that QOL can be explained by self-rated health and its associated factors among older men and women. In the SABE study (Salud, Bienestar y Envejecimiento) in São Paulo, Brazil, 8.9% of women and 7.2% of men reported poor health. In other SABE study countries, the proportion of participants reporting good/very good health ranged from 27.9% of women (Mexico) to 69.0% of men (Uruguay) [2].
A previous study on the components of self-rated health among adults suggested that physical health (chronic diseases and functional limitations) most likely comprises the majority of an individual's perception of health status [34], and this result was observed in this study. Health perception involves an individual's evaluation of his/her body in relation to his/her feelings, including feelings regarding health and well-being, and this perception can be altered by environmental stressors and the social context [35]. For older adults, the concept of self-rated health remains stable despite significant health problems, although over time there might be a reduction in the standard of good self-rated health [36]. Self-rated health has been shown to be a reliable method for measuring health status [37] and to be a consistent predictor of mortality in older adults [38]. It is essential to consider the association between perceived health and QOL in patients, especially the dual direction of this association. We observed a very strong association between QOL and depressive symptoms, which corroborates the findings of other studies [32,33,39,40]. Thus, the odds of good QOL were 3.6 and 2.2 times higher for men and women without depressive symptoms than for those experiencing depressive symptoms, respectively. This finding could be explained by the high prevalence of depression (30.2%) in this sample; this figure reached 34.4% in women and 70.4% in women with poor QOL. These disorders are more prevalent in females, but this gender vulnerability varies with age [41]. In a study conducted in Łódź, Poland, 30.9% of older adults (56.5% females) were found to suffer from depression. According to the authors, the chances of good self-rated QOL were 9.9 (95% CI = 5.0-19.6) times higher in older adults without depression [42].
Considering the importance of the physical and psychosocial aspects of active aging and of QOL in older adults, other results of this study should be briefly discussed. In this study, an increase in the number of chronic diseases was associated with a decrease in QOL, and statistically significant gender differences were observed between chronic diseases and QOL. Most women who reported two or more diseases were classified as having poor (73.3%) or fair (62.3%) QOL (data not shown). In general, the prevalence of chronic disease among older people in Brazil is high and differs between genders [11], resulting in negative repercussions on QOL [43]. Preventive actions and the promotion of policies for controlling the effect of health conditions could result in good QOL in this population [30]. Physical activity is a protective factor for QOL and has been previously discussed in the literature [44,45]. For women, we observed an association between QOL and physical activity (OR = 0.7; 95% CI = 0.5-1.0), i.e., the odds of good QOL were 1.4 times higher for women who practiced physical activity. However, this association was imprecise, as the confidence interval included 1.0. Physical activity was measured using a single yes/no question, which is an important limitation of this study because these results assume that any level of physical activity will be associated with health. We did not observe an association between marital status and QOL, although we observed an inverse association between QOL and family dysfunction. Men and women who were satisfied with their family relationship had 1.8 and 3.0 times higher odds of good QOL, respectively. Frequent contact and visits with friends or family have been shown to motivate activity and increase self-rated QOL [46].
Additionally, we found a high percentage of individuals with poor QOL living in mixed arrangements, i.e., sharing the household with their sons and frequently with sons and grandchildren (44.6%). This situation, which is common in other Brazilian regions, is in contrast to the living arrangements in developed countries [11]. As shown in Table 4, men living in mixed arrangements had worse QOL than those living alone. In our dataset, most men who lived in mixed arrangements had functional limitations and reported more than two chronic diseases (63.3%). It is possible that men in our sample were living in mixed arrangements because they had poorer health and therefore needed daily assistance; however, these results should be interpreted with caution, as there was a low percentage of men living alone (9.7%). Sample size, stratification and corrections minimized these effects, permitting comparisons in this study. A mixed living arrangement could have a negative effect on the older population [20,47]. However, living alone presents a greater risk of loneliness and isolation, because loneliness increases as the social contacts of older individuals decrease [46]. Similar to the results of other studies [33,48,49], we found an association between QOL and education. Sete Lagoas is a Brazilian city with high life expectancy (73.9 years) and good social indicators [50]. In addition, illiteracy is high in this sample (28.2%) compared with the current national data (24%) [51]. These results are often found in most Latin American countries [2] and in some regions in Brazil, where very different educational opportunities are available for the rich and the poor. A low level of education is an important aspect to be considered when developing public policies and collective actions for older adults. In our study, the illiteracy rate was similar between genders (29.1% for men and 27.7% for women).
A previous study investigated trends in educational inequalities in old-age mortality in Norway from 1961 to 2009, and the authors observed that relative educational inequalities in old-age mortality increased for both genders [52]. The association of years of education with QOL differed between the genders. We observed an ordinal increasing effect of years of education on QOL for men, indicating that education can be a protective factor for good QOL among men. The QOL among women with 1-4 years of education was no different from that of illiterate women. Our results correspond to the baseline data reported for the AGEQOL study. However, the lack of understanding of the ways in which specific levels of education interfere in the association between socioeconomic status and QOL is the first limitation of this study. A longitudinal follow-up study of older adults would permit better comparisons of this study with others, although such comparisons might be hampered by differences in the QOL models and measures that are employed across studies. It is not yet possible to determine whether there is a temporal relationship between the studied variables. The response rate in this study could be considered high (98.8%); therefore, this study is one of the few that have been performed using a probabilistic sample of older adult community residents with an adequate number of participants to perform an ordinal logistic regression. Our results are valid and representative of the community-dwelling population without significant cognitive and/or physical deficits. In addition to the limitations of this being a cross-sectional study, it should be emphasized that the evaluation of QOL presupposes the quantification of a construct that is markedly shaped by the subjectivity of individual experiences, beliefs, expectations and perceptions [24]. In this sense, it is necessary to discuss the instruments used to measure QOL in older adults.
We used the WHOQOL-BREF and WHOQOL-Old, which were developed by the WHO, are widely reported in the scientific literature and have been validated in Brazil [24,25]. The results of this study corroborate those reported by the Brazilian WHOQOL group. For older Brazilian adults, a positive QOL includes several aspects such as activity, income, social life and family relationships, whereas a negative QOL is related to poor health, which differs between individuals [53]. WHOQOL-Old is a supplementary module for older adults and can be added to the existing WHOQOL instruments [22]. Bowling [7] compared generic QOL scales used for older adults and showed that the WHOQOL-Old was the most comprehensive instrument; its questions are based on measuring suffering, but the questionnaire is relatively long, and the Likert scale format might be tedious for subjects (although there is no evidence that this characteristic has adversely affected responses to date). Additionally, Bowling emphasized the need for a generic, truly multidimensional QOL measure with minimal respondent burden for evaluating the outcomes of health and social care in older populations [7]. The difficulties inherent in assessing QOL, which limit its inclusion in clinical practice and public health services, are therefore relevant here [54]. To minimize these limitations, a specific method of analysis was used in this study. Based on a Brazilian study [27], we used cluster analyses and canonical discriminant analyses to compile both WHOQOL instruments into a single measure of QOL. This analysis was performed to provide an ordinal variable with three internally more homogeneous groups that were distinct from each other. Additionally, we minimized the variations between the mean scores of the five dimensions of QOL that were considered.
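The two-step procedure described above (clustering the domain scores into three groups, then checking group separation with a discriminant analysis) can be sketched as follows. This is a minimal illustration only: the data are simulated, and k-means plus scikit-learn's linear discriminant analysis stand in for the clustering and canonical discriminant routines used in the original study, whose exact algorithms the text does not specify.

```python
# Illustrative sketch of the cluster + discriminant step; simulated data,
# not the AGEQOL dataset. KMeans and LinearDiscriminantAnalysis are stand-ins
# for the (unspecified) routines used in the original analysis.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(42)
# Simulated subject-level means for the five QOL dimensions considered
scores = rng.normal(loc=[60, 55, 65, 50, 58], scale=8, size=(300, 5))
X = StandardScaler().fit_transform(scores)

# Step 1: partition subjects into three internally homogeneous QOL groups
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Step 2: discriminant analysis to check how well the five dimensions
# separate the groups (proportion of subjects correctly re-classified)
lda = LinearDiscriminantAnalysis().fit(X, clusters)
correct = (lda.predict(X) == clusters).mean()
print(f"correctly classified: {correct:.1%}")
```

The re-classification rate plays the role of the study's 97.9% correct-classification figure: a high value indicates that the three-level ordinal QOL variable reproduces the structure of the five underlying domain scores.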
We found a high percentage of correct classification (97.9%) and a high correlation coefficient (0.89), which indicated the likelihood that we had constructed a good measure of QOL for older adults in this sample. In future studies, we suggest replicating this statistical model, considering gender and age stratification and including other independent variables concerning nutrition and lifestyle. Adaptation and resilience might also play a role in maintaining good QOL [55]. Despite these limitations, this study confirmed that the QOL of older adults differed between the three clusters that were formed, with a good QOL being strongly associated with good self-rated health, the absence of depressive symptoms, and family satisfaction. Overall, the results demonstrate that active aging in Sete Lagoas, Brazil, does not occur evenly across genders. Better healthcare requires the inclusion of such differences as part of the comprehensive evaluation of older adults [56]. The discussions of aging in the different genders in relation to living conditions and perceived health that are presented in this study need to be further explored, as there are particularities of each group that may have been missed during routine analysis. We believe that this study may contribute to the formulation of new public health and social care policies for older adults in the medium and long term. Older adults will benefit from interdisciplinary monitoring that focuses on promoting health, improving QOL and active aging. --- Conclusions We conclude that there are gender differences related to better QOL in this sample. Women with good physical and psychosocial health are more likely to have a better QOL. For men, the best QOL was associated with high socioeconomic conditions and good physical and psychosocial health.
We hope that our study contributes to future discussions on the most important predictors for assessing QOL in older adults and on long-term changes in the perception of QOL in this population. --- Competing interests The authors declare that they have no competing interests. --- Authors' contributions ACVC conducted all data analyses and drafted the manuscript. AMDV and EFF contributed to the conception and design of the study. CA participated in the interpretation and discussion of the data and made critical revisions to the manuscript. All authors read and approved the final manuscript.
Background: In Brazil, a rapidly aging country suffering from large inequalities, studying the quality of life (QOL) of older people is important for future health planning. The aim of this study was to examine the associations among QOL, gender, and physical and psychosocial health in older Brazilian community-dwelling adults to identify factors that are associated with better QOL. Methods: The "Aging, Gender and Quality of Life (AGEQOL)" study, which included 2,052 respondents aged 60 or older, was conducted in Sete Lagoas, Brazil between January and July 2012. The respondents answered questions regarding their socioeconomic and demographic information, health and social situations, cognitive impairment, depressive symptoms and family satisfaction. The authors also applied the Brazilian versions of the World Health Organization Quality of Life Assessment-Brief Instrument (WHOQOL-BREF) and the World Health Organization Quality of Life Instrument-Older Adults Module (WHOQOL-Old). Ordinal logistic regression with a proportional-odds logit model was used to test the association between QOL and physical and psychosocial health according to age and socioeconomic status. Results: Older adults of both genders with five or more years of education, good self-rated health, an absence of depressive symptoms, and no family dysfunction reported better QOL. Retired men had a better QOL compared to non-retired men (OR = 2.2; 95% CI = 1.4-3.2), but this association was not observed in females. Men living in mixed arrangements (OR = 0.5; p = 0.033) and women who did not practice physical activity (OR = 0.7; p = 0.022) tended to have poorer QOL. Conclusions: We conclude that there are gender differences related to better QOL in this sample. Women with good physical and psychosocial health are more likely to have a better QOL. For men, the best QOL was associated with high socioeconomic conditions and good physical and psychosocial health.
I. INTRODUCTION Engineers play a crucial role in solving complex sustainability problems, such as climate change, resource scarcity, and social injustice [1,2]. These problems are characterized by a high degree of uncertainty, ambiguity, and conflicts of interest and are therefore often called "wicked problems" [3]. Unfortunately, most contemporary engineering education does not adequately prepare students to address wicked problems and thus to assume professional responsibility for the societal and environmental impacts of technological development [4,5]. Emotions play a vital role in engineering education that aims to prepare students to address wicked problems [6,7] and in ethically responsible engineering work [8,9]. At the same time, engineering education and practice are often described as purely rational activities [10] and there is very little research on emotions in engineering education. This study contributes to an emerging body of research on the role of emotions in engineering education. We use positioning theory [11] to explore the role of emotions in learning to address wicked problems in engineering education. More specifically, we use the concept of emotional positioning, which refers to the construction and negotiation of subject positions in and through emotion(al) discourse [12], i.e. emotional subject positions [13]. We answer the following research question: How do engineering students construct and negotiate emotional subject positions in discussions about wicked problems? II. BACKGROUND Almost all of the few existing studies on emotions in engineering education have focused on emotions as individual competencies or experiences, such as empathy [14], shame [15], and frustration [16]. However, research has also suggested that expressing emotions in social contexts may play an important role in explicating personal values [17] and ethical judgment [8,10]. 
Such explicit discussion of values has, in turn, been described as an important precondition for constructive and collaborative discussions about wicked problems [18]. There is therefore a need for research in engineering education that studies emotions in and as social interaction [19], for example from discourse analytic perspectives [20,21]. A discursive focus is particularly important for studying the role of emotions in teaching and learning processes involving controversial topics and high levels of social interaction [22], such as discussions about wicked problems. --- III. THEORETICAL FRAMEWORK Our starting point is that addressing wicked problems is an inherently social process and therefore needs to be studied in social interaction [18,21]. We explore this interaction through the lens of positioning theory [11]. Positioning theory is based on social constructionist perspectives of identity and learning as constructed and negotiated in and through interaction and, therefore, offers a suitable lens for exploring emotions as discursive phenomena [12]. Positioning theory provides a practical analytic tool to study discourse through triangulation of three units of analysis-storylines, positions, and speech acts-which are often illustrated in the form of a "positioning triangle" (Fig. 1) [23,24]. Storylines are collaboratively constructed narratives about what is going on in the interaction. These storylines make available certain positions that people can relate to in different ways. Each position is characterized by a set of rights and duties to perform certain types of speech acts but not others. Speech acts are understood as socially constructed meanings of actions of speech, but also non-verbal communication, such as intonation, pausing, body movement, facial expressions, and gestures [24,25]. In this paper, we apply the positioning triangle specifically to the analysis of emotions [12]. 
In the remainder of this paper, we therefore use Walton et al.'s [13] term "emotional subject positions" and we talk of "emotion-acts" rather than "speech-acts". We further differentiate between two forms of emotion-acts: We use the term "emotion discourse" to denote emotion-acts that express something about emotions through verbal communication, for example using words that explicitly refer to emotions, such as "happy" or "frustrated" [cf. 26]. We use the term "emotional discourse" to denote emotion-acts that express emotions through non-verbal communication, for example through verbal stress or facial expressions [cf. 27]. Fig. 1. The positioning triangle as described by Davies and Harré [24]. --- IV. METHODS We analyzed empirical material from a previous study [7] for which ten third-year engineering students were individually interviewed about how they would address the wicked problem of water shortage in Jordan. During the interviews, the students received a problem description and a set of solution alternatives. They were then challenged to discuss the problem from as many different perspectives as possible to fully appreciate not only the technical but also the social and environmental complexity of the problem. During the interviews, both the students and the interviewer expressed a range of emotions related to the problem and the task of addressing the problem. Each interview lasted for about one hour and was video-recorded and transcribed verbatim. We read through the transcripts multiple times and selected all (n=26) excerpts in which students used emotion(al) discourse in talking about engineering, engineers, and/or the wicked problem. To identify these excerpts, we used Hufnagel and Kelly's [20] description of indicators of emotional expressions, which include semantics, prosody, facial expressions, gestures, and linguistic features.
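A toy sketch of the first, semantic pass of this selection procedure is shown below. It only illustrates flagging transcript lines that contain explicit emotion words ("emotion discourse"); the word list and transcript lines are invented, and the other indicators (prosody, facial expressions, gestures) cannot be detected from text alone and were coded manually in the study.

```python
# Toy illustration of flagging explicit emotion words in transcript lines.
# The word list and transcript are invented for demonstration purposes.
import re

EMOTION_WORDS = {"happy", "frustrated", "coldly", "warmly", "afraid", "angry"}

transcript = [
    "I would approach the problem coldly, just optimize.",
    "We need to compare the cost of each pipeline option.",
    "Honestly it makes me frustrated that there is no good answer.",
]

def has_emotion_discourse(line: str) -> bool:
    """Return True if the line contains an explicit emotion word."""
    tokens = set(re.findall(r"[a-z]+", line.lower()))
    return bool(tokens & EMOTION_WORDS)

# Keep only excerpts flagged for closer positioning analysis
flagged = [line for line in transcript if has_emotion_discourse(line)]
print(len(flagged))
```

In the study itself this kind of keyword pass could at most pre-filter candidate excerpts; the final selection and the analysis of emotional (non-verbal) discourse required human interpretation of the video recordings.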
In analyzing the selected excerpts, we used storylines as the primary unit of analysis because they provide the necessary narrative context within which positions and emotion-acts can be understood and described [24,28]. Thus, we first formulated preliminary descriptions of emotion-related storylines. Based on these descriptions, we then developed preliminary descriptions of the emotional subject positions in each storyline and of the emotion-acts through which the suggested storylines and positions were constructed and negotiated. If necessary, we divided excerpts into subsections with different storylines and analyzed each subsection individually. In an iterative process, we refined the descriptions by constantly comparing and triangulating across the three units of analysis [23]. --- V. RESULTS The results provide illustrative examples of how the dominant discourse of rationality is reconstructed, and thus perpetuated, in engineering education. The results also provide examples of how counter-discourses are used to construct emotions as important for engineers and engineering. In this section, we first illustrate the in-depth analysis with one empirical extract (EXTRACT 1, TABLE II). We then describe the overall results in terms of storylines, subject positions, and emotion-acts in dominant discourses (TABLE III). EXTRACT 1 is taken from the end of one of the interviews. Shortly before the extract, the interviewer had asked the student to discuss the problem from the perspectives of, first, a professional engineer and, second, a local politician. The extract contains three different (but closely connected) storylines, and we have therefore divided it into three parts (lines 1-2, 3-8 and 9-15) and analyzed each part separately. In each of the three storylines, at least one emotional subject position was made available (TABLE II). Below, we explain how emotion-acts contribute to constructing these storylines and subject positions.
In lines 1-2, the words "coldly" and "warmly" are clear examples of emotion discourse. These words construct an opposition between emotional and rational approaches to problem solving in the storyline. This opposition is strengthened through the use of emotional discourse: both words are spoken with verbal stress, which contributes to constructing them as belonging to a pair of opposites. In lines 3-8, the student's use of emotion discourse constructs a focus on rationality and efficiency. For example, the expression "exactly what I was thinking" focuses the discussion on cognition and the expressions "cold" and "left, right, no matter, no matter, reduce lives [lost], tap, tap, tap" construct a focus on rationality and efficiency. Again, emotional discourse strengthens the storyline: The words "coldly" and "exactly" are spoken with verbal stress, which, respectively, constructs rationality and precision as important-and which thus strengthens the focus of the storyline on rationality. Further, the expression that starts with "left, right,..." is spoken in a rapid voice, which strengthens the focus on efficiency. In lines 9-15, the student uses a lot of explicit emotion discourse to construct emotions as something that should be avoided in problem solving. They state that "I don't want to be influenced [by emotions]", "people who don't do that [turn a deaf ear and look away], they become engrossed by (...) every little concern", and "I'll take some of my coldness (...) to solve something." The focus is on actions ("solve", "did") and outcomes ("happened") rather than emotions: the student wants to solve and do. At the same time, the expression "turns a deaf ear and looks away" carries negative connotations and thus constructs complete emotionlessness as undesirable. Similarly, the use of emotion(al) discourse in "a terrible situation" constructs empathy as generally important-if one is able to bracket this emotion during problem solving. 
The student constructs their own position as someone who is able to consciously choose between being empathic (i.e. recognizing that people can be in terrible situations) or rational ("cold"). In EXTRACT 1 as a whole, the student thus constructs an overarching storyline according to which engineers are competent problem solvers who solve problems rationally and efficiently rather than allowing themselves to be influenced by emotions. The student positions the ideal engineer as a highly intelligent, rational problem solver who wants to do the best for society from a utilitarian perspective. To be able to solve problems, the ideal engineer needs to bracket both their own and others' emotions and concentrate on identifying the most efficient solution for achieving a predefined aim, such as minimizing the risk of losing human lives. In contrast to the ideal engineer, the student positions "others" (presumably non-engineers) as reasonably intelligent, but prone to becoming overly emotional, which reduces their ability to solve problems. Finally, the student positions themselves by combining aspects of the two prior positions: as someone who is able to switch between acting as an engineer who is able to rationally solve problems and as an empathic and emotional human being. Thus, the student simultaneously draws on the dominant discourse (engineering as purely rational) and a counter-discourse (emotions, in this case empathy, are important for engineering work). --- VI. DISCUSSION In this Work-in-Progress research paper, we have presented results from a pilot study that aimed to explore the role of emotions when engineering students discuss wicked problems. It should be noted that the results are preliminary and that the analysis does not cover all types of storylines and positioning that can be expected to be present in the data. Most importantly, the analysis has only focused on emotional storylines related to the student's positioning of themselves, engineers, and "others".
An exhaustive treatment of the empirical material should include an analysis of how the interviewer is positioned [29]. For example, in lines 1-2, the interviewer could be said to be positioned as someone who should tell the student whether they should approach the given problem emotionally or rationally. Transferred to an engineering education context, such a positioning could imply that students expect instructors to specify what emotion-approach students should use for a given problem, in much the same way as instructors are often expected to specify the algorithm that students should use to solve problems in engineering education [30]. An interesting result is that students often draw on several conflicting discourses: On the one hand, they construct the ideal engineer in a way that mirrors powerful cultural stereotypes of engineers as emotionless and, sometimes, excessively rational, much like the cartoon character Dilbert or the Star Trek character Mr. Spock. This image of the ideal engineer also matches previous descriptions in the literature, according to which engineering is often described as purely rational [8,10]. On the other hand, the students seem to perceive these stereotypes as problematic: several students carefully position themselves as not quite like this typical engineer. Instead of positioning themselves as rational beings (i.e., someone who is always rational), they position themselves as able to choose a rational approach in order to solve a problem, but also as able to choose an emotional, empathetic approach. This double positioning is particularly clear in EXTRACT 1, where the student explicitly positions themselves as someone who is able to consciously switch between rational and emotional approaches to problem solving. As far as we know, this more nuanced emotional positioning of engineering students has not yet been reported in the literature.
Another interesting conclusion from the results is that the students in this study talked about emotions in a rather unnuanced way. They talked about emotions as if all emotions were the same and as if they all had the same impact on problem solving. Emotions were also described in a dualistic manner, as something that is either switched on or off and that can be consciously controlled. This unnuanced understanding of emotions is in stark contrast to how emotions are described in the educational research literature [31]. However, the analysis also suggests that at least some students have an intuitive understanding that emotions may be important for some aspects of engineering work, such as managing others' emotions (EXTRACTS 6, 7), deciding between different solution approaches (EXTRACT 8), encouraging professional responsibility (EXTRACT 9), and strengthening personal motivation to do good and solve problems (EXTRACTS 11, 12). These results strengthen descriptions in previous literature according to which emotions are important for addressing sustainability problems [6,7] and ethically responsible engineering work [8,9]. Some tentative implications for practice can be drawn from the results presented in this paper. First, engineering students should receive explicit teaching on the role of emotions in problem solving to allow them to develop a more nuanced understanding of emotions. To do so, engineering educators could build on the intuitive understandings that some students have of situations in which emotions are important for engineering work. Second, engineering educators should help students to develop their ability to identify and apply an appropriate emotion-approach to a given problem; students need to learn to take responsibility for how they use and communicate emotions in engineering problem solving.
Third, engineering educators should involve students in discussions about common stereotypes of what an ideal engineer is; if students have a false (and slightly negative) image of the ideal engineer as someone who is unemotional, they may feel alienated from engineering and even choose to not complete their studies and/or not work as engineers after graduation. In future research, we want to explore engineering students' emotional positioning in group discussions. Such an approach is particularly important in analyzing students' positioning in discussions about sustainability problems; addressing such problems requires discussion and collaboration among multiple stakeholders and is thus an inherently social process that should be studied in social interaction. We also expect that studying positioning in group discussions makes it possible to explore how multiple, competing storylines are constructed and negotiated, and what kinds of storylines become dominant or inferior [32].
This Work-in-Progress research paper describes the results from a pilot study that aims to explore the role of emotions in engineering students' discussions about a wicked sustainability problem, i.e. a problem that is characterized by a high degree of uncertainty and ambiguity and for which it is not possible to develop a perfect solution. There is strong evidence from educational research that emotions are important for learning at all levels of education and particularly in education related to sustainability and wicked problems. At the same time, dominant discourses and stereotypes in engineering and engineering education construct engineering as purely rational and unemotional. In this study, we explore how engineering students re-construct, but also challenge, this dominant discourse in interviews about a wicked problem. We use discourse analytic tools from positioning theory to analyze how the students construct and negotiate emotional subject positions for themselves and others. The results provide illustrative examples of how emotional positioning can strengthen and/or challenge the dominant discourse: examples from the dominant discourse illustrate how students position emotions as irrelevant or even detrimental for engineering work, while examples from the counter-discourse illustrate how students sometimes construct emotions as part of what it means to be an engineer and as important for engineering work.
Introduction Respect for teachers varies worldwide, with some countries like Japan holding teachers in high regard for their contribution to students' integrity and achievements, which might result in teachers' job satisfaction [1]. However, in many countries, including the United States [2], China [3], and South Korea [4], despite cultural respect for this profession, teachers have experienced incidents of teacher-directed violence [5] or teacher victimization [6,7], encompassing physical, social, verbal, and cyber violence, sexual harassment, and personal property offenses [8]. Teacher-directed violence or teacher victimization is a relatively new research field that has recently received considerable attention, especially since national surveys revealed that in some countries, e.g., the USA, the majority of teachers had experienced some form of victimization at school, including verbal harassment, theft, damage to property, or physical abuse [2]. Many recent studies have reported that teacher victimization is related to adverse outcomes, e.g., lower job satisfaction and reduced school connectedness [7,9], which might eventually affect the school climate [10], student achievements, and teachers' life satisfaction [11]. However, the links between different forms of teacher victimization by students, their parents and school staff, and teachers' life satisfaction are still under-researched. The purpose of this study was to examine the links between various forms of teacher victimization by students, their parents, and school staff, and teachers' life satisfaction. --- Teacher Victimization Teacher victimization is a multifaceted phenomenon [12][13][14][15] that refers to situations where teachers experience various forms of mistreatment, harassment, or aggression in the workplace, coming from various sources, including students, parents, colleagues, or administrators [3,[16][17][18][19].
Teachers may be victimized by student misbehavior, including verbal abuse, disrespect, bullying, or physical aggression [3,18]. Teachers may also face conflicts with parents, including confrontations, accusations, or disrespectful behavior, arising from misunderstandings, academic concerns, or disagreements about teaching methods [19,20]. Interactions with colleagues involving bullying, undermining behavior, or conflicts can also contribute to teacher victimization [4,21,22]. Finally, teachers may feel victimized by administrative decisions or actions, and by a lack of communication or appreciation that is perceived as unfair or unsupportive [23]. On the whole, teacher victimization can take the form of workplace bullying, which involves repeated mistreatment, humiliation, or intimidation, coming from colleagues, administrators, students, or their parents, and can have broader psychological effects [12,24,25]. The categorization of teacher victimization encompasses various forms of teacher-directed violence, such as physical, social, verbal, and cyber violence, sexual harassment, and personal property offenses [7,8]. The prevailing socioecological conceptual framework suggests that schools implementing positive, evidence-based strategies and fair discipline policies promote positive interactions between students and teachers [26]. Previous studies have provided significant insights on teacher perceptions of victimization and safety, school hardening strategies to increase physical safety, school programs or policies to enhance school climate, positive discipline policies, as well as teacher-student relationships [23,26]. Previous studies have also revealed that teachers' suffering more forms of violence increases the risk of suffering any future violence [27]. Besides, teachers who reported recent or multiyear victimization had lower connectedness to school and job satisfaction, and more often thought about ending their teaching careers [15,25]. 
Studies have consistently shown an association between higher levels of bullying and teacher victimization and lower levels of teacher job satisfaction [28]. Additionally, teacher victimization experiences have been correlated with lower self-reported job performance, diminished student trust, a perception of reduced safety at school, and an increased likelihood of contemplating leaving the profession [29,30]. Moreover, school-violence-related stress was found to be negatively associated with teachers' quality of life, acting through mechanisms such as coping self-efficacy and job satisfaction [31]. Numerous studies have revealed that teachers who experience mistreatment at their workplace may suffer from stress, anxiety, depression, burnout [24,32,33], and a decline in overall psychological well-being [34], so the impact of teacher victimization extends beyond the professional realm. Victimized teachers are more likely to suffer from psychological distress, impaired personal relationships, and heightened fear, all of which harm job performance and relationships with students [4,[35][36][37]. Teacher victimization has been consistently linked to adverse effects on emotional and physical well-being, job performance, and retention [11,25,30,35]. Verbal and physical aggression by students have been found to be highly correlated with teachers' emotional distress [36,[38][39][40]. Bullying experiences during teacher training have been associated with adverse outcomes, including compromised job satisfaction and a diminished general health state [41]. Addressing teacher victimization is crucial for creating a positive and supportive educational environment, as teacher-directed violence impacts school climate, and even student academic and behavioral outcomes [9,10,[42][43][44][45][46]. Exposure to violence, emotional exhaustion, and low professional achievement by teachers contribute to poor student performance in school [47,48]. 
Research has shown that teacher victimization can significantly impact student academic and behavioral outcomes as well as the schooling, recruitment, and retention of highly effective teachers [8,49,50]. Violence against teachers predicts physical and emotional effects, as well as negative outcomes in teaching-related functioning, with women reporting higher levels of physical symptoms compared to men [51]. Serious acts of violence against teachers have been found to affect their performance at school and can lead to absenteeism due to fear and safety concerns [52]. While some studies found no significant differences in stress for teachers who experienced teacher-directed violence compared to those who did not experience it [53,54], other studies revealed that teacher-directed violence significantly impacted teacher wellbeing, recruitment, and retention [3,16,18,19,55,56]. Self-blame predicted negative affect, which, in turn, predicted the majority of outcomes after experiencing violence against teachers [9,57,58]. The relationship between school-and teacher-level factors, including those related to victimization, and teacher job satisfaction has been consistently established in the literature [59][60][61]. Teachers who feel supported by the administration and work in environments where rules are consistently enforced are less likely to fall victim to teacher-directed violence [46,[62][63][64]. The lack of support from administrators has been identified as a factor that negatively impacts teachers' feelings, interpersonal challenges, and school systems and policies [2,7,23,44,65,66]. Perceived school support has been identified as having a direct effect on exposure to school violence, subjective well-being, and professional disengagement in teachers [67]. Next, urban schools have reported the highest levels of teacher-directed violence, followed by rural schools and then suburban schools [54]. 
A significant relationship has been detected between teacher-directed violence and factors such as gender and the education sector [68]. Male gender and urban settings have been associated with a higher likelihood of teacher victimization [69]. Previous studies have suggested a negative impact of teacher victimization on teachers' well-being [25]. Teacher-directed violence has consistently been associated with adverse effects on emotional and physical well-being [53]. Verbal and physical aggression by students have been found to be highly correlated with teachers' emotional distress [36], while perceived teacher stress has been directly associated with emotional and physical violent discipline, mediated by job perceptions [70,71]. Additionally, teachers' sense of disempowerment after experiencing incidents of violence was associated with turnover intentions and decisions [72]. Finally, teaching satisfaction has been found to be positively correlated with self-esteem but negatively correlated with psychological distress and teaching stress, and teachers' well-being was correlated with the belief in a just world [73]. --- Teachers' Life Satisfaction Teacher life satisfaction is affected by a variety of antecedents [35,63,[74][75][76], and understanding these factors is important for creating a positive teaching environment. Research has evidenced several factors contributing to teachers' life satisfaction. Firstly, positive and supportive relationships with colleagues contribute significantly to teacher life satisfaction [53,77]. Adequate support from school administrators, including clear communication, recognition of achievements, and fair policies, is also vital for teacher satisfaction [2]. Adequate and fair financial compensation, along with competitive benefits, plays a role in teacher satisfaction, and policies that support a healthy work-life balance, such as flexible schedules and reasonable working hours, positively impact teacher satisfaction [23]. 
Additionally, studies have revealed that teachers who have a degree of autonomy in decision-making and classroom management often report higher levels of job satisfaction [23]. Next, manageable workloads that allow for a balance between professional and personal life as well as access to continuous professional development and opportunities for career advancement contribute to higher satisfaction levels [12]. Teachers who feel that their values align with the mission and values of the school are more likely to be satisfied with their job [78]. Perceived job security and stability can also contribute to satisfaction, reducing stress related to employment concerns [74]. Most importantly, teachers who have positive relationships with their students and colleagues often experience higher job and life satisfaction [76]. Positive student-teacher relationships as well as recognition and appreciation from students, parents, colleagues, and administrators contribute to a more rewarding teaching experience and life satisfaction [77]. The literature underscores the far-reaching consequences of teacher victimization on various facets of teachers' lives [79][80][81][82], including life satisfaction. Studies have regularly shown that teacher victimization has a significantly negative effect on job satisfaction [12,17]. Victims of bullying in the teaching profession are more likely to report poor self-rated health and life satisfaction, with compromised relationships with parents, teachers, and peers partially mediating these effects [83]. Teacher victimization experiences, along with the fear of crime, have been found to have a strong direct link to job and employer satisfaction [84]. Additionally, the perception of victimization increases the probability of teachers leaving both the school and the profession [13, 15,30]. However, the relationship between victimization and overall life satisfaction is complex, with data showing mixed results [85]. 
School violence has an indirect effect on life satisfaction through school satisfaction for those who have experienced victimization [86]. Teacher victimization is highly correlated with emotional distress, and factors such as gender, a student-oriented approach, and incident characteristics predict the extent of this distress [36,87]. While teacher victimization is linked to heightened stress associated with teaching, some evidence does not support a specific link between the fear of victimization and teacher stress [88]. On the whole, victimization has a negative relationship with life satisfaction and a positive relationship with emotional difficulties, with hope and school connectedness identified as potential mediators [89]. In some research, teacher victimization has been associated with the stress faced by teachers [50]. High stress levels were positively linked to negative affect, but self-control and organizational social support were identified as factors that can contribute to life satisfaction among teachers [76]. --- Present Study Several decades ago, research evidenced that teacher victimization experiences are negatively associated with job and employer satisfaction [84]. Years later, it was found that victimization impacts multiple domains, but the data on the relationship between victimization and overall life satisfaction were mixed [85]. Some recent research found no significant differences in stress for teachers who experienced teacher-directed violence compared to those who did not experience it [53,54]. However, the majority of findings suggest that teacher victimization could be related to diminished life satisfaction and imply negative links between teacher victimization and satisfaction with life. Understanding and addressing teachers' life satisfaction, especially teacher victimization by students, their parents, and school staff, can provide insights into preventing a victimization culture at school. 
This creates a more supportive work environment for teachers, ultimately enhancing their overall life satisfaction and, subsequently, positively impacting students' achievements and well-being. Educational institutions that implement positive, evidence-based strategies and fair discipline policies promote positive interactions between students and teachers, as suggested by the socioecological framework [26]. However, it could also be assumed that teacher victimization by school staff is related to teacher victimization by students and their parents, and this premise is grounded in the organizational climate theory and social learning theory. Organizational climate theory suggests that workplace victimization can create a hostile environment, fostering negative interactions among individuals within that environment [90][91][92]. Teachers who experience victimization by school staff may develop a heightened sensitivity to aggressive behaviors, leading them to perceive and react to similar behaviors from students and their parents. Social learning theory posits that individuals learn from observing and imitating others [93], and, in the educational context, if individuals (students) witness aggressive behaviors, they may be more likely to engage in similar behaviors. Moreover, the school environment functions as a microcosm of society, and patterns of aggression and victimization may permeate various relationships within the school community. Therefore, teacher victimization by school staff could presumably impact the overall interpersonal dynamics within the school, potentially affecting the relationships between teachers and students or their parents. This cross-sectional study intended to contribute by exploring teacher victimization within this specific framework. 
Furthermore, if teachers experience victimization by school staff, this may create a negative emotional climate that permeates their interactions with students and their parents, and this negative climate, in turn, can contribute to strained relationships, further affecting teacher life satisfaction. Moreover, victimization can presumably have cascading effects on well-being, as negative experiences in one domain can spill over into other areas of life, influencing overall life satisfaction. It is therefore important to shed light on the potential mechanisms through which teacher victimization by school staff could be related to broader aspects of teachers' lives. Accordingly, this study aimed to reveal the role of teacher victimization by school staff, followed by teacher victimization by students and their parents, in teachers' life satisfaction. The following hypotheses were examined: H1. Teacher victimization by students and their parents is negatively related to teacher life satisfaction. --- H2. Teacher victimization by school staff (teachers and administrators) is negatively related to teacher life satisfaction. --- H3. Teacher victimization by school staff is related to teacher victimization by students and their parents. --- H4. Teacher victimization by students and their parents mediates the link between teacher victimization by school staff and teacher life satisfaction. --- Materials and Methods --- The Sample This study's data were taken from a more extensive study on Lithuanian teachers' victimization experiences and well-being. Participation in the study was anonymous and voluntary, and the respondents did not receive any compensation. An invitation to participate was sent to the official teacher communities, allowing all Lithuanian teachers to take part voluntarily. The questionnaire's heading introduced the purpose and the need for the study, and victimization was discussed in a few sentences, providing teachers with an introduction to the phenomenon. The data collection mode was computer-assisted, and completing the survey took about 30 min. In total, 1328 teachers completed the questionnaires on the online platform; however, only 1146 questionnaires were completed fully and correctly. Of those, 1059 participants were female (92.4%), 85 were male (7.4%), and 2 preferred not to disclose their gender. The survey sample reflects the demographics of Lithuanian teachers, based on official statistics (Official Statistics Portal: https://osp.stat.gov.lt/statistiniu-rodikliu-analize?hash=2db4b643-8a84-47ea-bde9-ee71f984b661#/, accessed on 19 January 2024). The mean age of participants was 51 years (SD = 9.29, age range from 20 to 72 years). According to the official education indicators of the Republic of Lithuania (ŠVIS: https://www.svis.smm.lt/pedagogai/, accessed on 19 January 2024), the average age of teachers in Lithuania at the time of the survey was 51.16 years. The sociodemographic characteristics of the participants at baseline are presented in Table 1. Before the data collection, the basic principles of research ethics were discussed, and the research instrument was approved by the Scientific Committee of the Lifelong Learning Laboratory at Mykolas Romeris University on 2 October 2023, under protocol no. MVGLAB-2023-01. --- Instruments To reveal teacher victimization (TV) by students, their parents, and school staff, and its links with teachers' life satisfaction, this study used several previously validated instruments: the translated Lithuanian version of the Satisfaction with Life Scale (SWLS) [94] and the translated Lithuanian version of the Multidimensional Teacher Victimization Scale [20]. The original items of both instruments were translated into Lithuanian and back-translated.
To assess teacher victimization by students' parents and school staff, we applied some additional questions constructed by the authors of this study. The Satisfaction with Life Scale (SWLS) was applied to assess teachers' life satisfaction. This scale is a 5-item instrument designed to measure global cognitive judgments of satisfaction with one's life [94]. The response pattern follows a 7-point Likert scale ranging from 1 (totally disagree) to 7 (totally agree). The SWLS has been validated in many previous studies and contexts [94,95]. The Multidimensional Teacher Victimization Scale was used to assess teachers' opinions on the forms of violence they most frequently experience from students in schools [20]. This scale encompasses various forms of violence perpetrated by students (physical, social, verbal, cyber, sexual, and property-related). Each statement follows a 5-point Likert scale ranging from 1 (never) to 5 (more than once a week). The Multidimensional Teacher Victimization Scale was initially validated in previous studies [20]. The Verbal Teacher Victimization by Parents Scale was created by the authors of this study, based on the Multidimensional Teacher Victimization Scale's verbal teacher victimization (Verbal TV) by students subscale. The scale consisted of 4 items: "Student's parent(s) laughed at my looks, dress, or other personal characteristics"; "Student's parent(s) made fun of me by calling me names"; "Student's parent(s) threatened me"; "Student's parent(s) swore at me". Each statement followed a 5-point Likert scale ranging from 1 (never) to 5 (more than once a week). To assess teacher verbal victimization by school staff, or bullying by school staff, two questions were applied: "As a teacher, I was bullied by another teacher/teachers"; "As a teacher, I was bullied by the administrative staff". Each statement followed a 5-point Likert scale ranging from 1 (never) to 5 (more than once a week). 
In the results section, we report Cronbach's α and McDonald's ω values and model fit indices for the confirmatory factor analyses (CFAs) of the instruments used in this study. --- Statistical Analysis SPSS v.26.0, AMOS v.26.0, JASP v.18, and JAMOVI v.2.2.1 software were used to analyze the data: JASP v.18 for the confirmatory factor analyses (CFAs), JAMOVI for the mediation analysis, AMOS for structural equation modeling (SEM) [96], and SPSS for the remaining analyses [97]. In SEM, model fit was evaluated based on the comparative fit index (CFI), the normed fit index (NFI), the Tucker-Lewis index (TLI), the standardized root mean square residual (SRMR), and the root mean square error of approximation (RMSEA); the χ² statistic is presented for descriptive purposes [98]. Values higher than 0.90 for CFI and TLI, and values lower than 0.08 for RMSEA and SRMR, are considered indicative of a good fit, and p-values lower than 0.05 are considered statistically significant [99,100]. --- Results In the preliminary analysis, the internal consistency and validity of the instruments used in this study were assessed and the descriptive statistics were calculated. In the main analysis, the hypotheses on the links between the study variables were tested. --- Preliminary Analysis Initially, several confirmatory factor analyses (CFAs) were performed and Cronbach's α and McDonald's ω values were calculated to examine the reliability and validity of the instruments. As can be seen in Table 2, the internal consistency (α and ω) of the instruments is good. Moreover, the results confirmed the validity of the six-factor Multidimensional Teacher Victimization Scale (including physical TV, social TV, verbal TV, cyber TV, sexual harassment, and personal property offenses) [20]. Additionally, the results revealed that a seventh factor could be added to this scale, namely, verbal TV by students' parents.
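The reliability coefficients reported in Table 2 can be reproduced directly from the raw item responses. As a minimal sketch (not the authors' code; the response matrix in the test is made up), Cronbach's α for a k-item scale is k/(k−1) · (1 − Σ item variances / variance of the total score):

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a (n_respondents, k_items) matrix of Likert responses."""
    X = np.asarray(item_scores, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)      # sample variance of each item
    total_var = X.sum(axis=1).var(ddof=1)  # sample variance of the scale total
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
```

Perfectly parallel items (every respondent giving the same answer on each item) yield α = 1, while uncorrelated items pull α toward 0, which is why α is read as a lower-bound estimate of internal consistency.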
The data distribution and descriptives of the study variables are presented in Table 3. The results revealed that the data departed from a normal distribution. The prevalence of different forms of teacher victimization by students, their parents, and school staff in the Lithuanian sample is presented in Table 4. The results revealed that 38.5 percent of teachers had experienced bullying by school staff (other teachers and administrators), while slightly fewer teachers, 33.9 percent, experienced verbal victimization by students' parents. Overall teacher victimization by students in the Lithuanian sample reached 65.8 percent, with the highest rates of verbal TV (51.0 percent) and social TV (50.8 percent) and the lowest rate of cyber TV (12.8 percent) among the TV forms. --- Main Analysis Firstly, to examine the links between different forms of teacher victimization by students, their parents, and school staff and teachers' life satisfaction, correlational analysis was performed (Table 5). Correlational analysis revealed that life satisfaction was statistically significantly negatively related to verbal TV (rho = -0.252, p < 0.001), social TV (rho = -0.248, p < 0.001), bullying by staff (rho = -0.207, p < 0.001), parental verbal TV (rho = -0.171, p < 0.001), personal property offenses (rho = -0.169, p < 0.001), sexual harassment (rho = -0.166, p < 0.001), cyber TV (rho = -0.156, p < 0.001), and physical TV (rho = -0.144, p < 0.001).
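Because the variables departed from normality, the correlations above are Spearman's rho, i.e., Pearson's correlation computed on average ranks. A small illustrative implementation (not the authors' analysis code), using average ranks so that tied ordinal Likert responses are handled correctly:

```python
import numpy as np

def rank_avg(x):
    """1-based ranks; tied values share the average of their ranks."""
    x = np.asarray(x, dtype=float)
    order = np.argsort(x, kind="mergesort")
    ranks = np.empty(len(x))
    sx = x[order]
    i = 0
    while i < len(x):
        j = i
        while j + 1 < len(x) and sx[j + 1] == sx[i]:
            j += 1                      # extend the block of tied values
        ranks[order[i:j + 1]] = (i + j) / 2 + 1
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = rank_avg(x), rank_avg(y)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))
```

Any strictly monotone relationship, however nonlinear, gives rho = ±1, which is what makes the rank correlation suitable for the skewed victimization scores here.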
Bullying by staff was significantly positively related to parental verbal TV (rho = 0.354, p < 0.001), social TV (rho = 0.334, p < 0.001), verbal TV (rho = 0.293, p < 0.001), sexual harassment (rho = 0.251, p < 0.001), physical TV (rho = 0.247, p < 0.001), personal property offenses (rho = 0.219, p < 0.001), and cyber TV (rho = 0.206, p < 0.001). To examine the hypotheses and explore various aspects of the relationships among the study variables, we conducted a structural equation modeling (SEM) analysis. Utilizing SEM offers several advantages, as it allows for the assessment of the meaningfulness and significance of the theoretical structural connections between the constructs. In this study, we employed the covariance-based structural equation modeling (CB-SEM) approach, chosen specifically because our research necessitated a comprehensive measure of goodness-of-fit at a global level. Standardized results of the model are presented in Figure 1. The findings revealed that the fit of the model was acceptable (χ² = 355.787; df = 33; CFI = 0.928; TLI = 0.902; NFI = 0.922; RMSEA = 0.092 [0.084-0.101]; SRMR = 0.043). The estimates of the model of associations between the study variables (teacher victimization by school staff, victimization by school children and their parents, and life satisfaction) are displayed in Table 6. The SEM findings suggested that teacher victimization (bullying) by school staff followed by teacher victimization by students and their parents plays a significant role in teacher life satisfaction. The mediation analysis results indicating the role of overall victimization by students are presented in Table 7.
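The mediation reported in Table 7 follows the standard regression decomposition: path a (bullying by staff → victimization by students/parents), path b (mediator → life satisfaction, controlling for bullying), the indirect effect a·b, and the direct effect c′; for ordinary least squares the in-sample total effect equals c′ + a·b exactly. A sketch on synthetic data (variable names and coefficients are illustrative, not the study's estimates, and the study itself ran the mediation in JAMOVI):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)                       # bullying by staff (illustrative predictor)
m = 0.5 * x + rng.normal(size=n)             # mediator: victimization by students/parents
y = -0.4 * m - 0.2 * x + rng.normal(size=n)  # outcome: life satisfaction

def ols(y, *preds):
    """Return OLS coefficients [intercept, slopes...] via least squares."""
    X = np.column_stack([np.ones(len(y)), *preds])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

a = ols(m, x)[1]               # path a: x -> m
_, c_prime, b = ols(y, x, m)   # direct effect c' and path b from y ~ x + m
indirect = a * b               # mediated (indirect) effect
total = ols(y, x)[1]           # total effect from the simple regression y ~ x
# OLS identity: total == c' + a*b (holds exactly in-sample)
```

The identity total = c′ + a·b is algebraic, so it holds regardless of the software used; significance testing of the indirect effect (e.g., bootstrapping) is a separate step not shown here.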
The indirect, direct, and total effects were significant, even though the R² for life satisfaction was only 0.064, and the R² for overall victimization by students was 0.155. To summarize, H1, which assumed that teacher victimization by students and their parents is negatively related to teacher life satisfaction, was confirmed. The results also confirmed H2, which presumed that teacher victimization by school staff (teachers and administrators) is negatively related to teacher life satisfaction, and H3, which stated that teacher victimization by school staff is related to teacher victimization by students and their parents. Next, the findings confirmed H4, which assumed that teacher victimization by students and their parents mediates the link between teacher victimization by school staff and teacher life satisfaction. --- Discussion This study focused on a significant yet often overlooked aspect of the educational environment: teacher victimization. Although conducted within a Lithuanian context, it offers some significant insights into the importance of addressing teacher victimization. The purpose of this study was to examine the links between various forms of teacher victimization-by students, their parents, and school staff-and teachers' life satisfaction, as well as to reveal the prevalence of teacher victimization in Lithuania. Previous surveys have revealed that the prevalence of violence against teachers varies in different countries [30,51,56,101]. This study revealed that the prevalence rates of various forms of victimization faced by teachers, including bullying by school staff, verbal victimization by students' parents, and different types of victimization by students in the Lithuanian sample are alarmingly high, with over a third of teachers experiencing bullying by colleagues and verbal victimization by students' parents and nearly two-thirds by students.
Although the rates of teacher victimization in Lithuania are lower than those reported in other countries (e.g., [2,27]), a rate of around 40% is still worryingly high, given its potential negative effects on teacher well-being, school climate, teaching quality, and overall educational outcomes; it signals a need for intervention, support, and a combined effort to foster a positive and respectful work environment within educational institutions. The findings of this study align with the literature that recognizes the multifaceted nature of workplace victimization, which can stem from multiple sources, including colleagues, superiors, and even external sources like parents [12,[14][15][16][17]49,88,102,103]. Thus, a substantial proportion of teachers in Lithuania have experienced different forms of victimization, with the highest being verbal victimization and the lowest being cyber victimization. Next, this study demonstrated that various forms of teacher victimization were significantly negatively related to life satisfaction, which also aligns with previous research [11,12,31,74]. Specifically, negative correlations were found with bullying by school staff and parental verbal victimization, as well as victimization by students: verbal and social victimization, personal property offenses, sexual harassment, and cyber and physical victimization. A clear and significant negative correlation between different forms of teacher victimization and life satisfaction suggests that experiences of victimization, whether verbal, social, physical, or cyber, could adversely affect the well-being of teachers, which was also indirectly indicated by other studies [38-40,104,105]. The stronger the victimization, particularly in forms like bullying by staff and verbal victimization, the greater the possible negative impact on life satisfaction.
Still, it is important to note that this study was cross-sectional and suggests only the links between teacher victimization and well-being, but the nature of these associations could be multifaceted and reciprocal, indicating alternative explanations for the findings. As there was a significant negative correlation between different forms of teacher victimization (such as verbal, social, bullying by staff, etc.) and life satisfaction, the findings support the assumption that teacher victimization could negatively affect their life satisfaction. However, the cross-sectional design of this study limits making definitive causal claims and indicates that this assumption, to some extent evidenced in previous research [10,34,73,106], requires validation through stronger, longitudinal designs. Furthermore, this study's finding of a negative correlation between various forms of teacher victimization and life satisfaction is consistent with the broader literature on occupational stress. Research has long established that workplace bullying and victimization have detrimental effects on an individual's psychological well-being [103,107,108]. This underscores the necessity for interventions focusing on the mental health and well-being of teachers. Additionally, the findings of this study revealed that teacher victimization or bullying by school staff (other teachers and administrators) was significantly positively related to verbal victimization by parents and various forms of victimization by students: social victimization, verbal victimization, sexual harassment, physical victimization, personal property offenses, and cyber victimization. These findings suggest a complex interplay between different victimization experiences within the school environment, as evidenced by previous research [40,104,[109][110][111][112][113][114][115][116][117][118][119][120][121][122]. 
The SEM analysis, which is valuable as it allows for the examination of complex interrelationships between variables [100], provided a more nuanced understanding of the relationships between different types of victimization and life satisfaction. The findings suggested that teacher victimization by school staff, followed by victimization by students and their parents, plays a significant role in teachers' life satisfaction. Hypothesis 1, which posited that teacher victimization by students and their parents is negatively related to teacher life satisfaction, was confirmed. The results also supported Hypothesis 2, which assumed that teacher victimization by school staff is negatively related to life satisfaction, and Hypothesis 3, which stated that teacher victimization by school staff is related to teacher victimization by students and their parents. Finally, the findings confirmed Hypothesis 4, which suggested that teacher victimization by students and their parents mediates the link between teacher victimization by school staff and teacher life satisfaction. The findings suggest a possible cascading effect where victimization by school staff is related to victimization by students and parents, further deteriorating life satisfaction. This could imply that a hostile or negative environment enabled by staff may be a contributing factor or a marker of a broader culture of victimization that also involves students and parents. These implications align with the previous studies on victimization culture [83,121,123]. However, the cross-sectional design of this study implies that the findings should be regarded with caution and need further validation. One of the critical findings from the SEM analysis is the mediating role of victimization by students and parents. 
This suggests that the effect of staff victimization on a teacher's life satisfaction may be not only direct but also indirect, operating through the additional victimization teachers experience from students and parents. In other words, teachers who are victimized by colleagues are more likely to experience victimization from students and parents, which could further damage their life satisfaction. These findings underscore the importance of non-violent communication [124] and policies in educational environments, starting from school-staff interactions, to create a supportive and compassionate school climate for the flourishing of teachers and students [10,44,45,111,113,121]. Thus, this study highlights the need for effective interventions and policies to prevent teacher victimization, which could include professional development for teachers and administrators on identifying and addressing bullying, creating supportive networks within schools, and fostering a school culture that values respect, compassion, and inclusivity, as outlined in previous research [113,125]. Therefore, this study revealed a complex network of relationships where teacher victimization in various forms is significantly and negatively associated with life satisfaction and provided a comprehensive picture of how different forms of victimization collectively relate to teachers' life satisfaction. The findings emphasize the importance of addressing teacher victimization in its various forms as a key factor in improving the quality of the work environment and the overall well-being of teachers [13][14][15][16]49,88,126]. Moreover, the findings of this study also contribute to the academic discourse on teacher victimization, which is a critical issue in educational research, as teacher victimization can have far-reaching consequences, not only affecting the psychological well-being of the teachers but also impacting the educational environment and student outcomes [18,28,45,62,64,127].
In the broader context of educational research, these findings align with the existing literature that emphasizes the importance of a safe and supportive work environment for teachers [2,63,77,113,125]. Previous studies have shown that teachers' well-being is crucial for effective teaching and positive student outcomes, and teacher victimization can lead to increased stress, burnout, and even attrition from the profession [24,32,33,41,74]. The findings from this study underscore the importance of addressing teacher victimization as a critical factor in ensuring a healthy and productive educational environment and call for a comprehensive approach that includes awareness, prevention, support, and intervention strategies to safeguard teachers' well-being and, by extension, enhance the quality of education [19,24,34,52,55,61,76,80,115]. Overall, the findings of this study indicate a need for comprehensive educational policies and practice strategies to address teacher victimization, including professional development for staff, support systems for teachers, and interventions that foster a positive school culture [8,23,63,110,112]. --- Limitations and Future Directions The results of this study provide some valuable insights into the relationship between various forms of teacher victimization-by students, their parents, and school staff-and teachers' life satisfaction, but there are several limitations. Firstly, this study lacks a stronger theoretical and methodological background. Using validated scales to assess multidimensional teacher victimization by students' parents or school staff, and controlling for additional variables (e.g., gender) that might confound the relationships, would provide more valuable insights into the links between different forms of TV and teacher life satisfaction. 
Another significant limitation is that, although this study hypothesized links between the study variables, the survey methodology does not allow causality or directionality to be established. This study identified several significant relationships, but it is crucial to investigate the causality and directionality of these relationships. Longitudinal studies or experimental designs could help uncover causal links, and generalizations based on the findings of this study should be made with caution. Moreover, longitudinal studies could provide insights into the long-term impacts of teacher victimization on life satisfaction. Furthermore, future research could explore potential antecedents of multidimensional teacher victimization by students, their parents, and school staff, such as cultural factors, personality traits, or adverse childhood experiences, as well as potential consequences of TV, such as burnout or post-traumatic stress, as attempted in previous research [24]. In addition, although these findings contribute to the global understanding of teacher victimization, they are specific to a Lithuanian sample, and caution should be exercised when generalizing the results to other cultural or educational contexts, as the unique cultural and institutional factors in Lithuania may affect the dynamics of teacher victimization differently than in other regions. Thus, it is important to consider cultural and contextual factors in interpreting these results. Comparative studies across different cultural contexts could help in understanding the universal versus context-specific elements of the phenomenon of teacher victimization by students, their parents, and school staff. Cross-cultural studies in educational settings could highlight how educational systems and cultural norms are related to the manifestation of teacher victimization.
Finally, presumably, only those teachers for whom the experience of victimization was not so pronounced or painful were willing to participate in the study. In contrast, teachers who were more sensitive to the phenomenon may have been inclined to refuse to take part in the study, so the results might not accurately reflect the real situation and may not be representative. Moreover, participants might underreport or overreport certain experiences due to social desirability or other factors, and future research could benefit from additional data sources, such as observer ratings or administrative records, to enhance the robustness of the findings. In conclusion, although this study contributes to the understanding of teacher victimization and its association with teacher life satisfaction, it underscores the need for systemic approaches to address multidimensional teacher victimization and highlights the importance of future research to promote teacher well-being and the overall climate of educational institutions.
These findings may be difficult to generalize to other cultural or educational contexts, as the unique cultural and institutional factors in Lithuania may affect the dynamics of teacher victimization differently than in other regions. Thus, it is important to consider cultural and contextual factors in interpreting these results. Comparative studies across different cultural contexts could help in understanding the universal versus context-specific elements of the phenomenon of teacher victimization by students, their parents, and school staff. Cross-cultural studies in educational settings could highlight how educational systems and cultural norms are related to the manifestation of teacher victimization. Finally, presumably, only those teachers for whom the experience of victimization was not especially pronounced or painful were willing to participate in the study. In contrast, teachers who were more sensitive to the phenomenon may have been inclined to refuse to take part, so the results might not accurately reflect the real situation and may not be representative. Moreover, participants might underreport or overreport certain experiences due to social desirability or other factors, and future research could benefit from additional data sources, such as observer ratings or administrative records, to enhance the robustness of the findings. In conclusion, although this study contributes to the understanding of teacher victimization and its association with teacher life satisfaction, it underscores the need for systemic approaches to address multidimensional teacher victimization and highlights the importance of future research to promote teacher well-being and the overall climate of educational institutions. --- Conclusions This study highlights a critical issue in the educational sector in Lithuania: the widespread victimization of teachers by various parties within the school environment and its significant negative associations with teachers' life satisfaction.
A significant portion of teachers in Lithuania experience victimization in various forms. The findings demonstrated that 38.5% of teachers have been bullied by school staff, and a slightly lower percentage (33.9%) have faced verbal victimization from students' parents. The most prevalent form of victimization is by students, affecting 65.8% of teachers, with verbal and social victimization being the most common. The findings revealed a clear and significant negative correlation between different forms of teacher victimization and life satisfaction. The stronger the victimization, particularly in forms like bullying by staff and verbal victimization by students, the lower the teachers' life satisfaction. This study indicates that bullying by staff is not only detrimental in its own right but also relates positively to other forms of victimization, such as verbal victimization by parents and multidimensional victimization by students. This interrelation suggests a complex and pervasive problem within the school environment where different forms of victimization are interconnected. This study confirmed that teacher victimization, especially by school staff, followed by victimization by students and their parents, significantly relates to teachers' life satisfaction. Moreover, teacher victimization by students and their parents mediates the relationship between teacher victimization by school staff and teacher life satisfaction. This implies that the negative impact of staff victimization on life satisfaction can be exacerbated by additional victimization from students and parents. These findings call for urgent attention and action from educational policymakers and school administrators to address and mitigate teacher victimization, thereby improving the overall well-being of educators. --- Data Availability Statement: Data will be available upon request from the corresponding author. --- Conflicts of Interest: The authors declare no conflicts of interest.
Introduction A substantial evidence base supports the population benefits and cost-effectiveness of harm reduction interventions (e.g., needle distribution, supervised drug consumption) for people who use drugs (PWUD) [1][2][3]. From an instrumental-rational perspective, such evidence should translate directly into policies that institutionalize harm reduction services as routine interventions in health systems. However, these services have been contentious, and their implementation continues to be haphazard [4][5][6][7][8]. The literature on morality policy is helpful for understanding this disconnection between evidence and haphazard implementation of harm reduction services [9,10]. Scholars in this area propose that when decision makers must reconcile conflicting public values over the legitimacy of providing health or social services to target populations, they strategically downplay instrumental support in favor of policy designed to serve rhetorical, symbolic functions [11,12]. This insight has informed the Canadian Harm Reduction Policy Project (CHARPP), a mixed-method, multiple case study drawing on four data sources (policy documents, informant interviews, media coverage, and a national public opinion survey) to analyze how policies governing harm reduction services are positioned within and across the Canadian provinces and territories. In previous work, CHARPP analyzed harm reduction policies written by governments and health authorities.
Two studies confirmed that policies were largely produced for rhetorical rather than instrumental purposes, as revealed in documents that avoided clear governance statements (e.g., timelines, funding arrangements, governmental endorsements, references to legislation), and failed to name or support specific harm reduction interventions or key international tenets of harm reduction (i.e., abstaining from substance use is not required to receive health services, stigma and discrimination are often faced by substance users, PWUD should be involved in policy making) [13,14]. A complementary CHARPP study interviewed governmental officials, health system leaders, and people with lived/living experience, confirming that Canadian policies offer weak instrumental support for harm reduction. Policy actors expressed ambivalence about the utility of formal policy and described how they adopted pragmatic strategies to support harm reduction services in morality policy environments [15]. Finally, CHARPP analyses of 17 years of Canadian newspaper coverage concluded that harm reduction was rarely portrayed negatively or from a criminal perspective. Volume of coverage tracked major events (e.g., Canada's opioid emergency, legal challenges to Vancouver's safe injection programming) but dramatically overemphasized supervised drug consumption and naloxone programs at the expense of other harm reduction services. This limited sense of 'newsworthiness' may have perpetuated a morality policy environment for harm reduction by reducing public awareness of the full range of evidence-supported harm reduction services that could benefit PWUD [16]. Public acceptability is of course a key consideration in developing policy frameworks for harm reduction services and is the focus of the present study. 
Canadian and Australian research has documented substantial public support for a variety of harm reduction services, including supervised injection programs [17][18][19][20][21][22], needle distribution [19,[23][24][25][26], and safer inhalation programs [25,27]. However, research is limited because public opinion has been described regionally within those countries, and typically for select, newsworthy harm reduction services rather than the full spectrum of evidence-supported interventions. Nationally representative surveys of US adults revealed that most respondents were either neutral toward or opposed to needle distribution and supervised injection programs [19,23,26], but no similar research has comprehensively described national and regional public opinions toward harm reduction in Canada. The literature on public views toward harm reduction is also limited because few studies have examined correlates of public support. Extant research has emphasized sociodemographic correlates, revealing that liberal political views, higher educational attainment, and higher income are positively associated with public support for harm reduction [17,19,23,25]. However, little work has tested plausible theoretical models that could inform intervention strategies to modify public opinion. To address this gap, we propose a social exposure model depicted in Fig 1 wherein four constructs influence public support for harm reduction services, drawing on theories in the morality policy, intergroup relations, addiction, and media communication literatures. Our model proposes that stigmatized attitudes toward PWUD are a proximal determinant of public views on harm reduction services. This prediction reflects a key tenet of morality policy studies [9], namely that emphasizing the ostensibly immoral, deviant behaviors of a target population (PWUD) renders evidence on intervention effectiveness irrelevant, thus calling into question the legitimacy of offering health services to them.
When opponents of harm reduction services adopt this position, they essentially view PWUD as a stigmatized outgroup, unworthy of receiving effective interventions. Thus, we hypothesize that stigmatized attitudes toward PWUD will be inversely associated with public support for harm reduction-a prediction that has been supported in two recent US studies [19,23]. Stigmatized attitudes may in turn be influenced by level of familiarity with PWUD. The intergroup relations literature has provided empirical support for the contact hypothesis, according to which greater exposure to and familiarity with outgroups facilitates empathic personal attitudes toward outgroup members [28][29][30]. Thus, we hypothesize that personal familiarity with PWUD will be inversely associated with stigmatized attitudes and positively associated with support for harm reduction. Although not previously investigated, beliefs about addiction, and in particular, disease model beliefs, may also be relevant for understanding stigma toward PWUD and public views toward harm reduction services. Harm reduction programs are contentious in part because PWUD do not need to abstain from substance use in order to receive them. This key tenet of harm reduction problematizes disease model thinking, according to which drug and alcohol dependence is a chronic relapsing brain disorder [31] that can only be mitigated by complete abstinence [32]. From this perspective, harm reduction services are problematic because they 'enable' substance use and perpetuate the disease. Thus, we hypothesize that disease model beliefs will be positively associated with stigma toward PWUD and inversely associated with support for harm reduction. Finally, cultivation theory proposes that exposure to media shapes people's views on policy responses to contentious social issues [33].
For example, exposure to violent media programming is positively associated with beliefs that one will become a victim of violence and also with support for punitive and retributive legislation [34]. Although not previously investigated, we hypothesize that exposure to media reporting on harm reduction will be inversely associated with stigmatized attitudes toward PWUD and positively associated with support for harm reduction. --- Objectives Our first objective was to describe the nature and extent of public support for harm reduction as a broad approach to substance use in Canada, and also in relation to seven specific harm reduction interventions: supervised consumption, syringe distribution, naloxone, low threshold opioid treatment, community outreach, drug checking, and safer inhalation services. Our second objective was to test our social exposure model. Specifically, we sought to answer three research questions implied by our model, including: (a) whether stigmatized attitudes toward PWUD are inversely associated with public support for harm reduction, (b) whether personal familiarity with PWUD, disease model beliefs about addiction, and exposure to media coverage on harm reduction are positively, inversely, and positively associated with support for harm reduction, respectively, and (c) whether these distal social exposure variables operate indirectly to influence public support for harm reduction, via stigmatized attitudes toward PWUD. --- Materials and methods --- Sample and procedure Participants were recruited from an online research panel (Ipsos Canadian Online Panel), and the survey methods were designed to produce generalizable estimates of public opinion toward harm reduction at both national and provincial levels using a two-phased sampling procedure.
In phase 1, randomly-drawn Canadian adult panel members were invited to participate until a quota sample of 2002 respondents matching the age and sex distributions of Canadian adults (18+ years) residing in each major region of Canada (i.e., BC, Alberta, Prairie region [Saskatchewan/Manitoba], Ontario, Quebec, Atlantic region [Nova Scotia, New Brunswick, Prince Edward Island, Newfoundland]) was obtained. In phase 2, a booster sample of 2643 respondents was recruited to oversample individual provinces, i.e., to provide representative estimates for each Canadian province; sampling proceeded until a quota sample matching the age and sex distributions of Canadian adults residing in each of the 10 Canadian provinces was recruited. National and provincial quotas within age and sex strata were based on the 2016 Canadian Census. Canadians residing in the territories (Northwest Territories; Nunavut; Yukon) were not invited to participate in either sampling phase. The final sample included 4645 adults, 18 years of age or older. Analyses of the phase 1 subsample provided nationally representative estimates; analyses of the total sample provided provincially representative estimates. In order to provide accurate parameter estimates and to avoid errors in calculating variances, survey weights were converted to normalized (relative) weights for each respondent. Relative weights were calculated by dividing the survey weight of a respondent by the mean of all survey weights [35,36]. The sum of the survey and relative weights in the nationally (n = 2002) and provincially (N = 4645) representative datasets each equaled their respective sample sizes, and as such the original sample weights were used for subsequent analyses. Sample characteristics are provided in Table 1. In both sampling phases, panel members received email invitations which included a personal identification number along with a URL link to an information letter/informed consent procedure (S1 File).
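The relative-weight conversion described above can be sketched in a few lines. This is an illustrative Python fragment, not the study's code (the authors worked in MPlus and R), and the weight values are hypothetical:

```python
# Sketch of the normalized (relative) weight calculation: each
# respondent's survey weight is divided by the mean of all survey
# weights, so the relative weights sum to the sample size n.
def normalize_weights(weights):
    mean_w = sum(weights) / len(weights)
    return [w / mean_w for w in weights]

survey_weights = [0.8, 1.2, 1.5, 0.5]   # hypothetical raw weights
relative = normalize_weights(survey_weights)
assert abs(sum(relative) - len(survey_weights)) < 1e-9  # sums to n
```

Because the relative weights sum to the sample size by construction, they leave point estimates unchanged while keeping variance calculations on the correct scale.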
Consenting participants completed the survey online at their convenience from May 31 -June 25, 2018 and had the ability to leave the survey and complete it at another time [36]. In order to maximize participation and minimize nonresponse bias, email reminders were sent approximately three days following the initial invitation, and an incentive (points allocated toward quarterly prize draws for panel members) was provided to all respondents who completed the survey. --- Measures Items and scales used in the present study were drawn from four survey modules: (1) opinions on national and provincial responses to substance use, (2) opinions on harm reduction as an approach to substance use and seven specific harm reduction interventions, (3) personal experiences with, and attitudes toward substance use and addictions, and (4) sociodemographics (S1 File). General support for harm reduction. Three survey items assessed public views toward harm reduction as an approach to substance use. These questions were preceded by a definition of harm reduction, which was neutrally framed to acknowledge supportive and opposing positions (i.e., "Harm reduction refers to public health programs that reduce the harms related to drug use, without requiring people to stop using substances. An example would be providing supervised injection sites to people who inject drugs so that they can use drugs more safely. There are lots of different opinions about harm reduction. Supporters think these programs can significantly reduce death and the transmission of disease among people who use drugs, and that these programs can bring them into contact with health and social services that could help in their recovery. Opponents argue that harm reduction programs encourage crime and drug use and should not be offered."). Respondents rated their level of personal support for harm reduction (strongly oppose, oppose, don't know/no opinion, support, strongly support, prefer not to say). 
Responses were recoded as 1 = oppose (i.e., strongly oppose or oppose), 2 = don't know/ no opinion, and 3 = support (i.e., support or strongly support); respondents who endorsed 'prefer not to say' were recoded as missing and removed from weighted parameter estimates regarding that particular question. Two questions assessed support for government action on harm reduction (i.e., "My federal [and in a separate question, provincial] government should provide more financial and other support for harm reduction services"), each followed by six responses (strongly disagree, disagree, don't know/no opinion, agree, strongly agree, prefer not to say). Responses were recoded as 1 = disagree (i.e., strongly disagree or disagree), 2 = don't know/no opinion, and 3 = support (i.e., agree or strongly agree); respondents who endorsed 'prefer not to say' were recoded as missing and removed from weighted parameter estimates regarding that particular question. Support for specific harm reduction interventions. Participants provided their views on seven harm reduction services: supervised drug consumption, syringe distribution, naloxone, low threshold opioid treatment (i.e., opioid agonist medications delivered without imposing abstinence as a condition for access), community outreach, drug checking, and safer inhalation kits. Each service was defined for respondents, followed by one item assessing support (strongly oppose, oppose, don't know/no opinion, support, strongly support, prefer not to say). Responses were recoded as 1 = oppose (i.e., strongly oppose or oppose), 2 = don't know/ no opinion, and 3 = support (i.e., support or strongly support); respondents who endorsed 'prefer not to say' were recoded as missing and removed from weighted parameter estimates regarding that particular question. Stigmatized attitudes toward PWUD. 
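A minimal sketch of this three-category recoding scheme follows (Python for illustration; the label strings are paraphrased from the response options above, and the mapping to numeric codes mirrors the text):

```python
# Collapse the 6-option response scale into oppose / don't know /
# support, treating "prefer not to say" as missing (None), which is
# then removed from weighted parameter estimates.
RECODE = {
    "strongly oppose": 1, "oppose": 1,
    "don't know/no opinion": 2,
    "support": 3, "strongly support": 3,
    "prefer not to say": None,   # recoded as missing
}

responses = ["oppose", "strongly support", "prefer not to say"]
coded = [RECODE[r] for r in responses]
# coded == [1, 3, None]
```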
Stigma was assessed using four social distance items (α = .72 in the present sample) modified from the World Psychiatric Association's Schizophrenia: Open the Door project [37]: (1) would you be afraid to talk to someone who has a substance use problem? (2) would you be upset or disturbed to be in the same room with someone who has a substance use problem? (3) would you make friends with someone who has a substance use problem? and (4) would you feel embarrassed or ashamed if your friends knew that someone in your family has a substance use problem? Each item was accompanied by a 6-point response scale (definitely not, probably not, not sure/don't know, probably, definitely, prefer not to say); respondents who endorsed 'prefer not to say' were recoded as missing for model testing. The third stigma question was reverse-coded so that higher scores were indicative of stronger stigmatized attitudes. Level of familiarity with PWUD. Respondents completed the level of familiarity (LOF) scale [38,39], modified in this study to assess how familiar respondents were with people who have substance use problems (α = .84 in the present sample). The scale includes 11 dichotomous items ranging from no familiarity (e.g., "I have never observed a person that I was aware had a substance use problem" [LOF score = 1]) to maximum familiarity (e.g., "I have a substance use problem" [LOF score = 11]), with additional items assessing moderate familiarity (e.g., "I have watched a documentary on television about substance use problems" [LOF score = 4]). Respondents indicated whether each statement was true or false for them, and an overall LOF score was assigned based on respondents' highest level of familiarity. For example, if a respondent indicated that they watched a documentary about persons with a substance use problem (LOF score = 4) and also indicated that they have a relative who has a substance use problem (LOF score = 9), that respondent would receive a LOF score of 9.
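The LOF scoring rule (the highest endorsed item determines the overall score) can be sketched as follows; this is an illustrative Python fragment, not the authors' code:

```python
# Overall LOF score = maximum item score among the statements the
# respondent marked as true (item scores run from 1 to 11 on the
# full scale; only the endorsed scores matter here).
def lof_score(endorsed_item_scores):
    """Return the highest endorsed familiarity score, or None if none endorsed."""
    return max(endorsed_item_scores) if endorsed_item_scores else None

# Example from the text: documentary item (4) plus relative-with-a-
# substance-use-problem item (9) yields an overall LOF score of 9.
assert lof_score([4, 9]) == 9
```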
Respondents who endorsed 'none of the above' were recoded as missing. Disease model beliefs. Respondents completed the 7-item disease model beliefs subscale from the Short Understanding of Substance Abuse Scales (α = .79 in the present sample) [32,40]. Items assessed agreement with views that addiction is a chronic relapsing disorder that can only be ameliorated with abstinence (e.g., "There are only two possibilities for an alcoholic or drug addict-permanent abstinence or death"; "Once a person is an alcoholic or an addict, he or she will always be an alcoholic or an addict"). Response options were recorded using a 6-point response scale (strongly disagree, disagree, don't know/no opinion, agree, strongly agree, prefer not to say); respondents who endorsed 'prefer not to say' were recoded as missing for model testing. Media exposure to harm reduction. Two survey items developed for this study assessed respondents' exposure to harm reduction via the media. Specifically, participants indicated whether they had ever seen or heard media coverage of harm reduction (yes, no), and media coverage featuring bereaved mothers who had a child die from a fatal drug overdose (yes, no). Sociodemographics. Participants' sex (male, female), age, and educational attainment (high school completion or less, technical school/college diploma, university degree(s)) were collected as part of the survey sampling procedures. In addition, single survey items asked participants to identify their political views (i.e., very liberal, mostly liberal, equally liberal and conservative, mostly conservative, very conservative, I don't have any political views, prefer not to say), annual household income (< $50,000 CDN, $50,000 to $100,000, > $100,000), and whether they lived in a rural or urban area. Objective 1. Weighted proportions were estimated to describe public support for harm reduction as an approach to substance use and for seven specific interventions.
To account for the complex survey design, standard errors and 95% confidence intervals for weighted proportions were estimated using a set of 500 bootstrap weights computed using MPlus version 8.4 [36]. The survey design stratified respondents by age and sex within each province or region; a stratification variable was created in which each respondent was placed into one of six strata based on age (18-34, 35-54, 55-100 years) and sex (males, females). The primary sampling units (PSUs) for weighted estimates for the phase 1 sample were regions (Alberta, Atlantic Provinces, British Columbia, Ontario, Quebec, Saskatchewan and Manitoba), whereas the PSUs for the total, provincially representative dataset were the ten Canadian provinces. Province was defined as a cluster variable for the provincial analyses of the entire dataset, while region was defined as a cluster variable for the national analyses using the representative Canadian subsample. These clustering and stratification variables were used to create 500 bootstrap weights for the entire (provincially representative) dataset and the nationally representative Canadian subsample and to estimate variances. Once standard errors were produced using the 500 bootstrap weights, a coefficient of variation was calculated for each weighted proportion by dividing the standard error by the weighted proportion to derive the sampling variability percentage. Using Statistics Canada criteria, parameter estimates exhibiting sampling variability of 16.5% or less were considered acceptable, while estimates with variability greater than 16.5% but no greater than 33.3% were classified as moderate and annotated with an 'interpret with caution' descriptor. All weighted estimates of population proportions had sample sizes of more than thirty individuals [41]. Objective 2. MPlus version 8.4 and R version 3.6.1 were used to evaluate the hypothesized model predicting public support for harm reduction depicted in Fig 1.
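The coefficient-of-variation screen can be sketched as below (Python for illustration). The 'unreliable' label for CVs above 33.3% is an assumption following the usual Statistics Canada convention; the text itself only describes the first two categories:

```python
# Classify a weighted proportion by its sampling variability:
# CV (%) = 100 * standard error / weighted proportion.
def classify_cv(se, proportion):
    cv = 100.0 * se / proportion
    if cv <= 16.5:
        return "acceptable"
    elif cv <= 33.3:
        return "interpret with caution"
    return "unreliable"   # assumed label for CV > 33.3%

assert classify_cv(0.05, 0.64) == "acceptable"              # CV ~ 7.8%
assert classify_cv(0.15, 0.64) == "interpret with caution"  # CV ~ 23.4%
```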
First, in order to describe the relationship between latent variables presented in Fig 1 and their indicators, a measurement model was estimated [42]. Constructs that were assessed using pre-existing scales drawn from the literature (i.e., LOF scale, disease model beliefs scale) were treated as single composite indicator scale scores, while constructs assessed using new indicator items developed for this study (i.e., media exposure to harm reduction, support for harm reduction) were treated as latent variables. Due to the borderline alpha coefficient observed for its 4-item composite scale, we also treated the stigma construct as a latent variable in our analyses and investigated the measurement structure of those items. Factor loadings and correlations between indicators and latent variables were assessed for each latent construct, followed by confirmatory factor analysis. Second, a structural equation model (SEM) was estimated to evaluate direct and indirect effects of the constructs depicted in Fig 1. Results for this model were compared to a second SEM that included four covariates (political views, income, education, and respondent sex) to produce covariate-adjusted estimates. A weighted least squares means and variance adjusted method was utilized to accommodate missing data and categorical variables in evaluating both the measurement and structural models. Maximum likelihood estimation was used to calculate the correlations among continuous variables and account for missing data. Shapiro-Wilk tests of normality were conducted on each independent variable to assess whether it was normally distributed. Model fit was evaluated using several indices (i.e., Root Mean Square Error of Approximation [RMSEA], Comparative Fit Index [CFI], Standardized Root Mean Square Residual [SRMSR], and chi-square [χ2]). The survey was fielded under the title "National Survey of Public Opinion on Harm Reduction Services and Drug Use" (study ID: Pro00080911).
--- Results --- Objective 1: Describing public support for harm reduction Canadians were generally supportive (64%) of harm reduction as an approach to substance use (Fig 2). Public support varied across different harm reduction services, with more than three quarters supporting community outreach (79%) and over 70% supporting naloxone distribution (72%) and drug checking interventions (70%) (Fig 2). Needle distribution (60%) and supervised drug consumption programs (55%) received lesser, though still majority, support (Fig 2). Low-threshold opioid agonist treatment (49%) and safer inhalation kits (44%) received the least amount of support among Canadians at the national level (Fig 2). Public views at the provincial level revealed some regional diversity. Specifically, respondents residing in the Atlantic region of Canada (New Brunswick, Nova Scotia, Newfoundland and Labrador, Prince Edward Island) and British Columbia reported the most support for harm reduction (as a general approach, and among the 7 specific services measured) compared to other provinces and regions, while central Canadians (Ontario, Quebec) tended to view harm reduction more moderately. Respondents living in the Canadian prairie provinces (Alberta, Saskatchewan, and Manitoba) reported the lowest levels of support for harm reduction compared to other regions. Level of familiarity with PWUD was assessed using a single continuous indicator variable, LOF scores. Disease model beliefs were assessed using a single continuous indicator variable, disease model subscale scores. The latent media exposure to harm reduction construct was assessed using two categorical items: whether or not respondents reported ever seeing or hearing media coverage (a) featuring harm reduction and (b) bereaved mothers who had a child die from a drug overdose. Stigmatized attitudes toward PWUD was treated as a latent construct measured using four continuous indicator variables drawn from this module of the survey.
Shapiro-Wilk tests of normality revealed that each study variable was non-normally distributed; however, estimates of kurtosis and skewness for these variables (Table 2) indicated that these data were within the acceptable non-normal cut-off points recommended by Kline [43]. Our latent outcome construct, support for harm reduction, was assessed using three continuous indicator variables: overall support for harm reduction, support for increased federal investment in harm reduction, and support for increased provincial investment in harm reduction. Correlations among continuous indicator variables along with their accompanying descriptive statistics are presented in Table 2. The average age of participants was 48.2 years (SD = 16.0). Respondents were slightly positive toward harm reduction on the response scale (M = 3.6, SD = 1.2) and also on whether the federal (M = 3.6, SD = 1.3) or provincial governments (M = 3.5, SD = 1.3) should increase financial and other supports regarding harm reduction. As shown in Table 2, variables ProvHR1 and FedHR1 were highly correlated with one another (> 0.9) while the remaining inter-item correlations were less than 0.85 [43]. We also compared median scores on our three indicators of support for harm reduction (overall support for harm reduction, support for increased federal investment in harm reduction, and support for increased provincial investment in harm reduction) across four sociodemographic covariates: annual household income, education, political views, and respondent sex. Median levels of support for harm reduction were consistent (median = 4; support) across all covariates except for political affiliation, where respondents who identified with 'very' and 'mostly' conservative political views reported lower median levels of support toward harm reduction (median = 3; don't know/no opinion) compared to those who endorsed more liberal political affiliations. Measurement model.
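The inter-item correlation screen applied here (flagging indicator pairs above Kline's 0.85 rule of thumb) can be sketched as follows. Variable names follow the text, but the correlation values are illustrative (the text reports only that the ProvHR1/FedHR1 correlation exceeded 0.9):

```python
# Flag indicator pairs whose absolute correlation exceeds the cutoff,
# scanning the upper triangle of a correlation matrix.
def flag_high_correlations(corr, names, cutoff=0.85):
    flagged = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if abs(corr[i][j]) > cutoff:
                flagged.append((names[i], names[j], corr[i][j]))
    return flagged

names = ["ProvHR1", "FedHR1", "HR1"]
corr = [[1.00, 0.92, 0.70],   # illustrative values
        [0.92, 1.00, 0.68],
        [0.70, 0.68, 1.00]]
# Flags only the ProvHR1/FedHR1 pair (r = 0.92 > 0.85).
assert flag_high_correlations(corr, names) == [("ProvHR1", "FedHR1", 0.92)]
```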
The latent constructs were correlated with each other (ps ranged from 0.05 to 0.001; standardized values ranged from 0.06 to -0.18). A confirmatory factor analysis revealed that the standardized factor loadings for the stigma toward PWUD (values ranged from 0.41 to 0.81), support for harm reduction (values ranged from 0.84 to 0.93), and exposure to media coverage on harm reduction (values ranged from 0.59 to 0.65) latent variables in the hypothesized model were acceptable to high with the exception of the third stigma item (0.41) [44]. The low factor loading of the third stigma item (would you make friends with someone who has a substance use problem?) suggests that it was a weaker indicator of the stigma construct, and it was dropped from the model [44]. Factor variance for the media exposure construct was set to 1 with the loadings freely estimated [43]. An initial confirmatory factor analysis for the hypothesized measurement model indicated poor global model fit (RMSEA = 0.083; CFI = 0.787; SRMSR = 0.052, χ2 = 798.437, p < 0.001, df = 24, N = 4645). The model was then re-specified by removing the third scale item from the stigma latent construct, given its low factor loading. A second confirmatory factor analysis indicated that the overidentified model exhibited good global fit (RMSEA = 0.026; CFI = 0.982; SRMSR = 0.016, χ2 = 70.248, p < 0.001, df = 17; N = 4645). Closer inspection of local fit also revealed that the model had good local fit, while all factor loadings were considered acceptable to high and statistically significant (p < 0.001) [43,44]. Structural model. A structural equation model, incorporating the survey weights, was fit in order to examine indirect and direct effects of level of familiarity, disease model beliefs, exposure to harm reduction media, and stigmatized attitudes toward PWUD on public support of harm reduction (Fig 3).
Each path was tested to determine whether it was nonzero and, if so, whether the valence of the observed association confirmed theoretical predictions (Fig 3). Results from this SEM were compared with those from a second SEM that included sociodemographic covariates to obtain adjusted estimates (Table 3). In both analyses, data were weighted by age and sex. The final weighted unadjusted structural model exhibited good global as well as good local fit. To estimate the indirect and total effects in the presence of missing data, 1,000 bootstrap samples were drawn. Bias-corrected significance levels for all effects did not differ from those of the original unadjusted and adjusted models estimated without bootstrapping. The final unadjusted model explained over 11% of the observed variance in stigmatized attitudes and 5% of the observed variance in public support for harm reduction, while the adjusted model explained 13% of the observed variance in stigmatized attitudes and 17% of the observed variance in public support for harm reduction (Fig 3). Table 3 presents the indirect, direct, and total effects of the study variables on stigma and support for harm reduction. Inspection of direct effects in the structural model indicated that disease model beliefs about addiction were positively associated with stigmatized attitudes toward PWUD in the unadjusted and adjusted models (β = 0.22, p < 0.001), while level of familiarity with PWUD was inversely associated with stigmatizing beliefs toward PWUD in both the unadjusted and adjusted models (β = -0.22, p < 0.001). Additionally, media exposure to harm reduction was inversely associated with stigmatized attitudes in both the unadjusted (β = -0.07, p = 0.007) and adjusted models (β = -0.08, p = 0.003).
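The bootstrap estimation of an indirect (mediated) effect can be sketched as follows. Everything here is an illustrative assumption: the data are simulated, the path coefficients are arbitrary, and a simple percentile interval stands in for the bias-corrected interval used in the study (which was fit in SEM software with survey weights). The core idea, resampling cases and recomputing the a*b product path, is the same.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Simulated mediation structure (illustrative coefficients, not the study's):
# exposure -> stigma (a path) -> support (b path), with no direct effect.
exposure = rng.normal(size=n)
stigma = -0.5 * exposure + rng.normal(size=n)
support = -0.4 * stigma + rng.normal(size=n)

def indirect_effect(idx):
    """Estimate the a*b product path on the cases selected by idx."""
    x, m, y = exposure[idx], stigma[idx], support[idx]
    a = np.polyfit(x, m, 1)[0]                      # slope of m ~ x
    X = np.column_stack([np.ones(len(idx)), x, m])  # y ~ 1 + x + m
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]     # slope of m, controlling for x
    return a * b

point = indirect_effect(np.arange(n))               # full-sample estimate
boot = [indirect_effect(rng.integers(0, n, n)) for _ in range(1000)]
lo, hi = np.percentile(boot, [2.5, 97.5])           # 95% percentile interval
```

With two negative paths, the indirect effect is positive (roughly 0.2 under these simulated coefficients), echoing how the study's negative media-to-stigma and stigma-to-support paths combine into a small positive indirect effect of media exposure on support.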
When evaluating direct effects on support for harm reduction, disease model beliefs about addiction exhibited the strongest inverse association with support for harm reduction (unadjusted β = -0.17, p < 0.001; adjusted β = -0.10, p < 0.001), compared with stigmatizing attitudes toward PWUD (unadjusted β = -0.09, p < 0.001; adjusted β = -0.06, p = 0.003). Conversely, level of familiarity with PWUD was positively associated with support for harm reduction (unadjusted β = 0.06, p < 0.001; adjusted β = 0.07, p < 0.001). Contrary to prediction, we observed no direct association between media consumption and support for harm reduction (unadjusted β = 0.03, p = 0.220; adjusted β = -0.02, p = 0.438). We also assessed whether exposure to media reporting on harm reduction was associated with support for harm reduction via an indirect pathway, i.e., via stigmatizing attitudes. Media exposure exhibited a small though statistically significant indirect effect on public support for harm reduction via stigma (unadjusted β = 0.01, p = 0.026; adjusted β = 0.004, p = 0.048).
--- Discussion
To our knowledge, this is the first national study to provide population estimates of public support for harm reduction as a broad approach to substance use, and in relation to seven specific harm reduction interventions. Results indicated that about two-thirds of Canadian adults (64%) were supportive of harm reduction as a general approach to substance use (provincial estimates = 60% to 73%). Importantly, these estimates were obtained using a neutral assessment strategy, i.e.
a question that provided a substantive definition in conjunction with popular reasons for support of and opposition to harm reduction (i.e., 'Harm reduction refers to public health programs that reduce the harms related to drug use, without requiring people to stop using substances... Supporters think these programs can significantly reduce death and the transmission of disease among people who use drugs, and that these programs can bring them into contact with health and social services that could help in their recovery. Opponents argue that harm reduction programs encourage crime and drug use and should not be offered'). Our results are consistent with previous research that similarly documented substantial public support for harm reduction in select Canadian regions [17,18,21,25,27]. The present study replicated and extended those findings using survey methods that provided both nationally and provincially representative population estimates. Previous survey research investigated public support only for specific, high-profile harm reduction services (e.g., supervised drug consumption programs) [18]. The present study addressed this limitation by assessing views on a broader range of harm reduction interventions. Our results showed that five of the seven interventions were supported by over half of Canadian adults, with the strongest support reported for outreach (79%), naloxone (72%), and drug checking (70%), followed by syringe distribution (60%) and supervised injection programs (55%). Two intervention strategies, low-threshold opioid agonist treatment and safe inhalation interventions, did not receive majority support in this study. Those results are consistent with CHARPP's analysis of Canadian newspaper reporting on harm reduction, which demonstrated that these interventions received among the lowest coverage rates over a 17-year period [16].
Taken as a whole, our finding that most Canadian adults support or strongly support harm reduction as an approach to substance use, as well as most harm reduction services, could reflect public awareness of Canada's innovative approach to services for PWUD. Canada is widely recognized as an international leader in harm reduction, beginning with early adoption of needle distribution programs in the late 1980s, the implementation of North America's first supervised drug consumption program in Vancouver in 2003, and North America's first clinical trial of prescription heroin in 2005 [45,46]. However, previous CHARPP studies documented relatively weak, rhetorical public policy frameworks governing harm reduction services produced by provincial governments and health authorities [13][14][15]. Further research is needed to explain this disconnect between inadequate policy support for harm reduction and broad support in the general population. If policy makers are insufficiently aware of such support, they may inadvertently perpetuate a rhetorical, morality-policy environment for these services. Cultivating robust knowledge-exchange opportunities between governmental and health-system decision makers and the population and public health researchers investigating determinants of attitudes toward harm reduction services could enhance policy makers' access to accurate information about public views to support the policy development process. The second objective of this study was to evaluate a social exposure model predicting public support for harm reduction, drawing on theories in the intergroup relations, addiction, and media communication literatures. Our model hypothesized that three distal variables (personal familiarity with PWUD, disease model beliefs about addiction, and exposure to media coverage on harm reduction) influence support for harm reduction directly, and also indirectly via their effects on stigmatized attitudes toward PWUD.
Overall, our results, which were adjusted for covariates typically considered in this literature (age, political affiliation, income, education), provided substantial support for the proposed model. As predicted, we observed a significant inverse association between stigmatized attitudes toward PWUD and public support for harm reduction. These results replicate US studies, which similarly reported that stigma toward PWUD appears to undermine public opinion toward harm reduction services [19,23]. Drawing on the intergroup relations literature, our results also confirmed an inverse association between personal familiarity with PWUD and stigmatized attitudes toward this outgroup, as well as a significant positive association between familiarity and support for harm reduction. These results are consistent with the contact hypothesis, according to which greater exposure to and familiarity with outgroups facilitates empathic personal attitudes [28][29][30]. One implication of these findings is that efforts to enhance support for harm reduction could focus on programs that strengthen social contact between the public and PWUD. Eversman [47] notes that "the very nature of addiction and how best to treat it divides harm reduction supporters and opponents" (p. 17), yet to our knowledge, no previous research has examined the role of disease model beliefs about addiction in relation to public support for harm reduction. Our results confirmed that greater endorsement of disease model beliefs was associated with more stigmatized attitudes toward PWUD, as well as lower support for harm reduction. These findings suggest that efforts to enhance support for harm reduction could usefully problematize certain disease model beliefs (e.g., "There are only two possibilities for an alcoholic or drug addict: permanent abstinence or death").
Finally, drawing on the media communication literature, we also predicted that exposure to media coverage of harm reduction would be positively associated with public support for these services. Contrary to our prediction, we observed no direct association between media exposure and support for harm reduction. Instead, we observed an indirect effect: greater media exposure to harm reduction was associated with less stigmatized attitudes toward PWUD, which in turn were associated with greater support for harm reduction. Those results are consistent with cultivation theory [33], according to which exposure to media shapes personal opinions on contentious social issues by altering beliefs about outgroups. Our finding that media exposure to harm reduction was inversely associated with stigmatized attitudes toward PWUD suggests that media can play an important role in promoting support for harm reduction by reducing stigma toward drug users. To that end, media gatekeepers (e.g., editors, content producers) could support efforts to promote positive public views toward harm reduction by prioritizing coverage of PWUD who access these services and experience positive life changes, thus challenging disease model beliefs about addiction.
--- Study limitations
In general, the cross-sectional research design used in this study precludes causal claims as well as assessments of directionality. Future research should attempt to replicate our theoretical model using longitudinal study designs. Given that our social exposure model is exploratory in nature, further research is also needed to replicate the specific associations observed in this study: results may change when the measures are administered to populations outside of Canada.
Another limitation of the present work is that adults living in the Canadian territories (Northwest Territories, Nunavut, and Yukon) were not included in the sample. Future research should therefore refine and replicate these methods and incorporate those populations. Finally, the constructs in our social exposure model collectively accounted for only 17% of the variance in our outcome measure of support for harm reduction. Although we adjusted for political views, income, education level, and participant sex, those results imply that additional influences on public views, not measured in the present research, may also be associated with public support for harm reduction.
--- Conclusions
Despite generally favorable opinions toward harm reduction across Canada, weak and rhetorical public policy frameworks currently govern harm reduction services [13,14]. The present study advances this area beyond past efforts to identify sociodemographic correlates of public views of these contentious services, such as political affiliation, education, or age. Our social exposure model suggests that efforts to change views on these services could focus on problematizing certain disease model beliefs (e.g., "There are only two possibilities for an alcoholic or drug addict: permanent abstinence or death", a representative item on the disease model beliefs measure used in this study) and creating opportunities to reduce social distance between PWUD, the public, and policy makers.
--- Data are available from https://dataverse.library.ualberta.ca/dataset.xhtml?persistentId=doi:10.7939/DVN/BZ7OGL.
--- Ethics statement
The study procedures and measures were approved by the University of Alberta Health Research Ethics Board. Ethics approval was obtained from the University of Alberta under the
We described public views toward harm reduction among Canadian adults and tested a social exposure model predicting support for these contentious services, drawing on theories in the morality policy, intergroup relations, addiction, and media communication literatures. A quota sample of 4645 adults (18+ years), randomly drawn from an online research panel and stratified to match age and sex distributions of adults within and across Canadian provinces, was recruited in June 2018. Participants completed survey items assessing support for harm reduction for people who use drugs (PWUD) and for seven harm reduction interventions. Additional items assessed exposure to media coverage on harm reduction, alongside scales assessing stigma toward PWUD (α = 0.72), personal familiarity with PWUD (α = 0.84), and disease model beliefs about addiction (α = 0.79). Most (64%) Canadians supported harm reduction (provincial estimates = 60% to 73%). Five of seven interventions received majority support, including outreach (79%), naloxone (72%), drug checking (70%), needle distribution (60%) and supervised drug consumption (55%). Low-threshold opioid agonist treatment and safe inhalation interventions received less support (49% and 44%). Our social exposure model, adjusted for respondent sex, household income, political views, and education, exhibited good fit and accounted for 17% of the variance in public support for harm reduction. Personal familiarity with PWUD and disease model beliefs about addiction were directly associated with support (βs = 0.07 and -0.10, respectively), and indirectly influenced public support via stigmatized attitudes toward PWUD (βs = 0.01 and -0.01, respectively). Strategies to increase support for harm reduction could problematize certain disease model beliefs (e.g., "There are only two possibilities for an alcoholic or drug addict: permanent abstinence or death") and create opportunities to reduce social distance between PWUD, the public, and policy makers.
--- INTRODUCTION
The cultural transformations that have taken place since the popularisation of the Internet and the World Wide Web (WWW) in the mid-1990s are numerous, touching on the technological, social, economic, ethical, political, environmental and aesthetic domains. These transformations often happen at the intersections of individuals and organisational structures, where, for example, the roles of users and producers have become increasingly difficult to differentiate, and the role of cultural institutions and of art in general is constantly challenged and reconsidered. At the same time, cultural spaces and practices (how and where culture takes place, is produced and is formed) have changed dramatically. All of the above are the direct result of a world changed by Information and Communication Technologies (ICTs), along with significant events and changes in the economic and political spheres of geographies around the world. The results of this transformation are full of complexity and contradiction. As a cultural product of life with and after the Internet, Internet art (from net.art to post-Internet art) symbolises the drastic changes that took place on and to the Internet. Post-Internet refers to the new processes and conceptual dialogues that arose from these social changes. It marks a critical shift from discussing the Internet as a contained entity governing merely our digital interactions to saying something more about its ubiquitous presence and the reconfiguration of all culture by the Internet (Connor 2013).
This paper aims to examine the dimension of mediation in the post-Internet condition through the post-Internet art medium, in an effort to produce a better understanding of the changing nature of life post-Internet and, importantly, to encourage researchers working at the intersections of sociotechnical and technocultural research to consider the ubiquitous medium of Internet art as a rich and useful tool for their work. In the Posthuman Glossary, Clark writes about the post-Internet: This rebirth of a condition defines a quantitative shift in the ontological treatment of digital-nondigital technological hybrids on both sides of the posthuman ambivalence. This includes interleaving with, and de-centring, difference through connections to previously out of reach global otherness on the one hand, and the use and reproduction of dominant, standardised distribution, production platforms and protocols which redefine much of the space formerly known as offline, on the other (Clark 2018). The concept of a 'condition' offers a way of understanding the historical present and a framework for exploring its elements, which in the case of this paper means the dimension of mediation. The main point of mediation in the post-Internet condition has to do with viewing the mediated experience on the same level as primary experience. Mediation in the post-Internet condition moves further than the digital cultural heritage (Zschocke et al. 2004), or the physical rendered digital through digital reproduction processes (Manovich 2001). In the post-Internet condition, the shift from analogue to digital is no longer a point of friction, while mediation through digital technologies relies not on the representation of reality but on the acceptance of mediated realities as reality. Post-Internet mediatisation processes bring together the physical, the imagined, the virtual and the hybrid (Manovich 2013).
Viewing the mediated experience on the same level as primary experience has been associated with the work of many post-Internet artists, like Parker Ito, Oliver Laric and Artie Vierkant (Quaranta 2015). Mediation post-Internet is shaped by participatory cultures within network societies (Castells 2004; Castells 2012), where socio-cultural processes operate within an overabundance of information and contribute towards a constant process of creation, distribution, usage, manipulation and integration of information in all its forms. Mediation in the post-Internet context can be understood as a complex and hybrid process of "understanding and articulating our being in, and becoming with, the technological world, our emergence and ways of intra-acting with it, as well as the acts and processes of temporarily stabilising the world into media, agents, relations, and networks" (Kember & Zylinska 2012). A key concept discussed by Kember and Zylinska is that mediation entails recognising our locatedness within media as being always already mediated. This allows for a meta-level of mediation where engagement with the world happens within conditions of mediation that can be measurable and identifiable, but can also be un-measurable and non-identifiable. The un-measurable and non-identifiable aspects of mediation in the post-Internet condition hint towards the unprecedented, unexpected, unformed and unruled products of mediation, where the networks and infrastructures of ICTs exist together with an infinite production of both human- and nonhuman-produced knowledge, communication, experience, politics and culture. Human and nonhuman actors, humans and machines, networks, algorithms and technologies co-create conditions of life in a hybrid and liquid state.
In this mediated state, the human and non-human exist in a state of mutualistic symbiotic intra-action, meaning that human and non-human actors are attached by constantly exchanging and diffracting, influencing and working inseparably (Barad 2007). To examine and understand this level of mediated life post-Internet requires a view of the Internet as more than its technical elements, systems, protocols and networks. The various processes of mediation that involve ICTs certainly have much to do with their technical elements; however, their biological elements are equally important in producing and driving these processes of mediation. Together, the biological and the technical elements are capable of generating new forms, unprecedented connections and unexpected events within what Zylinska calls 'living media' and 'biomediations' (Zylinska 2020). This shift, from ideas of connected media and media life that locate a metaphysical 'living' condition in the object's connectivity to the world via the medium, to a living condition that both exists within and drives mediatisation processes, is a key element of how mediation in the post-Internet condition can be approached and understood. Mediation post-Internet can even be described as multidimensional, and post-Internet artworks can be understood as art in the post-Internet condition rather than as technologically mediated art. Any aspect of sociocultural production affected by the Internet can be considered mediated on the basis of its mediatisation processes, like mediated sociality, mediated entertainment and mediated consumerism. Three main areas of mediatisation are discussed here as highlights and indicators of the hybrid and multifaceted character of mediation in the post-Internet condition. These are mediated publicness, mediated self and mediated trust.
--- MEDIATED PUBLICNESS
Publicness is one of the aspects of life that has been discussed over the last two decades as an increasingly mediated process. More specifically, the mediation of publicness is linked to the rise of social media and to how public engagement has been shaped by ICTs. The link between publicness and technologies has been extensively examined through the lens of the public and the media. Communities have always used media like newspapers, radio and television to create new publics, and to form new connections amongst actors/users and the public (Dayan 2001; Harrison & Barthel 2009). To the extent they could, people have always used media to create public identities for themselves, others, and groups (Baym & Boyd 2012). The scale, pervasiveness, ubiquity and connectivity of the Internet, and more specifically of social media, are what make the level of widespread publicness post-Internet unprecedented. This increased level of mediated publicness depends on practices of appropriation of both Internet technology and web content within the context of participatory cultures (Christou & Hazas 2017). The socio-cultural practices of mediated publicness are dependent on the appropriation of networked media tools, ICTs and web content. Smartphones, cameras, editing applications and software are what people use to take photos and videos to document their lives or simply to create content for Instagram, Facebook, Twitter and YouTube. Social media are where people can post their content, engage with the public, consume content and participate in online social interactions. Platforms for social news aggregation and discussion and chat software like Reddit, Discord and Twitch are where people can engage with specialised topics and form niche yet global communities.
Countless sites dedicated to online news and content aggregation, like Digg, Pocket and Fark, are where the massive everyday social activity online is curated, and where community engagement and participation based on interests and topics take place. All of the above, and much more, enable activity by mediated connection to take place as part of a new form of mediated publicness. Internet artists have been using these mediated public spaces to connect directly with global audiences without necessarily targeting art audiences. Online performances through social media are a good example of how an art experience can be designed for mediated public spaces. Amalia Ulman's scripted performances, designed entirely for circulation on Instagram and Facebook, Excellences and Perfections (2014) and Privilege (2016), are notable examples of this practice. Both works are premised on appropriating and acting out the expectations of the social media audience by "...turning a mirror back onto the fantasies of this public in order to expose their effects on how women perceive themselves" (Smith 2017). The performative nature of both the Facebook and Instagram platforms, where identities and experiences are carefully constructed and curated for public consumption and approval (like, share and comment functions), guides the nature of these online performances, in which artificial situations are presented as real. These situations include plastic surgery and fake locations (staged photos) like cities and hotel rooms. The Red Lines artwork (Figure 1) by Evan Roth is a peer-to-peer network performance. The Red Lines network connected users with servers in geographically specific locations to participate in the sharing and viewing of 82 individual pieces from the artist's Landscape video series. Over the course of two years (2018-2020), 120,000 people in 166 countries connected to the Red Lines network.
The work was commissioned via the arts organisation Artangel's open call for proposals to produce a major project that could be experienced anywhere in the world. The artist travelled to coastal sites around the world, where Internet cables emerge from the sea, to record the work's videos (artangel.org). Red Lines investigates the physicality of the Internet through a public performance that any viewer could stream at home while also becoming an active participant in the work's network. This is because of Red Lines' decentralised peer-to-peer network, in which a viewer becomes part of the network, streaming from other viewers who simultaneously stream the feed from them, anywhere in the world. Red Lines is a network containing infrared videos of coastal landscapes that can be streamed to a smartphone, tablet, or computer by anyone, anywhere. By setting a device in your home or workplace to display this artwork, you share a synchronized viewing experience with people around the world. Filmed in infrared, the spectrum by which data is transmitted through fiber optic cables, 82 slowly moving videos are stored on servers located in the same territories in which they were filmed. When you view a network located video made in Hong Kong, for example, it activates the submarine cable route between Hong Kong and you. You then become part of the peer-to-peer network which enables this work to be experienced by people around you (Roth 2020).
--- MEDIATED SELF
The reality of the mediated self (a concept that is neither new nor born through the mediated processes of ICTs and digital media) becomes extended in the post-Internet condition. As with appropriation or mediated publicness, the mediated self moves further than the virtual image-body represented as a proxy or stand-in for a 'virtual' world. The self in a state of mediation is what becomes the state of the self, post-Internet.
Earlier technologically mediated representations of the self, like mirrors, photographs and videos, have allowed for new understandings of how the self can be seen by ourselves or others, in different representational mediums and different times and spaces. The number of interactions the self can have online, along with the abundance of spacetimes within which the self exists online, and the ability to willingly or unwillingly control/archive/trace/manipulate/curate and exploit the image and activities of said selves, is what allows the post-Internet mediation of the self to operate at a previously impossible level. The extent of the mediation of the self post-Internet is constantly expanding, and with it expand implications relating to privacy, freedom and control. The transformative possibilities of the self online, whether in visual appearance, behaviour or action (Cleland 2010), allow for unlimited versions of the self. At the same time, the level of control, or lack of control, over these versions of the self allows for new levels of embodied identities. The self as data, the self as avatar, the self as image are all extensions of the self, contributing to new ways of seeing the self. The self post-Internet is mediated and extended, and with it are our ways of seeing and understanding the self itself. James Bridle's 2015 artwork Citizen Ex (Figure 2) examines the concept of algorithmic citizenship. The concept of algorithmic citizenship is based on the work of John Cheney-Lippold, first outlined in the 2011 journal paper 'A New Algorithmic Identity: Soft Biopolitics and the Modulation of Control', which discusses the capacity of computer algorithms to infer categories of identity for users based largely on their web-surfing activities (Cheney-Lippold 2011). Bridle's algorithmic citizenship is described as a new form of citizenship which is not assigned at birth, or through complex legal documents, but through data.
"By downloading a browser extension, you can see where on the web you really are and what that means. As one moves around the web, the Citizen Ex extension looks up the location of every website visited. Then, by clicking the Citizen Ex icon on the browser's menu bar, one can see a map showing where the website is, as well as one's algorithmic citizenship and how it changes over time with the websites one uses" (citizen-ex.com). Citizen Ex calculates your algorithmic citizenship based on where you go online. Every site you visit is counted as evidence of your affiliation to a particular place and added to your constantly revised algorithmic citizenship. Because the Internet is everywhere, you can go anywhere, but because the Internet is real, this also has consequences... Like other computerised processes, it can happen at the speed of light, and it can happen over and over again, constantly revising and recalculating. It can split a single citizenship into an infinite number of sub-citizenships, and count and weight them over time to produce combinations of affiliations to different states (Bridle 2015). Heath Bunting's The Status Project (2007) is a study of the construction of our 'official identities' and creates what Bunting describes as "...an expert system for identity mutation". The work explores how information supplied by the public in their interactions with organisations and institutions is logged. The project draws on his direct encounters with specific database collection processes and the information he was obliged to supply in his life as a public citizen in order to access specific services; this includes data collected from the Internet and information found on governmental databases. This data is then used to map and illustrate how we behave, relate, choose things, travel and move around in social spaces.
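Citizen Ex's own implementation is not reproduced here; the following is a minimal Python sketch, under the assumption of a pre-resolved browsing history, of the weighting idea described above: every visit counts as evidence of affiliation to the territory hosting the site, and the affiliations are continually renormalized. All domains and country codes below are hypothetical.

```python
from collections import Counter

def algorithmic_citizenship(visits):
    """Turn a browsing history into normalized affiliation weights.

    `visits` is a list of (domain, country) pairs, where `country` is the
    territory in which the visited site's server is located (assumed to be
    already geolocated). Returns {country: share of all visits}.
    """
    counts = Counter(country for _domain, country in visits)
    total = sum(counts.values())
    return {country: n / total for country, n in counts.items()}

# Hypothetical browsing history with pre-resolved server locations.
history = [
    ("news.example.co.uk", "GB"),
    ("search.example.com", "US"),
    ("search.example.com", "US"),
    ("video.example.de", "DE"),
]
print(algorithmic_citizenship(history))  # {'GB': 0.25, 'US': 0.5, 'DE': 0.25}
```

Recomputing this dictionary after every visit yields the "constantly revised" citizenship the project describes; weighting recent visits more heavily would be one obvious refinement.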
The project surveys individuals on a local, national and international level, producing maps of "influence and personal portraits for both comprehension and social mobility" (Garrett 2012). --- MEDIATED TRUST Trust in persons, institutions and systems is, to a considerable extent, the outcome of mediated processes (Endress 2002). Specifically, the communication of information, which is inherently a mediated process, is a determining factor in how trust is built and developed. As the Internet has increasingly become the main space for the communication, circulation and retrieval of information, a trust intermediary (Schäfer 2016), it has also presented important new developments in how trust is determined and affected by the heterogeneity of online and digital media. Information is embedded in a flurry of heuristic cues such as 'likes', 'shares' and 'comments', which may influence how trust indicators are taken up (Anderson et al. 2014). At the same time, the platforms where information is communicated and circulated are themselves objects that people can trust or distrust. Since the birth of the Internet, there has been a constant tension between digital freedoms of expression and association, authoritarian restrictions on information and communication access, and the development of Internet framing policies and national and international public and private regulation of the web. This tension is telling of the importance of continuing to expand our understanding of how trust in persons, institutions and systems is affected by Internet-related mediated processes. Acts and movements of critical practice and resistance, such as hacking, the building of free software and open-source communities, digital resistance techniques and training sessions, and circumvention devices and techniques, are all indicators of the complex trust/distrust issues that keep emerging.
Early Internet art, net.art, is a great example of how artworks were directed towards exposing and bypassing the economic, juridical and technical obstacles restricting free data and information exchange and the free development of software (Dreher 2015), demonstrating who, how and which interests determined the net conditions of the time. Post-Internet art has also been dealing with contemporary issues around control, power, trust and their processes of mediation. Subjects and themes associated with post-Internet artworks include trust in technologies and platforms, interpersonal trust/authenticity, trust in systems and governance, and trust in information (disinformation/misinformation). Some of the methods post-Internet artists use to approach trust today are as follows: identity play, audience manipulation, critical interventions/hacktivism, algorithmic play, network mapping and social media propaganda. --- Benjamin Grosser's 2018 artwork Safebook (Figure 4) is a browser extension that makes Facebook 'safe'. The artist asks: "Given the harms that Facebook has wrought on mental health, privacy, and democracy, what would it take to make Facebook "safe?" Is it possible to defuse Facebook's amplification of anxiety, division, and disinformation while still allowing users to post a status, leave a comment, or confirm a friend? With Safebook, the answer is yes!" (https://bengrosser.com). The Safebook browser extension is Facebook without content: all images, text, video and audio on the site are hidden. What is left behind are the empty boxes, columns, pop-ups and drop-downs that allow for the 'like' and 'react' features. The user can still post, scroll through an empty news feed and do everything that they would normally do on Facebook. Grosser asks: "With the content hidden, can you still find your way around Facebook? If so, what does this reveal about just how ingrained the site's interface has become?
And finally, is complete removal of all content the only way a social media network can be safe?" Maybe the only way to keep Facebook, a platform that has been criticised for being complicit in, and a space for, the spread of hoaxes and misinformation, from harming us is to hide everything (Ohlheiser 2018). Grosser's related work, Twitter Demetricator, hides the visible metrics on Twitter and is used as a tool that allows users to think critically about social media. It is up to the user to reflect on how visible metrics affect the way we behave and interact on social media. Visible metrics are designed to draw our attention; they can influence and even guide the how, what and when of our posts, as users learn what works best in terms of approval and engagement from other users. "Indeed, it's almost impossible to comprehend just how central metrics are to the Twitter experience until you install Demetricator. Only when I tried it did I realize that my eyes were instinctively flicking to a tweet's retweet and favourite counters before I even processed the tweet itself. Only when I tried Demetricator did I understand how much I relied on those signals to evaluate a tweet-not only its popularity or reach, but its value" (Oremus 2018). --- CONCLUSION Both the level and nature of mediatisation processes have changed as a result of social, economic, cultural and political developments in relation to the Internet. How the physical becomes digital through digital reproduction processes, or how physical reality is represented in digital space, was an important area of scholarship during the first wave of widespread Internet use and adoption of digital technologies. In post-Internet times, however, mediation is considered a precondition for most areas of social activity.
Analysing the complex and hybrid processes of mediation in the post-Internet condition requires a broad examination of the myriad intra-actions between human and nonhuman actors, which operate by constantly exchanging and diffracting, influencing and working inseparably (Barad 2007). As mediation is an important dimension of the post-Internet condition, it is also a common theme in post-Internet artworks. The three main areas of mediatisation, as observed through the process of reviewing Internet artworks and discourse around the post-Internet, are mediated publicness, mediated self and mediated trust. The artworks discussed in this section help illuminate the processes, dynamics, tensions and experiences of mediation in the post-Internet condition. Performing for social media audiences' expectations, critically manipulating social media applications, engaging Internet users globally in peer-to-peer networks, developing new methods that examine identity as defined by algorithmic processes and developing a platform that attempts to manipulate public opinion are all examples of how important the role of mediation is for our understanding of the world and of ourselves, and of how vital it is to continue to explore and critically engage with its processes.
This paper examines the dimension of mediation in the post-Internet condition through the post-Internet art medium. In the post-Internet condition, human and non-human actors, humans and machines, networks, algorithms and technologies, co-create conditions of life in a hybrid and liquid state of mediation. The paper discusses three important areas of mediatisation as highlights and indicators of the hybrid and multifaceted character of mediation post-Internet. These are mediated publicness, mediated self and mediated trust. The artworks discussed in this paper help illuminate the dynamics, tensions and experiences of contemporary mediation and act as examples of how important the role of mediation is in our understanding of the world and of ourselves in it and how vital it is to continue to explore and critically engage with its processes.
Background The population, economy and social situation of China are facing new changes and challenges. In order to improve the population structure and actively respond to population aging, China implemented a new three-child policy on May 31, 2021, allowing each couple to have up to three children [1]. With the arrival of the three-child policy, childrearing has become a focal concern, especially for children aged 0-3 years. The Chinese government emphasizes the care of children aged 0-3 years and has issued childrearing policies, including "Guidance on Promoting the Development of Care Services for Infants and Young Children under 3 Years" [2] and "The Decision to Optimize the Family Planning Policy and Promote Long-term Balanced Population Development" [3], to promote the healthy growth and development of young children. The first 3 years of life are a critical period for children's physical and mental development [4], and ensuring children's development during this period provides a strong foundation for the future. Children's early development requires nurturing care, and childrearing can have a significant influence on it [5]. Jeong et al. [6] conducted a meta-analysis of 102 studies published by November 5, 2020, and found that parenting interventions for children during the first 3 years of life are effective for improving early child cognitive, language, motor and socioemotional development and attachment, and for reducing behavior problems. Zhou et al. [7] implemented a community-based, integrated and nurturing care intervention among 2745 child-caregiver pairs in four poverty-stricken counties, and found that the childcare intervention could significantly prevent developmental delay in children under 3 years in rural China. Regarding the rearing of children aged 0-3 years, previous studies have mostly focused on childrearing attitudes, knowledge and quality [8][9][10]. Few studies have examined childrearing barriers in China, and national surveys are lacking.
In 2009, it was reported that 30.6% of Chinese households with children aged 0-3 years found childrearing to be much more difficult than before [11]. Recently, Zhang et al. [12] conducted a cross-sectional survey with a sample of 2229 parents of children aged 6-35 months and found that 87.5% of Chinese parents reported experiencing childrearing difficulties, and 31.5% reported experiencing major difficulties. They also found that families with financial problems, and those in which the father did not participate in childrearing, were at higher risk of major childrearing difficulties. Several foreign and domestic studies have also shown that sociodemographic and environmental characteristics such as parents' education and family income influence childrearing challenges and difficulties [11,13,14]. In this study, we performed a national cross-sectional survey on the barriers to rearing children aged 0-3 years in China, hoping to provide scientific evidence for childrearing policies and supporting measures, help reduce childrearing barriers, ensure the early physical and mental development of children and improve the quality of the Chinese population. --- Methods --- Study design and study population A national anonymous cross-sectional survey was conducted online in June 2021 using a random sampling method on the largest online survey platform in China: Wen Juan Xing (Changsha Ranxing Information Technology Co., Ltd., Hunan, China). A sample database covering over 2.6 million respondents has been established by this online platform, whose personal information was confirmed to ensure an authentic, diverse and representative sample [15]. A sample size of 4200 people was calculated to be sufficient to estimate a prevalence of 87.5% (as previously reported in China [12]) with a 1% margin of error at the 95% confidence level, using the formula n = Z^2_(α/2) · p(1 − p) / d^2 [16]. The participants completed the questionnaires online by mobile phone.
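The sample-size formula is straightforward to check numerically; a minimal sketch (z = 1.96 for 95% confidence) reproduces the ~4,200 target from the inputs stated in the Methods:

```python
import math

def sample_size(p, d, z=1.96):
    """n = z^2 * p * (1 - p) / d^2: respondents needed to estimate a
    proportion p with margin of error d at the confidence level implied by z."""
    return math.ceil(z ** 2 * p * (1 - p) / d ** 2)

# Inputs from the Methods: expected prevalence 87.5%, 1% margin of error.
print(sample_size(0.875, 0.01))  # 4202
```

Note how sensitive n is to the margin of error: halving d to 0.5% would quadruple the required sample.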
A total of 5491 potentially eligible respondents were randomly selected and invited to participate in the survey. After quality control and manual check procedures to exclude ineligible, incomplete, and invalid questionnaires, the final sample consisted of 4406 respondents (80.2%) (flowchart presented in Supplemental Fig. 1). --- Data collection A self-administered questionnaire was designed to collect information from the participants, including 12 questions about sociodemographic characteristics, 4 questions about reproductive status, 5 questions about fertility intentions and one question about childrearing barriers. The primary outcome was the prevalence of barriers to rearing children aged 0-3 years, defined as the proportion of respondents who self-reported childrearing barriers. Fertility intention refers to the unwillingness or willingness to have a second or third child. The only-child situation of parents comprises parents neither of whom is an only child, parents one of whom is an only child, and parents both of whom are only children. Sociodemographic characteristics included gender, ethnicity (Han and minority), age, residence (rural and urban), educational level (junior high school or below, senior high school or equivalent, and college or higher), annual household income (< 30,000, 30,000-80,000, 80,000-120,000, > 120,000 Chinese yuan (CNY)), number of children (1, ≥ 2), province, and occupation (factory worker, farmer, clerk, public servant, employee, and others). According to economic development level, the provinces and municipalities were divided into 3 regions: eastern (Beijing, Tianjin, Hebei, Liaoning, Shanghai, Jiangsu, Zhejiang, Fujian, Shandong, Guangdong, and Hainan), central (Shanxi, Jilin, Heilongjiang, Anhui, Jiangxi, Henan, Hubei, and Hunan) and western (Inner Mongolia, Chongqing, Guangxi, Sichuan, Guizhou, Yunnan, Tibet, Shaanxi, Gansu, Qinghai, Ningxia, and Xinjiang) [17].
Barriers to rearing children aged 0-3 years were investigated with the question "What do you think is the biggest barrier to rearing 0 to 3-year-old children?" (answer options: high time cost, high childrearing cost, high education cost, physical factors, others, and no barriers). We defined respondents who chose any of the first five options as parents with childrearing barriers. For the biggest barrier, "high time cost" referred to the lack of time to raise children, "high childrearing cost" to the heavy economic burden of rearing children, "high education cost" to the great pressure of meeting young children's educational needs, and "physical factors" to factors related to personal health status. --- Statistical analysis We used proportions to describe categorical variables and calculated the prevalence of barriers to rearing children aged 0-3 years. Univariate logistic regression was used to estimate the crude odds ratio (cOR) and its 95% confidence interval (CI). After controlling for sociodemographic characteristics (gender, ethnicity, age, residence, educational level, annual household income, number of children, region, and occupation), multivariate logistic regression was used to analyze the association between fertility intention, the only-child situation of parents and childrearing barriers, and to calculate the adjusted odds ratio (aOR) and its 95% CI. Moreover, we analyzed subgroups stratified by number of children. Two-sided p values < 0.05 were considered statistically significant. All analyses were performed with R version 4.0.5. --- Patient and public involvement Patients and the public were not involved in the design and conduct of the study.
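For the univariate step, a crude odds ratio and its Wald 95% CI can be computed directly from a 2×2 table. The sketch below is illustrative only: the counts are hypothetical, and the paper's actual analyses were run in R, but the arithmetic behind a reported cOR is the same.

```python
import math

def crude_or(a, b, c, d, z=1.96):
    """Crude odds ratio with a Wald 95% CI from a 2x2 table:
        a = exposed, outcome present      b = exposed, outcome absent
        c = non-exposed, outcome present  d = non-exposed, outcome absent
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: women vs. men reporting any childrearing barrier.
or_, lo, hi = crude_or(2450, 100, 1720, 136)
print(f"cOR {or_:.2f} (95%CI {lo:.2f}, {hi:.2f})")
```

The adjusted ORs (aORs) in the Results come instead from the fitted multivariate model coefficients (aOR = exp(β)), which cannot be reproduced from marginal counts alone.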
--- Results --- Sociodemographic characteristics and fertility intention among our study population Of the 4406 Chinese parents included in our study, 57.9% were women, 95.3% were of Han nationality, 65.5% were urban, 70.9% had a college degree or above, 62.1% had an annual household income of over 80,000 CNY, 68.9% had one child, 31.1% had at least two children, 58.1% lived in the eastern region, and 47.3% were employees (Table 1). In 53.0% of respondent couples neither parent was an only child; 62.6% intended to have a second child, and 14.8% intended to have a third child. --- Prevalence of barriers to rearing children aged 0-3 years Overall, 94.7% of the 4406 respondents self-reported barriers to rearing children aged 0-3 years; 39.3% reported high time cost, 36.5% high childrearing cost, 13.5% high education cost, and 5.0% physical factors as the biggest barrier (Table 2). High time cost and high childrearing cost were also the major barriers among respondents who intended to have a second child (75.7%) and those who intended to have a third child (66.7%) (Fig. 1). --- Related sociodemographic factors of childrearing barriers Women (aOR 1.49, 95%CI 1.13, 1.96) and people with a college degree or above (aOR 3.46, 95%CI 2.08, 5.75) had a higher prevalence of childrearing barriers, whereas farmers (aOR 0.48, 95%CI 0.26, 0.87) had a lower prevalence (Table 1). Women (aOR 1.17, 95%CI 1.03, 1.33) and people with an annual household income of over 80,000 CNY (aOR 1.50-2.08, all P < 0.05) were more likely to report high time cost as the biggest barrier. People with at least 2 children (aOR 1.16, 95%CI 1.00, 1.35) tended to report high childrearing cost, while people with an annual household income of over 120,000 CNY (aOR 0.60, 95%CI 0.48, 0.75) were less likely to do so.
Women (aOR 1.22, 95%CI 1.02, 1.47) and people aged 40-49 years (aOR 1.96, 95%CI 1.02, 3.77) reported high education cost more often, while people with an annual household income of over 80,000 CNY (aOR 0.52-0.63, P < 0.05) reported it less often (Supplemental Table 1, Supplemental Fig. 2). --- The association between fertility intention, only-child situation of parents and childrearing barriers Multivariate logistic regression models showed that people who intended to have a second child (aOR 0.58, 95%CI 0.40, 0.83) and people who intended to have a third child (aOR 0.51, 95%CI 0.37, 0.71) reported fewer childrearing barriers (Table 2). People who intended to have a second child (aOR 1.21, 95%CI 1.04, 1.42) and parents one of whom was an only child (aOR 1.21, 95%CI 1.03, 1.42) were more likely to report high time cost as the biggest barrier, while people who intended to have a third child (aOR 0.77, 95%CI 0.64, 0.93) were less likely to do so (Table 2). Parents one of whom was an only child (aOR 0.81, 95%CI 0.69, 0.96) were less likely to report high childrearing cost. People who intended to have a third child (aOR 1.59, 95%CI 1.07, 2.36) and parents both of whom were only children (aOR 1.56, 95%CI 1.08, 2.26) were more likely to report physical factors, while people who intended to have a second child (aOR 0.61, 95%CI 0.42, 0.87) were less likely to do so. In subgroup analysis, the association between fertility intention and childrearing barriers was stable (Table 3). --- Discussion In this study, we conducted a nationally representative cross-sectional study in 2021, right after the introduction of the new three-child policy, to estimate the prevalence of childrearing barriers and analyze related factors, thereby helping to inform childrearing policies and supporting measures. We found that 94.7% of 4406 Chinese adults aged 18-49 years who had children self-reported barriers to rearing children aged 0-3 years.
The most frequently reported biggest barriers were high time cost, high childrearing cost and high education cost. Regarding related factors, women and well-educated people had a higher prevalence of barriers, while people who intended to have a second or third child were less likely to report childrearing barriers. Attention should be paid to barriers to rearing children aged 0-3 years following the change of family planning policy. The prevalence of childrearing barriers in our study was close to that in previous studies [11,12]. Zhao et al. found that 88.2% of Chinese caregivers of children aged below 3 years reported parenting difficulties in 2010 [11]. We found that women were more likely to report childrearing barriers than men, consistent with previous research [19,20]. Although women traditionally play a significant role in family and childcare, more and more women are entering the workforce. Previous studies showed that it is difficult for working women to balance childcare and career because of the incomplete supporting system for them in China [21]. Additionally, father involvement in children's early upbringing is a key source of positive child developmental outcomes [22][23][24]. However, fathers' involvement in parenting is lower than mothers' in Chinese families [10]. Therefore, childrearing policies and supporting measures should be improved to help women juggle work and childcare and to encourage fathers to participate in childrearing. Well-educated people reported childrearing barriers more often, consistent with a previous study [13]. This may reflect their greater attention to childrearing and education, greater investment and busier work schedules. Our results also showed that farmers self-reported barriers less often. This may be ascribed to outdated childrearing concepts, a lack of scientific childrearing knowledge, insufficient investment, and lower parenting costs in rural areas [25], making it feel easier for them to raise children aged 0-3 years.
A survey of 1715 rural households in western China found that the average parenting knowledge score of sampled caregivers (0.52) was much lower than the expected average score (0.72), and that parental investments are poor in rural areas [8]. Therefore, it is necessary to strengthen education in parenting knowledge and guide farmers towards scientific childrearing concepts. Notably, parents one of whom was an only child were more likely to report high time cost as the biggest barrier, and parents both of whom were only children were more likely to report physical factors. After 36 years of the one-child policy and only 5 years of the universal two-child policy in China [26], many only children have become parents and might face considerable childrearing barriers. Besides time cost, economic cost, physical factors and other "hard barriers", only-child couples might face psychological and cultural barriers and need more time to adjust and accept [27]. Moreover, only-child couples tended to have more than one child [28]. Therefore, targeted strategies are needed to support childrearing by only-child parents. Regarding the association between fertility intention and childrearing barriers, people who intended to have a second child and people who intended to have a third child were less likely to report barriers. This finding is similar to previous studies in the context of the two-child policy [29,30]. A cross-sectional study of 11,991 Chinese women on fertility intention in 2016 and 2017 indicated that economic, health, childrearing, and educational barriers were associated with a lower intent to have a second child [30]. Conversely, people with fertility intention might have a positive attitude towards childrearing. Nevertheless, because of their fertility potential, sustained efforts to reduce their barriers to rearing children aged 0-3 years are required. (Fig. 1: The biggest barrier to rearing children aged 0-3 years among people who intend to have a second/third child.)
Regarding high time cost, our findings showed that people who intended to have a second child were more likely to report high time cost as the biggest barrier, while people who intended to have a third child were less likely to do so. A potential reason is that only people with sufficient time would consider having a third child. With the development of society and the popularization of education, Chinese people of childbearing age are widely involved in social production. Busy working parents often leave their children to their grandparents to raise [31,32], which might mitigate this problem but brings childrearing pressure to grandparents [33]. Moreover, left-behind children need more attention due to the detrimental influence of parental migration and poor rearing environments [34][35][36]. Therefore, sufficient parental leave, available and qualified childcare services, and other supporting measures should be provided to reduce the high time cost of rearing children aged 0-3 years. (Table 3: The barriers to rearing children aged 0-3 years among our study population, stratified by number of children. aOR, adjusted odds ratio; CI, confidence interval; CNY, Chinese yuan. The aOR was calculated through multivariable logistic regression controlling for ethnicity, age, region and the only-child situation of parents; * indicates significance at p-value < 0.05.) Regarding high childrearing cost, our study suggested that 36.5% of Chinese parents of childbearing age reported high childrearing cost as the biggest barrier. Based on data from the China Family Panel Studies (CFPS) in 2013, the average direct consumption expenditure on 0 to 5-year-old children (including food, clothing, shelter, childcare, education and medical care) was 62,726 Chinese yuan [37]. The financial burden of childrearing is also a substantial barrier to fertility intention. Liu et al.
[30] found that 47.7% of Chinese women of childbearing age reported economic barriers as the main obstacle to having a second child. To reduce the economic cost of childrearing, a childbirth allowance for parents with a second or third child and stronger price regulation of childcare products and services are expected. Regarding high education cost, we found that 13.5% of Chinese adults aged 18-49 years who had children reported high education cost as the biggest barrier, consistent with the results of previous cross-sectional surveys [30]. Nowadays, Chinese parents attach great importance to the early education of children aged 0-3 years [38]. However, the proportion of young children enrolled in childcare institutions is less than 5% in China, far lower than the roughly 50% seen in some developed countries [39]. Additionally, there are many problems such as the uneven quality of childcare services and a shortage of teachers and professionals [40]. Therefore, it is necessary to make relevant policies and measures to encourage the development of childcare and early education institutions and to strengthen regulation. Regarding physical barriers, our results indicated that people who intended to have a third child were more likely to report physical factors as the biggest barrier, while people who intended to have a second child were less likely to do so. A cross-sectional study among Japanese mothers showed that mothers aged 40 years or older had a high risk of facing difficulties with childrearing [41]. People are younger and healthier when they have a second child, thus downplaying physical factors. In contrast, people who intend to have a third child are concerned about their health because they will be older when they have it. Therefore, following the three-child policy, targeted childrearing policies and measures for older parents are needed. The ongoing COVID-19 pandemic might also bring about difficulties in childrearing [42].
The United Nations Educational, Scientific and Cultural Organization estimates that 1.38 billion children are out of school or child care [43]. The economic impact of the pandemic increases the financial burden of rearing children aged 0-3 years [43]. Moreover, the health risks and fear connected to COVID-19 influence parents' levels of stress and consequently children's well-being [44]. Therefore, it is essential to devise effective strategies to strengthen childrearing and protect a future for children during the COVID-19 pandemic. --- Strengths and limitations The main strength of our study is that it is the first to examine the barriers to rearing children aged 0-3 years, and the association between fertility intention and childrearing barriers, among Chinese parents following the new three-child policy. The estimated prevalence of barriers could provide scientific evidence of the need for childrearing policies and supporting measures, and the analysis of related factors could help formulate targeted policies and measures for people with different sociodemographic characteristics and fertility intentions, thereby reducing childrearing barriers and safeguarding child health. However, there are some limitations. First, we collected data using an online questionnaire, so people who were not internet users were not included in our study. Nevertheless, there were 989 million internet users in China by December 2020, 99.7% of whom surf the internet by mobile phone [45]. Additionally, internet use was more prevalent among people of childbearing age than in other age groups. Second, our study was cross-sectional and could not demonstrate causal associations. Third, because our questionnaire lacked an occupation option such as "migrant worker", we could not measure the barriers among migrant Chinese, who face more childrearing difficulties [46]. Last, the coronavirus disease 2019 (COVID-19) pandemic might have had an impact on our results [42].
--- Conclusions In conclusion, 94.7% of Chinese people of childbearing age who had children self-reported barriers to rearing children aged 0-3 years. The biggest barriers mainly comprised high time cost, high childrearing cost and high education cost. People who intended to have a second child and people who intended to have a third child were less likely to report childrearing barriers. Full consideration should be given to the barriers faced by people with different sociodemographic characteristics and by people with fertility intention, so as to make targeted childrearing policies and supporting measures that reduce the burden on people of childbearing age, encourage suitable couples to have a second or third child, and thereby cope with China's aging population. --- Availability of data and materials The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request. --- Abbreviations cOR: crude odds ratio; CI: confidence interval; aOR: adjusted odds ratio. --- Supplementary Information The online version contains supplementary material available at https://doi.org/10.1186/s12889-022-12880-z. --- Additional file 1: Supplemental --- Authors' contributions LK searched the literature, analyzed the data, interpreted the results, and drafted the manuscript. WJ and QM collected the data. SZ and JL revised the manuscript. ML conceived the study, designed the study, supervised the study, interpreted the results, and revised the manuscript. All authors have read and approved the manuscript. --- Declarations Ethics approval and consent to participate This study was approved by the Institutional Review Board of the Chinese Association of Maternal and Child Health Studies with the approval number CAMCHS16001. This cross-sectional survey was performed in accordance with the Declaration of Helsinki. Informed consent was obtained from all survey participants. --- Consent for publication Not applicable.
--- Competing interests The authors declare that they have no competing interests. --- Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Background: To further optimize its birth policy, China implemented a new three-child policy on May 31, 2021, allowing each couple to have up to three children. Methods: A national cross-sectional survey was conducted in June 2021 among 18 to 49-year-old Chinese parents who had at least one child. We calculated the prevalence of self-reported childrearing barriers and used univariate and multivariate logistic regression to analyze associated factors. Results: 94.7% of the respondents self-reported barriers to rearing children aged 0-3 years, and the biggest barriers included high time cost (39.3%), high parenting cost (36.5%) and high education cost (13.5%). Being a woman (aOR 1.49, 95%CI 1.13, 1.96) and having a college degree or above (aOR 3.46, 95%CI 2.08, 5.75) were associated with a higher prevalence of childrearing barriers, while people who intended to have a second child (aOR 0.58, 95%CI 0.40, 0.83) and people who intended to have a third child (aOR 0.51, 95%CI 0.37, 0.71) were less likely to report childrearing barriers. The biggest barrier was more likely to be high time cost for couples in which one partner is an only child (aOR 1.21, 95%CI 1.03, 1.42) and physical factors for couples in which both partners are only children (aOR 1.56, 95%CI 1.08, 2.26). Conclusions: The prevalence of barriers to rearing children aged 0-3 years was high among Chinese people of childbearing age who had children. Full consideration should be given to the barriers of people with different sociodemographic characteristics and people with fertility intention, so as to make targeted childrearing policies and supporting measures to reduce the burden on people of childbearing age, encourage suitable couples to have a second or third child, and thereby cope with China's aging population.
Introduction Purchasing virtual goods has become increasingly pervasive among the young generations. Virtual goods (e.g., avatars) are non-physical in nature and exist only in the online platforms in which they are created [1]. That is, they cannot be carried off to and used in another online platform. This characteristic separates virtual goods from digital goods (e.g., audio files, which work on many platforms). While virtual goods have existed as long as virtual worlds (VWs), they did not receive attention before VW operators started to sell them to users for real money. Interestingly, many of the current VWs are targeted at users aged between 5 and 15 years, who make up the majority of the over 1.4 billion registered VW users [2]. The large user base drove overall spending on virtual goods to $15 billion already in 2012. Despite this economic potential, research on virtual goods purchasing behaviour in VWs is still in its infancy compared to 'traditional' online shopping or shopping that occurs offline. To contribute to virtual goods research we seek to fill three gaps in the current literature. First, prior literature on virtual goods has focused rather heavily on adult consumers, even though young people make up a notable group of existing consumers; young people have generally been under-investigated in information systems research [3]. With regard to the second gap, we advance virtual goods research by building on user experience. We believe this is of considerable importance since purchasing virtual goods requires engagement in the online platforms where the goods are available. To this end, we employ cognitive absorption, an established driver of technology use, also in VWs [4; 5]. Notwithstanding, its influence on purchasing behaviour has remained poorly understood. Third, we center on social context.
While social context has been demonstrated as critical for online platform success [6], studies on virtual goods fall short in examining its effect on purchasing behavior [7]. In this paper, we conceptualize social context as operating through perceived network size, social presence, and trust, all of which we consider relevant for virtual goods exchange. By filling these gaps, we add to three different research areas: virtual goods purchasing behavior [7; 8; 9], young users' use of information technology [3], and the relationship between virtual goods and the platforms where they are exchanged [10; 11]. The paper is organized as follows. It starts with a literature review that provides a foundation for the research model. The paper then explicates the research model and hypotheses. This is followed by the methods and results. Lastly, it concludes with a discussion, including implications, limitations and suggestions for future research. --- 2 Research Background and Hypotheses --- Prior Literature on Virtual Purchasing Behavior Prior VW research has largely examined user adoption [12; 13; 14], including initial acceptance and post-adoption use [4; 15; 16; 17; 18; 19]. Purchasing behavior, in turn, has received less empirical research attention [20]. Prior research on the topic has found purchasing in VWs to be affected by the virtual environment [7], user motivation [1; 8] and social influences [8]. Here we focus on two aspects that have drawn less attention in the VW context, namely user experience and social context. They supplement each other: user experience stresses the experience obtained by an individual, and social context the environment that is co-created by individual users. Social context is also expected to influence the individual's behavior [21]. Given that virtual goods purchasing behavior is fairly inseparable from VW use, we believe social context and user experience fit our research goal well.
--- The Research Model The user experience of VWs can be characterized by three key aspects. First, VWs employ avatars at the core of the navigation mechanisms and to represent the users. Second, VWs accommodate a multi-user, 3D graphical environment that includes sounds and music. Third, the user interface is highly dynamic because of a constant influx of new features and activities to sustain users' interest. Thus, the richness of stimuli that makes the user absorbed in the in-world activities lies at the core of the VW user experience. Hence, we employ the concept of cognitive absorption. Cognitive absorption consists of focused immersion, intrinsic motivation, perception of control, temporal dissociation and curiosity. We measure it as a multi-dimensional construct, as it was originally developed [22]. We also scrutinize how the social context can influence virtual purchasing behaviour. The social context is essentially dependent on the number of users involved in the VW. The social interaction, and the value users derive from it, is influenced by network externalities [19]. This is articulated in Metcalfe's law, which postulates that the value of a telecommunications network is proportional to the square of the number of connected users [23]. For an individual user, however, the value of interactive digital technologies depends more on the presence of relevant people, i.e. the user's personal network, than on the network size in general [19; 24]. From a sociological perspective, this can be explained by the concept of homophily, i.e. the tendency to bond and associate with individuals with whom one perceives similarity [25]. Prior evidence from computer-mediated communication shows that interaction that involves the use of IT is likely to occur within key interpersonal relationships [26]. Thus, network externalities stem particularly from the presence of one's key social network in the VW.
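The contrast drawn here between Metcalfe's law and the personal-network view can be sketched as follows. The constants and the linear form chosen for individual value are illustrative assumptions, not claims made by the cited works.

```python
# Illustrative contrast between Metcalfe's law (whole-network value grows
# with the square of the user count) and the personal-network view (an
# individual's value depends on which of *their* contacts are present).
# Constants and the linear functional form are assumptions.

def network_value(n_users: int, c: float = 1.0) -> float:
    """Total network value under Metcalfe's law: V = c * n^2."""
    return c * n_users ** 2

def individual_value(contacts_present: int, k: float = 1.0) -> float:
    """A single user's value, tied to how many of their own contacts joined."""
    return k * contacts_present

# Doubling the user base quadruples the network's total value ...
assert network_value(200) == 4 * network_value(100)
# ... but an individual gains only when their own contacts are present.
assert individual_value(10) > individual_value(5)
```

The point of the contrast is that a VW can grow large in aggregate while remaining low-value for a given user whose personal network is absent, which motivates measuring perceived network size rather than raw user counts.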
In addition to the presence of other users and an in-world social network, the social atmosphere and the relationships between users represent important aspects of the social context. For example, people tend to communicate more when they perceive human warmth and psychological presence [27]. Accordingly, we examine the degree of human warmth and contact associated with the VW using the concept of social presence [28]. Trust is a fundamental component of interpersonal relationships and an important predictor of online purchasing [29]. Hence, we investigate trust in other VW users as a predictor of virtual purchasing. The constructs with their definitions and references are presented in Table 1.

--- Table 1. The research constructs and their definitions
- Perceived enjoyment: the degree of enjoyment associated with using the VW. [22]
- Focused immersion: the experience of total engagement where other attentional demands are, in essence, ignored. [22]
- Perception of control: the user's perception of being in charge of the interaction. [22]
- Temporal dissociation: the inability to register the passage of time while engaged in interaction. [22; 30]
- Curiosity: the extent to which the experience arouses an individual's sensory and cognitive curiosity. [22]
- Perceived network size: the perception of the degree to which important others are present in the VW. [16; 31]
- Social presence: the degree of human warmth associated with the VW. [28; 32]
- User-to-user trust: the belief in the other VW users' honesty. [29]

The research model accommodating the user experience and social context is presented in Figure 1 below. --- Fig. 1. The research model --- Hypotheses Agarwal and Karahanna [22] positioned cognitive absorption as a predictor of perceived usefulness and ease of use but did not examine its direct effect on behavioural intention. Cognitive absorption is an intrinsically motivating state [33], enjoyment being one of its dimensions [22].
Intrinsic motivation, often captured with perceived enjoyment, has in turn been found to predict the intention to adopt and use various forms of IT, particularly those of a hedonic nature [34; 35]. Prior VW research offers empirical support for the link between cognitive absorption and behavioural intention [4; 5]. As a result, we assume that purchase intention is influenced by cognitive absorption and put forward the following hypothesis: H1: Cognitive absorption has a positive effect on purchase intention. Due to network externalities (Katz & Shapiro, 1986), the size of one's personal network inside the VW influences the number of opportunities the user has for social interaction and communication. Furthermore, a large social circle in a VW provides more opportunities to demonstrate status through virtual purchasing or when trading virtual items with other users. Prior research on online social networking [36], instant messaging [24] and VWs [16] offers empirical evidence that the perceived size of a user's network predicts usage intention. H2: Perceived network size has a positive effect on purchase intention. Social presence has been found to have a positive effect on loyalty in the online shopping context [37]. Furthermore, previous VW research has shown a positive relationship between social presence and favourable attitudes [38] and user satisfaction [15], although some studies have reported no relationship between social presence and behavioural intention [7; 15]. H3: Social presence has a positive effect on purchase intention. Abundant research on e-commerce has verified a positive relationship between trust in the online merchant and users' purchasing behaviour [39]. However, considerably fewer studies have examined to what extent trust between users affects purchasing, especially in an environment where the users are represented as avatars. Lu et al. [40] reported a positive relationship between member-to-member trust and intentions to purchase from a website.
H4: Trust in other users of the VW has a positive effect on purchase intention. Social presence has been found to increase the number of messages exchanged in electronic communication [27]. As VWs are information-rich environments that are well capable of transmitting various non-verbal cues [15], we propose a positive relationship between social presence and trust in the other VW users. This assertion is also in accordance with the e-commerce literature, which has reported social presence to have a positive effect on trust [32; 41]. H5: Social presence has a positive effect on trust in the other users of the VW. --- Empirical Research --- Data Collection and Measurement The data was collected through an online survey among the users of the Finnish Habbo Hotel portal, in co-operation with Sulake Corporation, the Finnish company that owns and operates Habbo Hotel. The survey was opened 8,928 times, and 3,265 respondents proceeded to the final page and submitted the survey, yielding a response rate of 36.6 per cent. To further ensure the reliability of the results, only fully completed questionnaires were included in the analysis. As a result, the final sample consisted of 1,225 responses, of which 60.8 per cent were from female respondents. To ensure the reliability of the measurement, the survey items were adopted from prior literature, with wording adjusted to match the VW context and the target audience. The literature references of the measurement items are presented in Table 2. The items were measured on a seven-point Likert scale anchored from strongly disagree to strongly agree, except perceived network size, which was measured with a semantic scale. The constructs were modeled using reflective indicators. --- Data Analysis The data was analysed using partial least squares with the SmartPLS software [42]. We began the analysis by testing the convergent and discriminant validity of the measurement model.
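These convergent and discriminant validity checks can be sketched numerically. The loadings and inter-construct correlation below are made-up illustrations, not the paper's data; for standardized reflective indicators, AVE is the mean squared loading and composite reliability is (Σλ)² / ((Σλ)² + Σ(1−λ²)).

```python
# Numerical sketch of the convergent/discriminant validity checks used in
# PLS measurement models. Loadings and the inter-construct correlation are
# hypothetical illustrations, not the paper's data.
import numpy as np

loadings = np.array([0.78, 0.82, 0.85, 0.74])  # hypothetical indicator loadings
errors = 1 - loadings**2                        # error variances (standardized)

ave = float(np.mean(loadings**2))               # average variance extracted
cr = float(loadings.sum()**2 / (loadings.sum()**2 + errors.sum()))

assert (loadings > 0.70).all()  # criterion 1: loadings exceed 0.70
assert cr > 0.80                # criterion 2: composite reliability exceeds 0.80
assert ave > 0.50               # criterion 3: AVE exceeds 0.50

# Fornell-Larcker discriminant validity: each construct's AVE must exceed
# its squared correlation with every other construct.
r_between = 0.55                # hypothetical inter-construct correlation
assert ave > r_between**2
```

With these illustrative loadings the construct passes all three convergent-validity criteria as well as the Fornell-Larcker discriminant-validity test, mirroring the checks reported in Appendix A and Table 3.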
Convergent validity was evaluated based on three criteria [43]: first, all indicator factor loadings should be significant and exceed 0.70; second, composite reliabilities should exceed 0.80; third, the average variance extracted (AVE) by each construct should be greater than 0.5. Appendix A illustrates that the data met the criteria for convergent validity. With respect to discriminant validity, the AVE for each construct should exceed the squared correlation between that construct and any other construct [43]. Table 3 shows that discriminant validity was confirmed. After having verified the validity and reliability of the measurement model, we proceeded to test the structural model. Following Agarwal and Karahanna [22], cognitive absorption was modeled as a second-order construct; the latent variable scores of its five constituting factors were used as input to build the second-order variable. Bootstrapping with 1,000 subsamples was used to estimate the significance of the path coefficients. The R² of purchase intention was 42.7 per cent, which indicates that the model as a whole exerts good predictive validity. As the sample size was large, instead of looking strictly at the significance of the path coefficients, we considered the value of 0.1 as a threshold for interpreting that a variable exerts a substantial effect on its endogenous construct [44]. Based on this criterion, all hypotheses were supported except H3. Age, gender and length of usage experience with the VW were included in the structural model as control variables. None of the control variables exerted a significant influence on purchase intention. Figure 2 below summarizes the results from testing the structural model. --- Fig. 2. Results of the PLS analysis --- Discussion and Conclusion The key finding of this study is that user experience, captured through cognitive absorption and its first-order constructs, is the main driver of purchase intention.
While prior research provided empirical evidence of user experience driving usage [45], our results show that it also has an effect on purchase intention, which takes place beyond usage. Overall, this finding implies that an engaging user experience can drive VW operators' sales and is thus critical for VW success. --- Theoretical Implications Our results verify the importance of cognitive absorption as a component of the VW user experience and its value in predicting purchasing behaviour. On a more theoretical level, our conceptualization of cognitive absorption as a five-dimensional second-order construct offers other researchers guidance on how to capture the contextual characteristics of VWs. Based on our findings, virtual purchasing is substantially affected by the experiential aspects of VW usage, indicating that user experience is a stable predictor of virtual purchasing across contexts [7]. With regard to the social context, our findings show that purchasing behaviour is influenced by the size of the user's in-world network and the trust experienced in other users of the VW. This indicates that network externalities play a role in virtual world participation [19]. Network externalities can thus affect the hedonic value extracted from VW participation by offering more invitations to in-world events and parties. Furthermore, the status value [46] of possessing virtual items is likely to depend on the size of one's in-world social network. Although social presence exerted hardly any effect on purchase intention, it is a relatively strong predictor of trust between users. While trust in other users influenced purchase intention, its role was not particularly salient. We assume that rather than having a linear relationship with purchase intention, trust may operate as a threshold, being a prerequisite for purchasing to take place.
--- Implications for Practice For VW operators and developers, creating engaging experiences seems to be a way to reinforce in-world purchasing behaviour. This may indicate that purchasing results from sustained participation in the VW and can thus represent a subsequent stage in the development of the customer relationship. From this perspective, operators should focus on developing customer relationships rather than utilizing tactical marketing tools to promote in-world purchasing. Second, the results offer some evidence that a trusting and psychologically warm social environment encourages purchasing behaviour. Thus, VW operators are advised to have mechanisms not only to protect users' virtual property, but also to prevent aggressive behaviour and communication towards other users. Third, we suggest operators take a close look at how the presence and actions of other users, within and beyond the VW, affect users' participation and purchasing decisions. Young people have been reported to follow fads and fashion and thus to be more prone to the bandwagon effect than older generations [47]. This can partly explain the dynamics of the social setting and the sometimes very short lifespans of trends in VWs for the young. --- Limitations and Future Research First, due to our research context, the generalizability of the results is limited. Second, we examined behavioural intention instead of actual behaviour. Third, we used three constructs, perceived network size, social presence and trust in other users, to examine the social context. Due to its conceptual breadth, social context is very difficult to condense into a set of variables. We recommend further research to offer a richer understanding of the social context and the structures behind behavioural outcomes such as virtual purchasing. For example, future research could examine the interplay between the social context and purchasing behaviour [21].
Fourth, in our conceptualization of trust we focused only on trustworthiness, i.e. the reliability of other users. However, prior research has highlighted the complex, multi-faceted nature of trust [29]. Moreover, in the VW context, the user may or may not trust several entities, such as the user community (or a specific subgroup within it), the company operating the service and the service as a whole. Thus, research focusing particularly on the nature and dimensions of trust in the VW context would offer a better understanding of the social context of VWs and, at the same time, uncover the role of the avatar-centric environment in the formation of trust. Finally, we used only cognitive absorption to empirically examine the key aspects of the VW user experience. However, people do not necessarily seek immersion and intense experiences from VWs, but rather a relaxing place to spend time and socialise with other users in a casual manner. Hence, further research could examine to what extent VW participation is perceived as relaxing or stress-relieving.
Millions of young people spend real money on virtual goods such as avatars or in-world currency, yet limited empirical research has examined their shopping behaviour in virtual worlds. This research delves into young consumers' virtual goods purchasing behaviour and the relevance of social context and usage experience. We assert that virtual goods purchasing behaviour is inseparable from the online platform in which it takes place. We employ the concept of cognitive absorption to capture the user experience and examine the social context with three variables: the size of one's in-world network, trust in the other users of the online platform, and social presence. We test our research model with data collected from 1,225 virtual world users, using PLS in the analysis. The results show that virtual goods purchasing behaviour is predicted by cognitive absorption, the perceived size of one's in-world network, and trust in the other users.
I. Introduction Over the last four decades, social assistance programmes for the poor have undergone dramatic expansions in most developing and transition countries. These programmes include a wide spectrum of cash and in-kind support for the needy, including conditional cash transfers, free healthcare for the poor, food aid, and public work programmes. The World Bank has facilitated and shaped this expansion through social assistance policy recommendations to national governments. Several scholars have illustrated that national governments have taken these recommendations into serious consideration in designing and redesigning welfare systems (Brooks, 2004; Deacon and Hulse, 1997; Radin, 2008). By political objectives, we refer to the full set of power-related intentions of governments to shape, control and transform the political actions of grassroots groups. We are interested in those political objectives that involve the struggles and interests of incumbent political parties and government institutions and that shape public policies in ways that structural (economic or demographic) forces would not necessarily have entailed, i.e. objectives that cannot be reduced to a mere reflection of structural dynamics. More specifically, political objectives refer to government concerns for the containment of social unrest, protest waves and popular political grievances in the broader sense, and for the mobilization of popular support, needed during times of intra-elite competition, wartime, and the mobilization of new blocs of supporters by new political leaders from outside established "intra-elite" circles. The latter may occur via populist political change, as in the case of India, Thailand, Brazil, Turkey, Russia, Ukraine, and South Africa, where governments appeal to the poor through populist policies, facing opposition from the middle classes and the elites (Ashman and Vignon 2014; Sridharan 2014; Onuch 2014; Singer 2014; Yörük 2014).
The existing literature on the expansion of social assistance programmes has predominantly focused on a wide array of structural factors considered by national governments, including rising poverty, unemployment, de-industrialization, and aging. The political motives of authorities have been mentioned in the literature, however, to a lesser extent. A number of studies look at political factors to explain the policies of the World Bank specifically (see e.g. Van Houten 2007; Barnett and Finnemore 1999, 2004; Toye 2009), but there is still much space left for empirical evidence. Moreover, these studies address different policy domains in which the WB operates, but not welfare and poverty reduction. --- II. Politicisation and existing literature Government concerns with political mobilization have accelerated as a consequence of the global rise of the poor as a key grassroots political group, which has grown substantially in its capacity to threaten or strengthen existing local and global political and economic regimes during the last decades. Mike Davis (2006) argues that the informal proletariat, which consists of workers and the poor who are excluded from formal social security nets and live on precarious grounds, has now become a new grassroots political agency, a source of both political threat and support for governments and the economic elite. The poverty of the informal proletariat usually interacts with existing ethnic, racial and religious inequalities and differences, and this contributes to prevalent political polarizations as well (Arrighi, 2009; Wacquant, 2008). Many scholars provide arguments and evidence for the claim that social assistance policies serve to stabilize politics in the contemporary world. Harvard economist Dani Rodrik drew attention to the political dimension of growing social policy programmes.
Rodrik (1997) argued that globalization created deeper class divisions between the rich and the poor that would be politically unstable. He suggested that a re-orientation from pensions to anti-poverty programmes would address the political challenges of globalization. A related literature argues that poverty and inequality increase the mobilizing capacity of terrorists and fuel civil and ethnic conflict (Auvinen and Nafziger, 1999; Gurr, 1970; Fearon and Laitin, 2003; Krieger and Meierricks, 2009; Paxson, 2002; Li and Schaub, 2004). The welfare-terrorism nexus fits within the broader literature on securitization and the security-development nexus (see e.g. Keukeleire and Raube 2013). Favourable social policy measures can assimilate oppositional movements by providing economic security and equality and hence undermining grievances (Chenoweth, 2007: 3; Burgoon, 2006; Taydas and Peksen, 2012). There has been a broader securitisation of policies after 9/11, so the question arises whether the politicization and securitization of welfare policies by the WB can be attributed to this broader trend. --- III. Social welfare as a political tool and the World Bank's de-politicising rhetoric In the post-war period up to the late 1970s, welfare systems in many countries worldwide were based on employment-based social security programmes, which, since the 1980s, have been gradually replaced by social assistance programmes targeting the poor (Goldberg and Rosenthal, 2002; Deacon and Cohen, 2011; Sugiyama, 2011). The literature explaining the welfare systems of the post-war period, i.e. the so-called Golden Age of Capitalism, or Embedded Liberalism (Ruggie, 1982), emphasized political and structural factors together: political factors such as the containment of unrest or the mobilization of popular support, and structural factors such as demographic changes or economic incentives.
According to Ruggie, embedded liberalism is the system in which economic and political elites pursued the double task of continuing the free market economy internationally while developing interventionist and welfare-based policies domestically. A modern welfare state was established in the Western world to sustain full employment, economic growth and social services under the auspices of US hegemony (Arrighi, 1990; Harvey, 2005). Over time, social security came to be seen in the Western world as a permanent feature of capitalism for its contribution to political stability and continuity (Katznelson, 1981). The welfare state provided the means for the political legitimacy necessary to contain the threat from grassroots groups, most importantly the working class movements (Goldberg and Rosenthal, 2002). Simply put, the welfare state functioned to contain social unrest (O'Connor, 1973; Olson, 1982; Offe, 1984; Fox-Piven and Cloward, 1971). While the literature on pre-1980 welfare systems thus emphasized both structural and political factors, most students of recent welfare system transformations have emphasized structural factors only, such as aging, labour informalization, unemployment, globalization, deindustrialization, the rise of poverty, and the rise of the service sector (Brooks and Manza, 2006; Pierson, 2001; see also Hall, 2007; Ruger, 2005; Radin, 2008; Tungodden et al., 2004). This literature has thereby largely under-examined the possibility that contemporary welfare system changes have also been affected by the political concerns of national and supranational institutions. --- The World Bank presents itself as working for "a world free of poverty" (the Bank's official motto [World Bank 2015]) and denies having political objectives in its policy recommendations (Miller-Adams, 1999: 5). This non-political character was laid down in the 1944 Articles of Agreement that established the workings of the Bretton Woods institutions (UN, 1944: 65; see also Van Houten, 2007: 653-4).
Reasons for this non-political character would be that it allows the Bank to cooperate with different types of regimes (Miller-Adams, 1999: 5) and provides the Bank with more authority and legitimacy in recommending policies to others (Barnett and Finnemore, 2004: 21). A 2011 study by the Independent Evaluation Group (IEG) evaluated WB social assistance programmes in third world countries (IEG, 2011: 57). The Bank claims that its anti-poverty work "generally does not directly support such objectives" (IEG, 2011: 38) and that it never gets involved in political matters (IEG, 2011: 65). However, a number of studies have found clear references to political considerations in World Bank documents and have situated World Bank policies in a broader framework of politics and globalization (see for instance Goldman, 2005; Van de Laar, 1976; Benjamin, 2007; Woods, 2006; Miller-Adams, 1999; Barnett and Finnemore, 1999). As argued by Barnett and Finnemore, the Bank presents its work as technical rather than political (2004: 21; also see Van Houten 2007: 653). The World Bank's donor states influence the Bank's decision-making (Weaver and Leiteritz, 2005: 371), with the US remaining the most influential one (Fleck and Kilby, 2006: 224; see also Weaver, 2008: 1; Morrison, 2013: 297). Donors impose their interests through (i) direct appointment of the leadership cadres of the Bank, (ii) donating the majority of the funds, and (iii) the threat of denying the Bank access to national private capital markets in case the Bank declines donor demands (G, 1994: 56, cited in Weaver, 2008). The Bank would thus be situated somewhere in between being an instrument of powerful states and having a bureaucratically driven autonomy (idem: 6; see also Toye, 2009: 299). The Bank, like other bureaucracies, has developed an internal culture, consisting of ideologies, norms, values and power relations, that creates particular interests distinct from those of member governments (Barnett and Finnemore, 2004: 19, citing Alvesson, 1993).
However, as noted by Weaver, since the mid-1990s there has been a shift in the Bank's discourse towards a greater emphasis on political factors, as a result of the changing external environment: with the easing of Cold War tensions, the Bank no longer shied away from these inherently political areas of development (Weaver, 2008: 92). Moreover, having recognized the devastating effects of the crude neoliberalism of the 1980s, the Bank has modified its agendas to include such policies since the mid-1990s. Thus, the Bank is a rationally organized bureaucratic structure whose bureaucracy has well-defined interests and objectives. In order to attain these objectives and maximize bureaucratic interests, the Bank has to negotiate with member governments, which structurally leads the Bank in a conservative direction. Domestic and international political interests of donor and client governments are thus negotiated within the relatively autonomous structure of the Bank and translated into politically driven policy recommendations. To the extent that member states have tendencies to contain unrest and mobilize popular support (which is not a rare case), the Bank is drawn into recommending mechanisms to do so. It is fairly possible for the Bank to recommend social assistance for political containment or mobilization because the clients, as political actors, tend to internalize policy recommendations that would function politically at home. In short, politically useful recommendations might find customers more easily (Toye, 2009: 305). On the other hand, due mainly to the encouragement of donor governments, the Bank also functions as an overseer of global political stability. World Bank donor states may see their longer-term security interests defended if one considers the objective of stability, which in turn should lead to less migration and fewer threats (including that of terrorism). Social unrest is thus framed as a security concern (Keukeleire and Raube, 2013: 557).
This is different from creating particular recipes for particular governments: it means generating blueprints for political stabilization through social assistance programmes that can be modified and adapted to each case. The Bank, in that sense, is politically conscious and concerned for global grassroots political stability. In addition to existing structural factors, these political objectives have become increasingly prominent.
--- IV. Research and empirical findings
The following section will systematically assess how the arguments raised in the literature about welfare spending as a means of political containment and mobilization may apply to the policy recommendations of the World Bank as well. The test is whether concerns about social unrest appear in World Bank documents. We will examine the extent to which World Bank social assistance policy recommendation documents include references to political concerns for containment of social unrest and mobilization of popular support. We will also analyse how the references to these concerns have changed over time.
--- World Bank documents and political objectives
NVivo 10 was used to code all relevant documents. The following section will present the trends and patterns that were highlighted through the analysis. Subsequently, the article will provide an analysis of the content of these social policy recommendation documents and show the ways in which the World Bank has proposed social assistance as a political instrument.
--- Trends over time
The results show that the World Bank has explicitly discussed social assistance as an instrument of political containment and mobilization in more than one quarter of all documents: 116 out of 447 reports. Specifically, 48 documents referred to welfare as a tool for political mobilization; 57 documents for political containment; and 11 to both containment and mobilization.
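The classification above was produced by manual coding in NVivo 10, which is interpretive work; still, the mechanical first pass of such a content analysis, flagging which documents contain any of the coding keywords at all, can be sketched in Python. The corpus, the keyword subset and the function names below are hypothetical illustrations, not the study's actual procedure:

```python
# Hypothetical first-pass keyword flagger; the study's actual coding was
# done manually in NVivo 10 and required interpretive judgment.
# KEYWORDS is a subset of the coding terms listed in the article's notes.
KEYWORDS = {"unrest", "elections", "populist", "dissident", "turmoil",
            "strike", "demonstration", "conflict", "stability"}

def flag_document(text):
    """Return the coding keywords that appear in a document's text."""
    words = {w.strip(".,;:()\"'").lower() for w in text.split()}
    return sorted(KEYWORDS & words)

def count_flagged(corpus):
    """Count documents that contain at least one coding keyword."""
    return sum(1 for doc in corpus if flag_document(doc))
```

Documents flagged this way would then go to a human coder to decide whether the reference concerns containment, mobilization, or both.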
The time series analysis of the number of World Bank documents that contain political references has furthermore shown two main trends: 1) The number of documents that include references to political containment or mobilization has increased over time, especially after the 2000s (Figure 1). 2) In relative terms, the percentage of documents with reference to political containment vis-à-vis political mobilization has increased after the 1990s (Figure 2). The World Bank furthermore argues that, in order to mitigate conflict, countries need to provide socioeconomic growth among all the significant regional, religious and ethnic groups. With reference to the genocide, one report argues that reducing poverty is critical "(...) also as a means to improve the political base by providing free public services or lucrative service-related jobs". Another report describes the universal healthcare programme in Thailand, the so-called 30-Baht Gold Card system, as an instrument that the Thai governing party used to win elections. Furthermore, the Bank notes that new public work programmes have usually been introduced before elections, such as in Bangladesh, India, Indonesia, Pakistan, Jamaica and the Philippines (WB/Subbarao et al., 1997: 95). In 2010, the World Bank stated that the Zambian government used the so-called Fertiliser Support Programme for similar purposes.
Another political objective of social protection programmes is to ease the acceptance of economic reforms which may otherwise not have been accepted by society (WB/Barr, 1995: 3; see also WB/Bigio, 1998: 178). The World Bank believed that political instability is a hindrance for policy implementation (IEG, 2005: 108). The Bank in fact seems to apply social safety nets as a means to institute reforms: an IEG report notes that a social safety net (SSN) can be installed at the same time as economic reform (IEG, 2011: 66). Thus, resistance or political opposition against the reforms can be avoided and, as such, governments would prevent insecurity and instability.
To this end, the reforms should be pushed through fast and have an extensive scope (IEG, 2011: 66, 74). Safety nets may also serve to help win support in elections and to demonstrate government legitimacy in order to gain social acceptance (IEG, 2011).
2. Coding keywords: (social) unrest, elections, populist, dissident, (un)employment, turmoil, political, security, threat, strike, demonstration, contain, conflict, stability, safety, health, target, violen(ce), risk.
--- V. Discussion and conclusion
2. These strategies are not solely applied in policies towards developing countries: also in the case of Germany, certain economic developments have deliberately been made socially acceptable through means of social security benefits (G20, 2003: 64).
--- Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
--- Author biographies
Background In 1994 the new South African government declared the overall priority of eradicating poverty and removing inequities: socioeconomic inequalities and differential access to services that are unfair or unjust [1]. As a result, the government created the Reconstruction and Development Programme (RDP) to reduce poverty and distribute income more evenly. Less spending was to go to the military, and more to education, housing, and health, including the building and upgrading of clinics and promises of free health care to children under six and pregnant mothers [2]. The RDP, however, fell out of public view within two years, and the ministry overseeing it was abolished [3]. It was criticized by some as a short-sighted programme of basic needs fulfilment [4]. In 1996, to help meet the goals of the RDP and respond to neoliberal influences, the government adopted the Growth, Employment & Redistribution (GEAR) macroeconomic policy. GEAR was intended to reduce the role of the state and increase corporate and private investment [5,6]. GEAR was publicly proposed as a way to provide a fast-growing economy, create jobs, redistribute income, and hasten universal access to basic needs [7]. Consistent with a focus on decentralization, the national, provincial, and local governments adopted local economic development (LED) strategies which aimed to reduce poverty and increase employment through local initiatives and solutions [8]. LED encourages communities to take control to stimulate economic growth through community-based initiatives and local skills, resulting in increased opportunities, community empowerment and self-reliance [9]. One of the policies adopted along with LED was the creation of Spatial Development Initiatives (SDIs) in 1997 by the South African Department of Trade and Industry (DTI), intended to promote and encourage private investment and development in areas considered to have the greatest potential for growth.
The SDIs focussed on short-term interventions designed to attract private sector investment, to stimulate growth of locally owned small, medium and micro-enterprises (SMMEs), and to empower local communities [10]. They identified and sought to address bottlenecks to investment, such as inadequate infrastructure (water, roads, electricity, and communications) [11]. Development was concentrated in relatively small areas rather than thinly spread across larger regions or provinces [5]. The SDIs were expected to benefit rural communities through increased employment, improvement of local infrastructure, and income from leasing out lands [12]. Since 1994, there have been some successes nationally as a result of the national and local development strategies. These include new health clinics, schools, housing, and improved water facilities [2]. Yet there are also reports that many South Africans have become disillusioned at the lack of progress, particularly with regards to standard of living and employment. For example, unemployment rates rose from 19% in 1996 to 29% in 2001 [13]. Economic growth rates have been modest at best and South Africa is often seen as one of the most unequal societies with regards to distribution of income [14]. Some authors have described a dual economy, the "first economy" containing the industrial, mining and agricultural sectors that produce wealth, while the "second economy" is characterised by poverty and underdevelopment [15][16][17]. Concerns exist around health care as well. Many South Africans criticize the government's handling of the HIV/AIDS epidemic [18], one of the leading causes of the ten-year decline in life expectancy estimates between 1996 and 2002 [19]. Furthermore, public spending is rarely optimised towards the poor [20,21]. The wealthiest provinces receive most health care expenditure, and since 1999 there has been increasing emphasis on privatized health care which the poorer regions cannot afford [22].
This leaves the most vulnerable less likely or unable to access health services when they need them, leading to higher risk of poor health, increasing the burden of future health costs, and reducing their ability to seek employment or farm their own lands [23,24]. The Eastern Cape is one of nine provinces in South Africa and is located along its south eastern shore. It was formed in 1994 with the new government, encompassing the former Transkei and Ciskei Xhosa homelands. It is one of the poorest provinces in the country and by 2001 only one-fifth of the population was employed [25]. It has two large cities - Port Elizabeth and East London - but much of the region is rural and relies on subsistence farming. The province is home to the Wild Coast region, located along its north eastern coast (Figure 1). After years of labour migration under the apartheid system, by 1994 the Wild Coast population was predominantly female and unemployed. At that time the Wild Coast had little access to clean water or public service infrastructure. Unemployment was higher than the national average and, much like the province as a whole, nearly three-quarters of the Wild Coast population lived in poverty [4]. The region also faces many health threats including HIV/AIDS and tuberculosis [26]. The Wild Coast SDI started in 1997 with a particular focus on tourism and SMME development. Agriculture and forestry were other sectors identified to stimulate growth, with private companies partnering with communities [12]. Such initiatives were expected to create economic opportunities for local populations, particularly women. The SDI identified four high-potential coastal "anchor" areas as the focus for public and private investment: Mkambati, Port St Johns, Coffee Bay and Dwesa/Cwebe. SDI planners felt that intensive investment in these four areas would spill out and spur economic development in the rest of the Wild Coast region.
In partnership with the Eastern Cape Socio-economic Consultative Council (Ecsecc), CIET assessed the Wild Coast SDI over several years. While coverage of basic needs (such as water and health) was not an explicit goal of the SDI, early feedback from community-based evaluation of the SDI showed that, unless these were met, the initiative had little chance of success. A 1997 baseline study showed that people in the region were unaware of what they could do to improve their socio-economic conditions. There were high levels of unemployment and lack of food security, a low proportion of households obtained their water from protected sources such as taps, there was a substantial degree of corruption in the public services (including health), and little knowledge of the SDI project itself [27]. Follow-up surveys in 2000 and 2004 showed little evidence of increased economic opportunities [28,29]. The Wild Coast SDI was terminated after the 2004 evaluation. Responsibility for development of the region moved from the DTI to the Department of Environmental Affairs and Tourism (DEAT). Newer initiatives in the area have included the EU community-based tourism initiative, the controversial N2 toll road, the establishment of the Pondoland National Park and a new Wild Coast Development Project. It remains unclear how these new initiatives intend to decrease poverty and improve health in the region, as they seem poised to repeat the shortcomings of the SDI. An additional 2007 follow-up survey of the same communities provided an opportunity to examine the inequities detected in the original 1997 baseline (such as access to clean water, food security, household construction, education and employment), and how such inequities affect access to health care. --- Methods The methods relied on standard CIET social audit protocols [30,31].
We stratified the last stage random sample of twenty communities by anchor/non-anchor status, geographic location (such as coastal/non-coastal), proximity to infrastructure, and road accessibility. The 2000, 2004 and 2007 follow-up surveys returned to the sites of the 1997 baseline. Data collection instruments across the different cycles included household questionnaires and community profiles. We additionally shared and discussed preliminary findings with the participating communities through gender-stratified focus groups. We translated all instruments into isiXhosa and then non-members of the research team translated them back into English to ensure questions remained true to their intended meaning. We piloted the instruments extensively before fieldwork to refine them, test for clarity and ensure proper translation. The CIETinternational ethical review board reviewed the study and granted ethical clearance. Fieldworkers recorded and stored household data without any identifying fields, ensuring confidentiality of the respondents. We maintained confidentiality of the sample community identities as much as possible, especially with regard to the non-anchor areas. The exact sample sites were not included in any reporting. Data entry and analysis relied on the public domain software EpiInfo [32] and the open source analysis and geomatics software CIETmap [33]. We adjusted indicators to account for the effect of uneven sampling, and report weighted results. We examined associations between factors in bivariate and then multivariate analysis using the Mantel-Haenszel procedure [34]. Multivariate models took into account potential household inequities such as non-anchor status, household crowding, access to protected sources of water, roof construction, main food item purchased, and perception of community empowerment. Individual level models additionally accounted for age, sex, education and income earning opportunities.
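The bivariate and stratified analyses described above rest on odds ratios pooled with the Mantel-Haenszel procedure. As a minimal sketch with made-up 2x2 counts, using the simple Woolf confidence interval rather than the cluster-adjusted estimator the study actually applied, the core calculations are:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio with a Woolf (log-normal) 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of the log odds ratio
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper

def mantel_haenszel_or(strata):
    """Mantel-Haenszel odds ratio pooled across 2x2 strata,
    each stratum given as a tuple (a, b, c, d) as above."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den
```

For example, `odds_ratio_ci(20, 80, 10, 90)` gives a crude odds ratio of 2.25; pooling strata with `mantel_haenszel_or` adjusts such an estimate for a stratifying factor, which is the role the ORa values play in the tables that follow.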
For access to health services, we made separate models for men and women, limited to those aged 18-65 in order to account for income earning opportunities. We adjusted for clustering using a method produced by Gilles Lamothe based on a variance estimator to weight the Mantel-Haenszel odds ratio for cluster-correlated data, described elsewhere [35]. We describe associations using the Odds Ratio (OR), indicating where this is adjusted by stratification (ORa), accompanied by the cluster-adjusted 95% confidence interval (CIca). Averages are accompanied by a measurement of the standard error (se) and the total number (n). We derived measurements of trend using the Mantel-Haenszel extension [36]. Some indicators were not collected or not comparable in 1997; for these, trends compare 2000 to 2007. We imputed ten additional datasets using the Amelia II program for missing data [37] to test how missing data would affect the final models. These tests showed little effect on the final models, so we report the original results. --- Results --- Socio-economic indicators Household characteristics In 2007, we collected data from 2401 households. Respondents provided information about 8496 individuals. Among these, 57% (4830/8478) were female, a nearly identical proportion to previous years. Average household size in 2007 was 3.7 people (SD 2.2, n=2378), the same as in 2004 but lower than in 1997. One-third (777/2322) of households were made of mud with grass thatch roofs, a significant reduction from previous years (Table 1). --- Community empowerment - hearing about and having a say in development When asked in 2007 what development projects respondents had heard about in their area, only one-quarter of household respondents could name something (500/2026). Only one respondent mentioned the SDI by name when asked about development projects in 2007. Among those who had heard of any development projects, only half (246) felt they had a say in it.
--- Sources of water Just over half (1284/2359) of households in 2007 got their water from a relatively protected source, such as a tank or tap. Households made of mud and grass, and households who bought basics as their main food item, were less likely to have protected sources of water (Table 2). There has been a significant and steady increase in households having access to protected sources of water since the baseline, from 20% (550/2455) in 1997 to 52% (1284/2359) in 2007 (χ² trend 756.4, p=0.00000). The increase is consistent across different household types, for example, both among those with tin roofs and among those with grass roofs, yet inequities remain between the two (Figure 2). --- Food In 2007, 85% (1959/2317) of households purchased basics such as maize as their main food item. The proportion purchasing basics was lower than in previous years (Table 1). Despite the objective of the SDI to generate small and medium economic activity, only 7% (282/4160) owned a business in 2007. Those from larger households, and men, were less likely to own their own business in 2007 (Table 2). The proportion who owned a business was nearly identical to the proportions from previous years (Table 3). --- Household loans and credit In 2007, some 16% (377/2230) of households had loans. This is the same proportion as in 1997 (417/2471) and 2004 (391/2256) but much lower than in 2000 (41%, 951/2302; χ² trend 30.407, p=0.00000). There has been a significant increase of households reporting emergencies as the purpose of their loans, from 4% (13/408) in 1997, less than 1% in 2000 (2/917) and 2004 (1/388), to 13% (42/368) in 2007 (χ² trend 49.425, p=0.00000). When asked about the source of their loan, 56% (204/364) claimed they got their loan from a loan shark, a source which has seen a dramatic and consistent increase since the baseline (1997: 2%, 6/415; 2000: 2%, 21/871; 2004: 35%, 138/386; χ² trend 570.469, p=0.00000).
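The χ² trend statistics in this section come from the Mantel-Haenszel extension for trend. A Cochran-Armitage-style test for linear trend in proportions, shown here with hypothetical counts rather than the survey's data, can be sketched as:

```python
def chi2_trend(counts, scores):
    """Cochran-Armitage-style chi-square test for linear trend in proportions.
    counts: list of (successes, total) per ordered group;
    scores: numeric score for each group (e.g. the survey year)."""
    N = sum(n for _, n in counts)
    R = sum(r for r, _ in counts)
    pbar = R / N  # overall proportion across all groups
    t = sum(x * (r - n * pbar) for (r, n), x in zip(counts, scores))
    sx = sum(n * x for (_, n), x in zip(counts, scores))
    sxx = sum(n * x * x for (_, n), x in zip(counts, scores))
    var = pbar * (1 - pbar) * (sxx - sx * sx / N)
    return t * t / var  # compare against chi-square with 1 df
```

Plugging in the reported yearly counts for, say, access to protected water would reproduce the kind of trend statistic quoted above; a large value on 1 degree of freedom corresponds to a very small p-value.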
--- Access to health services Accessed in the last year: in each year, the proportion of female residents who had accessed health services increased across the age groups (under 18, 18-65, 66+). In each year, a lower proportion of male residents of working age (18-65 years) accessed health services than in the two other age groups. Additionally, a lower proportion of those 18-65 accessed health care in 2007 than in 2000. Among those aged 66+, higher proportions of men have accessed health services since 2000 (Figure 3). For both male and female residents aged 18-65, those from less crowded households (4 or fewer people) were more likely to have accessed health services in the last year; those from households without loans were less likely to have accessed health services in the last year (Table 2). Type of health service used: There was an increase in the use of government services and a corresponding decrease in use of private services and hospitals, particularly among women (Table 4). Those with an income from wages or a business (Figure 4) and those from houses with tin roofs were less likely to have visited a government or a traditional health service (Table 2). Among female residents aged 18-65, those with an income earning opportunity, those from houses with tin roofs, and those who purchased non-basics as their main food item were less likely to have visited a government or traditional health service; and women from anchor areas were more likely to have visited a government or a traditional health service than those from non-anchor areas (Table 2). --- Choice of health service The most common reasons cited for choosing government health clinics were proximity, cost, and feeling there was no other choice. The most common reasons for choosing private clinics were good service, good medication, feeling there is no other choice, and referrals. Reasons for choosing government and private health services were nearly identical for male and female residents (Table 5).
Among male residents aged 18-65 years, those with an income earning opportunity were more than twice as likely as those without an income earning opportunity to have chosen the health institution on their last visit for better service, better medication, or referrals (as opposed to reasons such as proximity, cost, or lack of choice) (Table 2). We found the same for female residents aged 18-65, but only among those who lived in houses that did not receive any income from migrant workers (ORa 2.47, CIca 1.55-3.95). --- Attention needed Some 8% (153/1781) of users of government clinics attended for prevention reasons like immunisation, while 2% (2/252) of users of private clinics attended for prevention reasons. A much lower proportion of male than female residents attended a health institution for prevention reasons; only 5/642 men in 2007 did so (Figure 5). Among female residents aged 18-65, those with some formal education were nearly eight times more likely to have accessed a government health service for prevention reasons than those with no formal education; and those who lived in households that received income from migrant workers were less likely to have accessed government health service for prevention reasons than those who lived in households that had not received income from migrant workers (Table 2). --- Waiting times Users of government facilities reported longer waiting times than users of private clinics; and female users overall reported longer waiting times than did male users in government facilities. However, in 2007, female users of private clinics reported lower average waiting times than men (Table 6). Among male users (aged 18-65) of government clinics, those from households without loans were twice as likely to report waiting less than one hour for service than those from households with loans (ORa 2.06, 95% CIca 1.19-3.59). This was similar for female users as well (ORa 1.52, 95% CIca 1.20-1.92).
However, additionally among female users of government clinics, those with an income earning opportunity, those from less crowded households, and those whose main food item was not basics were less likely to have waited less than an hour for service (Table 2). --- Payments at government clinics Payments at government clinics have decreased significantly overall since 2000 for both male and female health service users: only 2% (57/3432) of users of government clinics in 2007 claimed they paid something for their service on their last visit, fewer than in 2000 (26%, 1159/5571) and 2004 (7%, 201/4313) (χ² trend 928.49, p=0.00000). We found no evidence of a difference by sex, age group or other socio-economic characteristics in the rare report of having made a payment in 2007. --- Discussion The Wild Coast has seen development improvements since 1997, including increased access to protected sources of water and a marginal increase in employment. Pronounced inequities, such as differential access to health care based on education and income, were still evident in 2007. --- Water Since the baseline in 1997, water supply from protected sources increased from 20% to 50% in 2007. Yet the proportion with access to protected sources in the Wild Coast region is lower than the provincial average (70%) and lower still than the national average (88%) [38]. This still leaves half of the population of the region without a protected source, making them susceptible to water-related illnesses such as diarrhoea and cholera. Reported improvements in access to water supplies in the Eastern Cape overall are offset by reports of poor water quality, particularly in rural areas [39,40], and it is possible that those with "protected" sources are not much better off than those without.
As access to clean and safe water directly impacts on health and income potential, community and district capacities for ongoing and consistent monitoring and testing must be developed alongside improvements in water infrastructure. Priority must also be given to ensuring that water provision and quality in the Wild Coast increase to meet provincial and national standards. --- Income and employment The SDI aimed to increase employment and to promote entrepreneurship. There has been no increase in the number of respondents considering owning their own business, or of those who actually do own a business. Employment levels among adults increased gradually from 15% in 2000 to 20% in 2007, but this still leaves a majority without work. As with water, employment rates within the region are still below the provincial and national rates [41], leaving the Wild Coast region largely in the same economic shape as before the initiative. Importantly, the most vulnerable (such as those with less education, and less water and food security) are less likely to have worked for wages, leaving them with little chance of improving their standard of living. Loan sharks have prospered as the main source of household loans. Increasingly, loans are taken to respond to household emergencies; few are for starting businesses or creating income opportunities. --- Access to health services Fewer male than female residents accessed health services and, among those who did, very few did so for preventive reasons. Lower rates of men's access may be explained by women's increased interaction with the health service through antenatal care. Yet other studies have found that men also tend to wait until they see signs of illness before seeking help or attention (such as testing for HIV) [35,42]. Striking differences in health care access exist between the most and least vulnerable within the region.
Women with some formal education were nearly eight times more likely to access health services for prevention reasons, in comparison with those with no formal education. For both male and female residents, income was strongly related to the type of health clinic visited, and the reason for doing so, consistent with results found in KwaZulu-Natal [43]. Those with less income were more likely to visit government services, reporting determinants of cost and distance; users of private clinics sought out better service and medication. Lower food security and poorer house construction were also associated with women visiting government, not private, health services. Each of the male and female focus groups discussed a lack of satisfaction with government clinics, stressing poor service and a lack of privacy as key concerns. Additionally, medication was reportedly either missing or expired, and several focus groups stated that patients were given "Panado" regardless of their ailment. Average waiting times were also consistently lower for users of private clinics than for users of government clinics. Despite this, the proportion using government clinics increased. Payments at government health clinics for free services were nearly non-existent by 2007, an indication that corruption in the form of unofficial payments is no longer an issue. This is promising, as it frees up household resources for other needs. Focus groups still complain about favouritism among the nurses and doctors at the clinics, and removing user fees for service does not help those who need medicine that is unavailable. Although unique as a detailed follow-up of health care and development in the Wild Coast, there are some limitations to this study. The cross-sectional design only allows us to report associations and limits what we can conclude about causality.
For example, when we state that those from households with unprotected sources of water were less likely to have worked for wages in the previous month, we cannot attribute causality in one direction or another. Secondly, we can report with some confidence on trends over time, but we are unable to provide individual linkages through the years as one might in a longitudinal study that follows the same individuals each year. --- Conclusion The government's economic and development initiatives since 1994 have failed to meet their short-term goals in the Wild Coast region, particularly with regards to employment and health. Policies such as the RDP and GEAR set out to improve quality of life, redistribute wealth in a more equitable manner, and increase economic activity in the most vulnerable areas. Yet much of the Wild Coast region was still without clean water in 2007 and the majority were unemployed. Much of the economic growth in the country as a whole since democracy has taken place in the larger urban centres, with smaller towns and rural areas falling further behind [44]. LED strategies aimed to stimulate growth locally and empower communities but there is little evidence of this happening in the Wild Coast, consistent with evidence nationally that suggests LED successes have been modest at best, and primarily located in larger, well-resourced cities [8]. The Wild Coast SDI sought to increase economic activity and foster the growth of SMMEs, yet there is no evidence of an increase in locally owned businesses or even the consideration of ownership. Furthermore, development initiatives seem to have failed in increasing access and improving health services, even though these were identified early on in the process as crucial for their success. By 2007, residents still complained of poor service and a lack of medications in government health clinics and there are still socioeconomic inequities in terms of access, particularly for preventative reasons.
One might argue that development takes time and that the full effects of the initiatives have not yet been felt, although a decade of repeated and consistent measurement makes this unlikely. The Wild Coast region still falls well below provincial and national standards in key areas such as access to clean water and employment. Inequities in access to health services leave the most vulnerable in a continued negative cycle, as poor health impacts negatively on income generating opportunities and increases the burden of health costs for households that are already struggling to survive. --- Authors' contributions SM contributed to instrument design, conducted data analysis and drafted the manuscript. NA designed the study, developed the methodology, and contributed to the analysis and the drafting of the manuscript. Both authors have approved the final manuscript. --- Competing interests The authors declare that they have no competing interests. Published: 21 December 2011
Background: After the 1994 election, the South African government implemented national and regional programmes, such as the Wild Coast Spatial Development Initiative (SDI), to promote economic growth and to decrease inequities. CIET measured development in the Wild Coast region across four linked cross-sectional surveys (1997-2007). The 2007 survey was an opportunity to look at inequities since the original 1997 baseline, and how such inequities affect access to health care. Methods: The 2000, 2004 and 2007 follow-up surveys revisited the communities of the 1997 baseline. Household-level multivariate analysis looked at development indicators and access to health in the context of inequities such as household crowding, access to protected sources of water, house roof construction, main food item purchased, and perception of community empowerment. Individual multivariate models accounted for age, sex, education and income earning opportunities. Results: Overall access to protected sources of water increased since the baseline (from 20% in 1997 to 50% in 2007), yet households made of mud and grass, and households who bought basics as their main food item, were still less likely to have protected sources of water. The most vulnerable, such as those with less education and less water and food security, were also less likely to have worked for wages, leaving them with little chance of improving their standard of living (less education OR 0.59, 95% CI 0.37-0.94; less water security OR 0.67, 95% CI 0.48-0.93; less food security OR 0.43, 95% CI 0.29-0.64). People with less income were more likely to visit government services (among men OR 0.28, 95% CI 0.13-0.59; among women OR 0.33, 95% CI 0.20-0.54), reporting decision factors of cost and distance; users of private clinics sought out better service and medication.
Lower food security and poorer house construction were also associated with women visiting government rather than private health services. Women with some formal education were nearly eight times more likely than women with no education to access health services for prevention rather than curative reasons (OR 7.65). While there have been some improvements, the Wild Coast region still falls well below provincial and national standards in key areas such as access to clean water and employment despite years of government-led investment. Inequities remain prominent, particularly around access to health services.
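The associations above are reported as odds ratios with 95% confidence intervals. As an illustration of the arithmetic behind such estimates (the 2×2 counts below are invented for demonstration, not taken from the survey data), a crude odds ratio and its Wald interval can be computed as follows:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
       a = exposed with outcome,   b = exposed without outcome,
       c = unexposed with outcome, d = unexposed without outcome."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Illustrative counts only: an OR below 1 with a CI excluding 1
# mirrors protective/disadvantage findings such as OR 0.43, 95% CI 0.29-0.64.
or_, lo, hi = odds_ratio_ci(30, 70, 60, 60)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```

The published models adjust for covariates, so the real estimates come from multivariate regression rather than a single crude table; this sketch only shows how an OR and its interval relate.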
Introduction The effect of peer passengers on teenage drivers' crash risk has received considerable research attention. The majority of states (90%) in the United States limit the number of passengers in the vehicle during the first few months a teenage driver is licensed to drive independently (Insurance Institute for Highway Safety 2014). These policies are supported by epidemiological and observational studies suggesting that the presence of peer passengers increases fatal crash risk (Chen, Baker et al. 2000; Ouimet, Simons-Morton et al. 2010) and risky driving behavior, particularly if those peers are young males (Simons-Morton, Lerner et al. 2005). However, research suggests the detrimental effect of the presence of peer passengers may not hold true under all conditions. For example, simulator studies have found that peer passengers increased some but not all risky driving behaviors (Ouimet, Pradhan et al. 2013), and the presence of peer passengers may improve reaction times for teenage drivers (Toxopeus, Ramkhalawansingh et al. 2011). This indicates there may be specific circumstances in which peer passengers increase risk, and others in which they promote safer driving. Teen drivers' perceptions of their peer passengers' influence represent a potentially valuable source of understanding of the conditions under which peer passengers increase crash risk or promote safer driving. A previous self-reported survey study found that teen drivers did not perceive the presence of peer passengers to increase their crash risk unless they created distractions or encouraged dangerous behaviors (Ginsburg, Winston et al. 2008). Few studies have used qualitative research methods to examine teen drivers' perceptions of their peer passengers' presence in the vehicle and their potential contribution to crash risk.
The purpose of this study was to examine teen drivers' perceptions of their peer passengers using semi-structured interviews with questions focused on distraction and social influences. --- Methods --- Participants A convenience sample of 42 newly licensed male and female drivers participated in an extensive 18-month study of new drivers, including vehicle instrumentation, periodic surveys, test track driving assessment and a semi-structured exit interview (The Naturalistic Teen Driving Study) (Lee, Simons-Morton et al. 2011). Among eligibility criteria, participants were required to be younger than 17 years of age and to have obtained a provisional driver's license allowing independent driving within the past three weeks (see Lee et al., 2011 for more details). At the time of the study, all newly licensed teenage drivers were subject to the passenger restriction of the State of Virginia for newly licensed teens (effective in 2003), which limited the number of passengers to no more than a single passenger younger than 18 for the first 12 months of driving, and no more than three passengers younger than 18 thereafter (Insurance Institute for Highway Safety 2012). Sampling was stratified in order to have similar numbers of males and females and of drivers sharing or not sharing the vehicle with their parents. Among the exclusion criteria, drivers with diagnosed attention deficit disorder, with or without hyperactivity, were excluded (see Lee et al., 2011 for more details). The 41 (one participant was lost to follow-up) interviews analyzed in the current study were conducted at the end of the 18-month study on driving behavior. The interview was designed as an exit interview with direct questions regarding drivers' experiences with passengers and was not originally designed to provide qualitative data. A trained research assistant at the Virginia Tech Transportation Institute conducted the interviews.
The protocol was reviewed and approved by the Virginia Tech Institutional Review Board for the Protection of Human Subjects; parent consent and assent for teen participation were obtained. The semi-structured interviews included items on perceptions of their driving over the last 18 months, including participants' perception of the instrumentation of their cars, effects of passengers, secondary task engagement, and their driving skills. The focus of this report is drivers' perceptions of the effects of peer passengers on their concentration while driving. Participants were asked a series of open-ended questions about how male and female passengers affected the driver's level of distraction and concentration, with questions asking specifically whether passengers' comments and presence affected the way they drove (see Table 1 for core questions). Interviews were digitally recorded and professionally transcribed. The average length of each interview was 45 minutes and 58 seconds. The questions on passengers comprised one of seven sections in the interview guide; responses about passengers were coded wherever they occurred. Transcripts were entered into ATLAS.ti software (Version 7.0). This software allows text to be coded and retrieved for ease of summarization and interpretation (Strauss and Corbin 1998). Content analysis of participants' responses was conducted, taking an inductive approach. Our research team, including an injury epidemiologist with expertise in young driver research and a psychologist with expertise in qualitative methods and adolescent development, reviewed four transcripts (2 male, 2 female) to identify an initial list of themes. A coding manual was developed based on these four interviews and modified as subsequent interviews were coded. Additional codes were added to represent subthemes and to accommodate new themes that emerged as the coding progressed. Complex passages of text could be assigned multiple codes to adequately capture content.
Double coding was used to improve trustworthiness and rigor (Strauss and Corbin 1998). Two coders were responsible for coding all transcripts. Meetings among coders and senior researchers were held weekly to ensure consistency and resolve coding discrepancies. A systematic review of the text assigned to specific codes was performed after the first 10 interviews, and any identified adjustments to the coding scheme were implemented, with previously completed transcripts re-coded as necessary. Compilation of coded text by themes was examined by the project team and the findings summarized. All passages coded as referring to male passengers, female passengers, teen passengers, group passengers, and driving alone were reviewed for this report regardless of other codes assigned. The quotes included in this report were edited for readability (removing extra words such as "like" and "I mean"), but without changing meaning. --- Results --- Participants Of the 41 young drivers, 48.7% were male, the majority were white (92.7%), and the mean age at recruitment was 16.4 (SD 0.3). All had been driving independently for the past 18 months. Two of the interviews contributed limited text due to equipment failure resulting in a loss of recording. --- Themes regarding peer passengers The report of themes that emerged from the interviews regarding young drivers' perceptions of peer passengers is structured around the two organizing topics of distraction and mechanisms of social influence. Included are themes that were expressed by many of the participants, as well as themes that were relevant but expressed by fewer participants. Furthermore, for several themes multiple viewpoints were expressed, often in contrast with one another, reflecting diversity in the perceptions of young drivers. These findings are described below. --- Distraction Drivers responded to multiple questions about how having passengers affected their level of distraction and concentration.
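The study resolved double-coding discrepancies through weekly consensus meetings rather than reporting an agreement statistic. Where a quantitative check of inter-coder agreement is wanted, a common choice is Cohen's kappa; the sketch below uses invented codes for ten passages, purely for illustration:

```python
from collections import Counter

def cohens_kappa(coder1, coder2):
    """Cohen's kappa for two coders' labels on the same set of passages:
    observed agreement corrected for agreement expected by chance."""
    assert len(coder1) == len(coder2)
    n = len(coder1)
    observed = sum(x == y for x, y in zip(coder1, coder2)) / n
    c1, c2 = Counter(coder1), Counter(coder2)
    expected = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical theme codes for ten passages (not from the study's data):
a = ["distraction", "social", "distraction", "social", "distraction",
     "social", "distraction", "distraction", "social", "distraction"]
b = ["distraction", "social", "distraction", "social", "social",
     "social", "distraction", "distraction", "social", "distraction"]
print(round(cohens_kappa(a, b), 2))  # one disagreement out of ten
```

Kappa above 0.8 is conventionally read as strong agreement; consensus meetings, as used in the study, remain the standard way to resolve the disagreements a statistic only counts.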
When asked directly, almost all drivers acknowledged that passengers were a distraction; however, most drivers described that distraction as "a little" or "not much". Some perceived the risk but also suggested they were able to manage it. They affected it, but... I don't think it was a big distraction,....they really wouldn't distract me, and even if they would try,.... I'd be watching the road, so....my first awareness would be the road A smaller subset of drivers described having passengers as very distracting. Notably, when asked about the effect of having multiple versus a single passenger, the majority of drivers recognized that multiple passengers detrimentally affected their driving, and provided vivid descriptions of the distraction that having multiple passengers can pose. Participants described losing control of the in-vehicle environment due to increased talking and movement in the vehicle, using statements like "messing around... in the backseat," "punching each other," "ridiculously loud," "definitely more hectic than a drive by myself," and "mayhem". This example is typical. As a group they [male passengers] were very distracting. And often they would mess around with each other, poke at the cameras a lot, especially in the beginning, and they would do it together... it was very distracting...Compared with one, I could pretty much control one, I really can't control a group Talking was the most commonly described way that passengers distracted the driver. Talking distracted drivers in several ways, such as drawing their attention to the topic being discussed or requiring drivers to concentrate on the conversation. As described by this male driver, some noted that turning one's head to look at a passenger who was speaking required the driver to take their eyes off the road. I'd make eye contact in the rear view mirror... you do it too long, and you look back on the road, and you're like "oh, shoot, got a little close or fast" or something... 
Loud noise and loud music were also frequently mentioned, particularly in the context of groups of passengers. Several drivers mentioned physically active behavior, such as "horseplay" or "dance parties". The following quotes from two participants are examples of how this was described. Just be obnoxious guy stuff... be loud, and have the window rolled down, and have to have the radio on... My friends are very partial to dance parties in my car so we had a lot of thosethose are pretty distracting...I wasn't as focused... Additionally, one infrequent but notably dangerous distraction was when a passenger directly interfered with the vehicle controls -some were minor such as turning on or off the windshield wipers or hazard lights, but two drivers described a passenger who grabbed the steering wheel. Drivers were not asked how they responded to these distractions, but several offered their strategies for managing their driving with peer passengers. One strategy was to put more effort into focusing on their driving when they felt themselves being distracted. As evidenced in the following quotes, for some this was a response to being a new driver, or a concern regarding consequences of a lapse of attention. I [was]... just so afraid [as] a new driver on the road... I didn't want to screw up so I just focused and try not to let anything change.... I would just focus more so on the road because I would have to or else, something bad could happen... Some responded to passenger behavior by managing the in-vehicle environment, including requesting passengers to be quiet and turning down the music. As one driver said it "felt like [being] a parent," referring to his efforts to get a group of passengers to settle down. Other drivers noted that they simply ignored their passengers and concentrated on their driving. Most indicated that as their driving skill and confidence improved, their ability to cope with the distraction also improved. 
Here is an example of how they handled distractions in response to traffic. If it got [to be] tight traffic, I'd be "shut up for a second, let me drive," so it affected it a little bit, but I think I was pretty good at handling the distraction. Interestingly, several drivers regarded having a passenger as a responsibility. For some, this was framed as them "worry[ing] more about others' safety than [their] own." For example, one male driver explained he drove in a riskier manner when alone because "my life is my life, but if somebody else is in the car it's their life that I'm taking into account as well". As indicated in the quote below, having more passengers in the car exacerbated this sentiment. [When] I have more people in the car... sometimes hits me... oh my gosh, I gotta, pay attention, I can't get involved in what they're talking about, I need to pay attention. I don't want to be responsible for them. --- Social Influence In addition to the overt distractions that peer passengers posed, when asked about how passengers affected their driving, the majority of drivers described at least one form of social influence from their peer passengers. This included direct comments about driving and indirect pressure in the form of social norms and unspoken expectations from their passengers. For example, one female described how a male passenger encouraged her to drive over a median, I was with a friend who kind of encouraged it, which was probably the more dangerous thing,... "oh, just go over the median," 'cause, we wanted to go left, but we couldn't. In contrast, a male driver described how his girlfriend discouraged his fast cornering: "my girlfriend did say I took corners too fast, so I haven't taken corners so fast." One driver described the negative influence of passengers on his concentration, and his driving behavior, suggesting distraction and social influence can co-occur: Participant: They act like idiots. It's terrible. They affect my concentration a whole lot.
Interviewer: Would you say it would be a negative or positive way? Participant: Most certainly negative...., I can admit this: they get me acting more stupid too. Participants also described several forms of indirect pressure from the passengers in the vehicle. Some male drivers described driving more safely with female passengers while being less concerned with the safety of their male passengers, as one driver explained "guys don't really care about bad driving..., or at least my friends, the passengers that I had." Another form of indirect pressure stemmed from drivers' perceptions of their passengers' unspoken expectations or from knowledge of their passenger's driving behavior. Drivers were not asked how their driving behavior was affected, but participants mentioned the specific behaviors that they would change in the presence of passengers, "I drove faster, mainly 'cause they always drove faster". In the case of having a more experienced driver as a passenger, one driver stated: I would probably be a tiny bit more daring, more likely to sit at a stop sign and wait for a very large gap if I'm by myself versus if I have a passenger in the car... And more daring because they know what they're doing and you don't as much. When drivers had knowledge of their passenger's driving behavior, this knowledge influenced their driving when those passengers were in the vehicle. For example: I had a friend who is kind of reckless the first year of his driving and it made me a little reckless. Well depending on the person that was in there, it made me already know how I should drive with them in there, that's how I would drive... Either fast or slow... Several participants described the diminishing effects of different forms of social influence over time. One male participant described a crash as a formative experience, leading to a lower susceptibility to influence from peer passengers: When they were in the car, I would sometimes act a little more dumb. [Participant laughs]...
I would go for the impression thing until I wrecked, [...], and then I'd slow down. Before that I'd have the radio louder, I'd take turns too-too fast, drive a little bit faster. Especially on that trip where I wrecked, I was driving a lot faster than I should have. A common theme that several drivers described was that they considered their driving as a performance. At times, this was in direct response to passengers' comments about driving, but at other times it was due to the desire to appear skilled and competent in the presence of peers. For both male and female drivers, these perceptions were heightened in the presence of male passengers: "if I was in-with another male in the car, -I'd probably pay more attention to what I was doing and...-how I was performing". Here are two more examples. Now I'm not as distracted when males are in my car, but, at first I would say,... [it was] intimidating to have a male [passenger], even though they'd never driven before,.. 'cause you know males know how to drive. I would try and drive a little bit better,..., we were all getting our license and it was kind of a competition for who was [a] more skilled driver and who was better. We didn't say any of that,...so, I don't think that I overly sped or anything like that,... I just tried to be more skilled in what I was doing. Like when I was merging on to the highway, I'd just make it seem like it was really normal and I had done it a lot 'cause I wanted to look more experienced than them. When carrying peer passengers, some teens reported that their driving was influenced by the need to appear laidback. This may have resulted in their being less vigilant while driving, or taking risks such as rolling through a stop sign, to seem more in control. For example: I was more carefree with female passengers, like with my friends... I felt I was the cool one in the bunch who could drive already and take everyone everywhere... 
I was the same, kind of carefree and maybe didn't pay attention as much as I did when I was by myself. A related point raised by some drivers was the desire to appear attractive to passengers of the opposite sex while driving, as exemplified by one female driver who stated that she tried to "look more pretty... more graceful" when driving with male passengers, at whom she stared while driving. --- Discussion The purpose of this study was to examine teen drivers' perceptions of their peer passengers. Using interviews from a sample of newly licensed teenage drivers, and guided by a grounded theory approach to data analysis, we found that teenage drivers were aware of the potential sources of risk that their peer passengers pose. Some participants articulated descriptions of the specific mechanisms of influence and detrimental effects on driving performance, demonstrating an awareness of the risks that peer passengers can pose. In contrast, other teens did not perceive the presence of peer passengers as having an influence on their own driving behavior, whether or not they described peers as distracting. Brown and colleagues describe four elements involved in peer influence for teenagers: an event, activation of peer influence, a response, and generation of an outcome (Brown, Bakken et al. 2008). Several participants described the operation of these steps in relation to their peer passengers. Specifically, during events of driving, the presence of passengers would exert some influence, and participants would respond, often by driving in a riskier way to look skilled or attractive and meet their passengers' unspoken or observed expectations. Notably, the influence was not uniformly to drive in a riskier way, with some participants describing safer driving in response to passenger expectations. This suggests social influences can operate indirectly through norms, which can be transmitted through modeling and verbal and non-verbal actions (Ouimet, Pradhan et al.
2013). A recent experimental simulator study reported increased risky driving among young male drivers exposed to young male passengers, with considerable variability depending on the type of passenger present, e.g. risk-accepting or risk-averse (Simons-Morton, Bingham et al. 2014). The consistency of evidence regarding social influences on driving behavior from studies using differing methods strengthens the conclusion that social influences are operating on driving behavior in ways similar to well-known pathways found in other areas of adolescent risk behavior, such as alcohol use (Borsari and Carey 2001) and smoking (Simons-Morton and Farhat 2010). Teen drivers described distraction and social influence as two potentially reinforcing forms of influence from peer passengers. Our findings suggest that future experimental studies examine the potential interaction of these sources of crash risk. Another promising avenue of examination is teens' description of driving as a performance, and their desire to appear skilled, competent, cool, and attractive while driving in the presence of their passengers. This may lead them to engage in risky driving maneuvers to demonstrate mastery or skill that may also increase crash risk. Reframing social norms about skill and competence to be focused on minimizing crash risk may present a potential avenue for intervention. --- Limitations The participants in this study may represent a unique sample of teenage drivers. They were recruited to participate in a naturalistic driving study where driving behavior was recorded continuously for 18 months, completed periodic surveys about their driving behavior, and participated in two test track driving assessments. Participants may have had a heightened awareness of their own driving behavior, relative to other teens.
While participants were not provided with any feedback on their driving during the course of the study, volunteering for and participating in this research may have primed them to be more aware of safety concerns and crash risk. Participants' descriptions of their driving behavior may be influenced by the presence of laws that restrict behaviors. During the first 12 months of licensure, all study participants were subject to the single peer passenger restriction of the State of Virginia for newly licensed teens (Insurance Institute for Highway Safety 2012). While the interview questions did not ask participants to describe the timing of behaviors, some response bias may exist; that is, participants may have been less willing, during the interview, to report behaviors that were illegal. Furthermore, the interview protocol was intended to elicit responses for specific circumstances and behaviors, such as distraction caused by peers. At times, the interviewer probed participants and reinforced their statements that related to risk and safety. While these instances were rare, they may have affected the participants' responses. Despite these limitations, the findings of this study represent a unique and valuable source of insight into teen drivers' perceptions of their peer passengers, and the findings could be used to inform experimental study design, measurement development, and safety interventions. --- Author Manuscript, Ehsani et al.
Table 1. Structured Interview (selected questions)
During your first year of driving, how did male/female* passengers affect...
- Your concentration while driving?
- Your level of distraction while driving?
- The way you were driving, based on their comments?
- The way you were driving, not based on any comments, but based on their presence?
Can you think of any other situation when teen male/female* passengers may have altered your driving:
- Is there anything a male teen passenger did that affected your driving?
Thinking back about the first few months after you got your license, can you compare how your driving was affected when driving...
- Alone vs. with one male/female passenger?
- With one (1) teen vs. a group of teens?
* Questions about male and female passengers were asked separately. Transp Res Rec. Author manuscript; available in PMC 2016 June 24.
Background-The presence of peer passengers increases teenage drivers' fatal crash risk. Distraction and social influence are the two main factors that have been associated with this increased risk. Teen drivers' perceptions of their peer passengers' influence on these factors could inform our understanding of the conditions under which peer passengers increase crash risk or promote safer driving.
De-implementing low-value care is a major challenge within healthcare systems around the world. 1 The perpetuated use of healthcare services that provide little or no benefit to patients, or which may cause harm, represents wasteful consumption of healthcare resources. 2 Since the launch of the Choosing Wisely Campaign in 2012, there has been an exponential increase in research identifying hundreds of low-value practices across all areas of healthcare. [3][4][5] Although many low-value practices have been identified as candidates for de-implementation, their use persists because the process of changing engrained clinical behaviour is complex. While we have established theories, models, and frameworks to guide the process of implementing high-value care into practice, less is known about the process of de-implementing low-value care. Studies have begun to further unravel the complex interplay between processes and determinants (ie, barriers and facilitators) of de-implementation and implementation. Nevertheless, despite advancements in our understanding of de-implementation, low-value care remains a major burden within healthcare systems throughout the world. Prior to the coronavirus disease 2019 (COVID-19) pandemic, reducing low-value care was increasingly recognized as a priority for healthcare system improvement. Now, owing to the many negative health system impacts of COVID-19 (eg, delayed diagnoses and treatments), reducing low-value care should be an even greater priority. 6,7 Ensuring that healthcare providers are delivering high-value care will help mitigate the resource and financial constraints that will impact healthcare systems post-pandemic. 8 In Verkerk and colleagues' recent study "Key factors that promote low-value care: views of experts from the United States, Canada, and the Netherlands," 9 the authors aimed to explore the factors that promote ongoing use of practices identified as low-value.
This commentary will review the article by Verkerk et al, highlight key findings, and offer further consideration for how their findings may be interpreted and applied to future initiatives to reduce low-value care. Verkerk et al interviewed 18 experts from Canada, the United States, and the Netherlands. Pre-existing frameworks describing drivers of poor medical care and determinants of healthcare professional practice were used to guide interviews and elicit factors that promote low-value care. This enabled the authors to fill a gap within the literature and potentially identify social and system-level factors that are often overlooked, yet at a macro-level are potentially very influential. 10 Key factors promoting use of low-value care that emerged from the interviews included social factors (public and medical culture), system factors (payment structure, influence from industry, malpractice litigation), and knowledge factors (evidence, medical education). The identification and description of these factors are a meaningful addition to the body of literature describing determinants of low-value care 11,12 and offer potential strategies to reduce its overuse. Generalizability of many of the key factors promoting low-value care likely depends on context. For example, one of the social factors promoting low-value care identified in the study by Verkerk et al was public culture, and the tendency to believe that 'more is more.' This suggests that patients may value the receipt of tests and treatments because it makes them feel like something is being done to help them, and understanding this, clinicians may decide to provide tests or treatments when the clinical indication may be weak or absent. Patient education materials, such as those from the Choosing Wisely campaign, have been shown to increase patients' awareness of low-value care and encourage them to initiate conversations about the value of their care with their physicians.
13 The importance of patient perceptions likely varies across clinical contexts. Diagnostic imaging for low-risk low back pain is an example of a low-value practice where patients' expectations or preferences have been shown to significantly influence utilization. 14 Healthcare providers have reported that more patient education and additional time to explain their rationale to a patient would help them reduce low-value imaging for low back pain. 15 Targeting patient expectations through implementation of an intervention within the patient-clinician interaction in primary care may provide an opportunity for the patient to express their preferences and engage in a discussion about the merits of imaging. A national intervention in Australia applied this approach to the patient-clinician interaction regarding imaging for low back pain. In their study, patient-specific educational tools and clinician-targeted decision-support tools were implemented to assist with decision making regarding imaging for low back pain. 16 They found that this intervention reduced primary care ordering of imaging by nearly 11% over the study period. Similar results have been achieved with interventions targeting the patient-clinician interaction in other primary care contexts, such as with antibiotic prescribing for upper respiratory tract infections and diagnostic imaging for low-risk head injuries. 17 In contrast to primary care, where decisions regarding use of low-value tests or treatments are commonly made during the patient-clinician interaction, acute care, and in particular the intensive care unit (ICU), is a care environment where some of the decisions regarding care required to save life or limb may be less influenced by public culture. For example, several studies suggest that for most patients admitted to adult ICUs, a hemoglobin target of 7 g/dL is sufficient, and transfusion to higher hemoglobin levels that more closely resemble normal values is associated with worse outcomes.
18 Red blood cell transfusion when the hemoglobin is 7 g/dL or higher is, for most patients, low-value care. Owing to their severe illness, ICU patients are not aware that their hemoglobin level may be lower than normal, whereas the clinicians are, and thus best positioned to make decisions regarding the merits of transfusion. In this case, an intervention that targeted patients or their family members would be less impactful than one focusing more heavily on clinicians, their medical knowledge, and the strong medical culture that more care and normalization of physiology is better. The 'more care is better' culture and the difficulty clinicians have in adapting established medical practice patterns in response to new evidence are major barriers to reducing use of low-value care that likely transcend all areas of medicine. It is hard for clinicians to unlearn patterns of practice that have emerged from years of medical training and experience. [19][20][21] A recent qualitative evidence synthesis indicates that clinician knowledge is a commonly reported determinant of low-value care, 12 yet it is less clear how this should be addressed. Clinicians engage with multiple sources of evidence (eg, journal articles, clinical guidelines) within a medical culture with established norms while also being subject to their own cognitive biases. All of these elements may contribute to how they interact with and apply their medical knowledge surrounding low-value care. 22 Clinicians are also faced with patients whose complexity frequently exceeds that of those examined in clinical trials, and therefore have difficulty applying evidence to the clinical contexts they encounter. Additional work is required to further explore with clinicians their own experiences interacting with new, potentially contradictory evidence and the decision to de-implement care that may no longer be considered high value.
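The restrictive-threshold evidence described above lends itself to a simple retrospective audit rule for transfusion orders. A minimal sketch, assuming a hypothetical list of pre-transfusion hemoglobin values (the function name, threshold default, and data are illustrative, not from any cited study):

```python
def flag_low_value_transfusion(pre_hb_g_dl, threshold=7.0):
    """Flag a red-cell transfusion as potentially low-value when the
    pre-transfusion hemoglobin is at or above the restrictive threshold
    (7 g/dL for most adult ICU patients, per the evidence summarized in
    the text). Clinical exceptions (eg, active bleeding, acute coronary
    syndrome) would need separate handling in a real audit."""
    return pre_hb_g_dl >= threshold

# Hypothetical pre-transfusion hemoglobin values (g/dL) for four orders:
orders = [6.4, 7.2, 8.1, 6.9]
flags = [flag_low_value_transfusion(hb) for hb in orders]
print(flags)
```

Such a rule is only a screening step; flagged orders would still need clinician review, which is consistent with the text's point that clinicians, not patients, are best positioned to judge the merits of transfusion.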
In addition to social and knowledge factors, the system in which care is delivered has been shown to influence the delivery of low-value care. For example, a study examining vitamin D screening in the United States and Canada found modest reductions in low-value screening following the release of Choosing Wisely recommendations. 23 However, when a new payment policy eliminating reimbursement for the screening was introduced in Ontario, Canada, the rate of screening was reduced by 93%. 23 Here, an intervention addressing system-level factors was needed in addition to the Choosing Wisely campaign, which targets knowledge and social factors. Differences in the structure of healthcare systems suggest that context-specific interventions may need to be considered. A systematic review of interventions to reduce low-value care identified the importance of system-level strategies that aimed to reduce demand for low-value care (eg, patient cost-sharing that incentivizes high-value care over low-value care) and supply of low-value care (eg, value-based pay-for-performance). 24 Research suggests that effective interventions that reduce low-value care are more commonly multi-component interventions that address both system-level factors (eg, payment structure, policy changes) and social and knowledge factors. 24 The factors identified by Verkerk et al complement those cited within the current low-value care and de-implementation literature. Two recent evidence syntheses of determinants of low-value care suggest patient and provider characteristics (eg, knowledge, attitudes, behaviours) to be the most cited determinants of low-value care. 11,12 Other factors outside the patient-provider dynamic, like the system-level factors identified by Verkerk et al, appear to be less commonly reported in the literature, but as demonstrated by Verkerk's findings, this does not diminish their impact on low-value care.
Verkerk's study is an important reminder that no single determinant is responsible for the challenges associated with reducing low-value care; social, knowledge, and system-level factors drive low-value care in an interconnected manner. When designing de-implementation interventions, these social, knowledge, and system factors should be evaluated to understand which of them predominantly drives use of the specific low-value practice and what might work best to reduce its use. As highlighted in this commentary, these factors will likely differ depending on the target low-value practice, care setting, and health system. In conclusion, the study by Verkerk et al highlights key social, knowledge, and system factors that promote low-value care and underscores the complexity of the challenge of de-implementation. Understanding how these key factors vary with contextual factors such as the specific low-value practice and clinical setting is an important consideration in the design of de-implementation interventions. It is essential that we engage all relevant stakeholders, including clinicians and patients, as we continue to build the body of evidence describing determinants of low-value care, pursue initiatives to reduce low-value care, and advance the science of de-implementation. --- Ethical issues Not applicable. --- Competing interests Authors declare that they have no competing interests. --- Authors' contributions Conception and design: EES, JPL, HTS, and DJN. Drafting of the manuscript: EES, JPL, HTS, and DJN. Critical revision of the manuscript for important intellectual content: EES, JPL, HTS, and DJN. Supervision: EES, JPL, HTS, and DJN. --- Authors' affiliations 1 Faculty of Medicine, University of Ottawa, Ottawa, ON, Canada. 2 School of Health Administration, Faculty of Health, Dalhousie University, Halifax, NS, Canada. 3 Department of Critical Care Medicine, University of Calgary and Alberta Health Services, Calgary, AB, Canada.
4 Department of Community Health Sciences, University of Calgary, Calgary, AB, Canada. 5 O'Brien Institute for Public Health, University of Calgary, Calgary, AB, Canada.
Low-value care contributes to poor quality of care and wasteful spending in healthcare systems. In Verkerk and colleagues' recent qualitative study, interviews with low-value care experts from Canada, the United States, and the Netherlands identified a broad range of nationally relevant social, system, and knowledge factors that promote ongoing use of low-value care. These factors highlight the complexity of the problem that is persistent use of low-value care and how it is heavily influenced by public and medical culture as well as healthcare system features. This commentary discusses how these findings integrate within current low-value care and de-implementation literature and uses specific low-value care examples to highlight the importance of considering context, culture, and clinical setting when considering how to apply these factors to future de-implementation initiatives.
Children 2021, 8, 690

Introduction The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic has brought about unprecedented challenges worldwide. In an attempt to deter the spread of the virus and the disease associated with it (COVID-19), many countries implemented social and physical distancing restrictions that led to the closure of work places, schools, and recreational facilities [1]. The impact of COVID-19 restrictions on physical activity (PA) levels in adults has been mixed, with research studies reporting decreased exercise engagement [2][3][4], the maintenance of PA levels [5], or even increased exercise practice [5,6]. However, among children, these closures have had a primarily negative impact on 24-h activity behaviors (24-AB) [1], including decreased PA, increased sedentary behavior (SB), and poor sleep habits [7][8][9]. These behaviors are independently and collectively associated with poor physical and mental health outcomes [10]. The purpose of this commentary is to extend the discourse on the importance of 24-AB by focusing on youth wheelchair users (YWU), where YWU can be defined as youth aged 5-17 years who are disabled as a result of musculoskeletal, neurological, cognitive, or other types of dysfunction and use a wheelchair as their main source of mobility [11]. Specifically, we discuss the importance of chronic disease prevention, provide a brief overview of 24-AB, and outline some of the lessons that can be learned from the COVID-19 pandemic. We have focused on YWU due to the high likelihood of their 24-AB being impacted by the COVID-19 social restrictions, the high risk of developing severe illness and complications following a COVID-19 infection due to underlying health conditions and co-morbidities [12,13], and the potential for improved mental and physical health outcomes with decreased SB, increased PA, and/or improved sleep habits.
--- Impact of COVID-19 Restrictions on Physical and Mental Health Those with pre-existing chronic diseases, including cardiometabolic diseases such as obesity, type II diabetes, and hypertension, are at heightened risk for severe complications and death following COVID-19 infection [14]. The YWU population is largely characterized by pre-existing conditions [15], not least because some type of pre-existing condition may have preceded and ultimately necessitated wheelchair use. These pre-existing conditions place YWU at greater risk for COVID-19-related complications [16,17]. In addition, wheelchair use creates a situation in which the child is more susceptible to negative 24-AB, all of which are linked to chronic diseases that can further exacerbate susceptibility to COVID-19-related complications [18][19][20]. As an example, consider a YWU with a spinal cord injury, whose risk for COVID-19-related complications may be increased both as a result of autonomic dysfunction associated with the spinal cord lesion and as a result of the negative impacts on cardiometabolic health related to physical inactivity and SB. The increased risk of poor health outcomes among YWU, either directly or indirectly related to COVID-19, highlights the need for focused and effective preventive health measures in this population [18]. Physicians, allied-health practitioners, mental health professionals, as well as parents and teachers should be aware of the increased susceptibility to COVID-19-related complications faced by YWU [9], and work collaboratively and creatively to ameliorate health risks through the promotion of positive lifestyle behaviors [21]. Fortunately, 24-AB are modifiable, and can and should be targeted in YWU as a means to maintain health and reduce the risk of complications from COVID-19 and/or future variants/pandemics [22]. --- 24-h Movement Behaviors With respect to 24-AB, the most established guidelines are available for PA, followed by sleep.
However, there is sufficient evidence to strongly associate each 24-AB with chronic disease outcomes. For example, meeting PA guidelines is extremely important for improving physical and mental health as well as preventing many chronic diseases such as hypertension or diabetes. For youth, at least 60 min of moderate-to-vigorous-intensity physical activity is recommended per day [23,24]. Sufficient sleep duration and quality are also critical in supporting mental health, immune function, and attention span [25,26]. Therefore, 9-11 h of uninterrupted sleep is recommended for youth per night [23]. Lastly, SB is an independent risk factor for cardiometabolic diseases in adults [27] and likely youth [10]. Sedentary behavior has been defined as any waking behavior in a seated or reclining posture (≤1.5 METs) [28]. However, due to the lack of available evidence regarding SB and health outcomes in wheelchair users (or individuals with physical disabilities), the most recent World Health Organization (WHO) Guidelines concluded that there is no reason to believe that recommendations to reduce SB would be any different for wheelchair users [29]. The WHO recommends reducing SB, and others recommend ≤2 h of screen time per day specifically for youth [23,30,31]. Activity behaviors including PA, sleep, and SB interact with one another across a 24-h day. Therefore, time spent engaging in one activity behavior should not be considered independently from the other behaviors. Time spent engaging in one activity behavior influences the physiological processes involved in the other behaviors. For example, increasing PA (e.g., using an arm ergometer) may lead to a reduction in SB, or reducing TV time may result in a child going to bed earlier or improving their sleep quality [32,33].
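The guideline thresholds cited above (at least 60 min of PA per day, 9-11 h of sleep, and no more than 2 h of screen time) can be expressed as a simple daily check. The function below is a hypothetical illustration of how the three recommendations combine, not a validated assessment tool:

```python
def meets_24h_guidelines(pa_min: float, sleep_h: float, screen_h: float) -> dict:
    """Check one day's behaviors against the youth guideline thresholds
    cited in the text (refs [23,24,30,31]). Illustrative only."""
    return {
        "physical_activity": pa_min >= 60,   # >= 60 min moderate-to-vigorous PA
        "sleep": 9 <= sleep_h <= 11,         # 9-11 h uninterrupted sleep
        "screen_time": screen_h <= 2,        # <= 2 h recreational screen time
    }

# A day with 75 min PA, 10 h sleep, and 1.5 h screen time meets all three.
print(meets_24h_guidelines(75, 10, 1.5))
```

Because the behaviors interact across the 24-h day, reporting the three checks together, rather than as a single pass/fail, mirrors the commentary's point that no behavior should be considered in isolation.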
It is extremely important for parents or guardians to establish a routine that promotes positive 24-AB in order to achieve the recommended guidelines for increasing PA, reducing SB, and promoting good sleep duration and quality. --- Challenges Moving Forward and Lessons Learned Infection with the COVID-19 virus and the imposed social restrictions will likely have lasting health impacts on YWU [14]. At present many of the long-term health impacts cannot be predicted. Additionally, it is unclear what types of specialized healthcare these youth will require (e.g., respiratory and cardiovascular), or whether there are enough properly trained medical experts to provide the necessary acute and chronic specialized medical care. While we cannot control all of the negative long-term implications of COVID-19, there is reason to believe that positive 24-AB can be beneficial to health outcomes [33]. Additionally, we are now in a position to reflect on events surrounding the COVID-19 pandemic. Specifically, in the remainder of this section we will use a socioecological model (SEM) to provide perspective on which lessons we can learn from and make use of moving forward (see Figure 1). The SEM posits that the ability to motivate or educate an individual to change their behavior is likely to be restricted if their socio-cultural and physical environments do not enable and support the behavior [33]. Specifically, the SEM allows us to contextualize the multiple levels of influence on behavior, including intra-individual, inter-individual, physical-environment, and policy levels. The policy level is beyond the scope of this short commentary; the remainder of this section will focus on the intra-individual, inter-individual, and physical-environment levels.
The Intra-Individual Level includes factors such as self-efficacy and activity enjoyment. The COVID-19 pandemic has led many of us to become more self-sufficient, and to realize that we can do more with less, including engaging in PA and breaking up SB within our homes [34]. This heightened self-reliance can be channeled to raise self-efficacy towards positive 24-AB [28]. Simple techniques include the use of goal setting, self-monitoring, and self-management [35]. This could include PA tracking via smartphone apps, setting and monitoring fixed bedtimes and waketimes, and getting timed reminders to break up sedentary behaviors [30]. Simple yet enjoyable activities that can be engaged in within the home include breaking up sedentary behavior with light PA (e.g., playing with a pet), or participating in modified yoga available via the internet. Of relevance to the Inter-Individual Level of the SEM, YWU can participate in activities while engaging with others. To combat isolation during the COVID-19 pandemic, many people have learned to interact using various virtual platforms. The use of such platforms can continue post-COVID to, for example, challenge family or friends to SB interruption challenges or to participate in PA classes [36]. For example, individuals within support groups could challenge one another to engage musculature for at least one minute every hour or remind one another to break up a sedentary bout with resistance band exercises.
Additionally, positive 24-AB habits could be a family affair, including encouraging parents to restrict night-time access to screened devices (and harmful blue light), and to replace the screen time with story time. Lastly, the Physical-Environment Level can be used to contextualize barriers to engaging in positive 24-AB. While COVID-19-related social restrictions have been viewed negatively with respect to our health and well-being, many individuals have adapted their home environments to improve their quality of life [37]. While physical therapy, gyms, leisure centers, and other facilities are beginning to operate on a normal schedule, the adaptations made during the pandemic need not be reversed. As opposed to physical infrastructure around the home, including paths, greenways, and public transportation, barriers within the home are relatively easy to reduce [38]. Modifications could include installing grab-bars to provide opportunities to break up SB, or more simply placing resistance bands and other equipment around the home to make it easier to replace SB with PA. Additionally, to improve sleep-wake cycles the home environment could be modified to ensure children are positioned throughout the day to enhance exposure to sunlight, and timers can be set to ensure children go outside at regular intervals. --- Conclusions Challenges faced by YWU include the greater risk of developing severe illness and complications following a COVID-19 infection, and the inability to fully predict the long-term health impacts of the pandemic. However, we can take a moment to reflect and take away some important lessons gleaned during the pandemic era. For example, among the general able-bodied population we know that positive 24-AB improve chronic disease outcomes, and in doing so decrease the risk of COVID-19 infection complications. We have no reason to believe the same is not true for YWU.
Using the SEM to provide context, we can take something positive away from this blight on our history by reflecting on the adaptations we made to improve our quality of life during the pandemic, and use them to model positive and long-term 24-AB. --- Data Availability Statement: Not applicable. --- Conflicts of Interest: The authors declare no conflict of interest.
Preventative measures taken worldwide to decrease the transmission of COVID-19 have had a tremendous impact on youth. Following social restrictions, youth with and without physical disabilities are engaging in less physical activity, more sedentary behavior, and poor sleep habits. Specifically, youth wheelchair users (YWU) are likely disproportionately affected by COVID-19 and have a higher risk of complications following infection due to underlying comorbidities. While we cannot control all of the negative long-term implications of COVID-19 for YWU, participation in positive 24-h activity behaviors can decrease chronic disease risk and the likelihood of long-term complications resulting from infection. The purpose of this commentary is to extend the discourse on the importance of 24-h activity behaviors by focusing on YWU. Specifically, we discuss the importance of chronic disease prevention, provide a brief overview of 24-h activity behaviors, and outline some of the lessons that can be learned from the COVID-19 pandemic.
Introduction In Belgium and Western Europe, new HIV diagnoses have been declining for the last 10 years [1,2]. Men who have sex with men (MSM) are still at highest risk for HIV acquisition: In Belgium, more than half of new HIV infections were diagnosed in this group in 2017 [1]. To further reduce the high number of HIV infections among MSM, primary prevention needs to be strengthened. Pre-exposure prophylaxis (PrEP), the use of antiretroviral treatment as prevention, has been shown to be highly efficacious in reducing HIV infection risk, if used correctly [3,4]. Given this efficacy [4], PrEP-related research has increasingly focused on how to implement this novel biomedical prevention tool. In Europe, an increasing yet limited number of countries are now providing PrEP through national healthcare systems, including France, Norway, Belgium, Portugal, Luxembourg, Scotland, and Germany [5]. However, delivery and uptake may need to be upscaled, and the implementation periods might have been too short to affect the overall course of the HIV epidemic across Europe [6]. Developing effective strategies for ensuring optimal PrEP uptake by populations at risk of HIV acquisition is an important implementation challenge. Ideally, such strategies should be based on active engagement of populations at risk, while also exploring PrEP use by individuals who are at lesser risk. Use of PrEP beyond the margins of clinical eligibility criteria may be costly [7][8][9], and may result in unnecessary exposures to potential side-effects. Another concern expressed at the community level has been prevention optimism, i.e., the belief that it is safe to engage in condomless sex because other men are perceived to take PrEP [10]. On the individual level, increased engagement in condomless anal sex, considered as 'risk compensation', may lead to more sexually transmitted infections (STIs) among MSM [11,12].
Willingness to take PrEP has frequently been used as a measure of acceptability and as a predictor of its uptake [13]. The construct "willingness", as part of a broader acceptability assessment, has been investigated in other HIV-related health promotion areas, such as voluntary counseling and testing [14] and circumcision [15]. Examples outside the HIV field are prevention programs for cardiometabolic diseases [16] or mental health interventions [17]. Investigating this concept can contribute to a better understanding of how to disseminate theoretically promising public health interventions on a broader scale, translating efficacy into effectiveness. Willingness is believed to shape the pathway from behavioral intention to actual behavior, and it may partially predict actual behavior [18]. The perceived level of efficacy and barriers such as potential health consequences and social stigma are then typically presented as what may explain disparities between willingness and actual uptake [18]. Studies conducted in high-income countries showed PrEP acceptability rates among MSM between 40 and 60% [19]. Factors associated with willingness were younger age, high HIV risk behavior (i.e., condomless anal intercourse (CAI) with casual sex partners, many partners) [20][21][22][23], and being aware of one's own HIV risk, for instance in Australia [23], England [24], and Germany [25]. Willingness was associated with previous post-exposure prophylaxis (PEP) use in Australia and England [23,24]. A study conducted in the United States found that men at highest risk (i.e., men of color, lower socio-economic status, and high HIV risk behavior) were most willing, but least likely to have access to PrEP [18]. MSM who reported being unwilling to take PrEP, on the other hand, expressed concerns about side-effects, non-efficacy, lack of information, medical mistrust, and costs [26][27][28].
PrEP guidelines have issued eligibility criteria to identify individuals who qualify for PrEP based on known HIV risk factors [29,30], to ensure that PrEP is prescribed in a targeted way [31]. In Belgium, as in many Western countries, PrEP eligibility criteria for prescription and reimbursement issued by the Belgian Federal Office of Health include MSM with risky sexual behavior, people who inject drugs and share needles, sex workers, other people who may be exposed to greater HIV risk, and partners of HIV-positive people whose viral load is detectable (see Box 1) [32]. A recent Belgian PrEP demonstration project showed that MSM at highest risk for HIV acquisition could be reached using similar screening criteria [33]. By November 2018, nine months after implementation of the Belgian prescription and reimbursement policy, 1352 PrEP users, predominantly MSM, were reported by the specialized HIV treatment centers qualified for PrEP delivery [34,35]. Willingness to take PrEP in the future has so far not been assessed against formal eligibility criteria. A better understanding of this relationship and its associated factors can inform tailored PrEP promotion and support strategies to optimize PrEP uptake. The aim of this study was to explore hypothetical willingness to take PrEP among MSM, and to assess it against the formal PrEP eligibility criteria. More specifically, we aimed to assess differences in terms of socio-demographic, knowledge-related, attitudinal, and behavioral factors between MSM who are eligible and willing to use PrEP and those who display incongruences between their eligibility and willingness. Box 1. Pre-exposure prophylaxis (PrEP) eligibility criteria in Belgium * (* Meeting only one criterion qualifies as being eligible. Source: Rijksinstituut voor Ziekte [32]).
--- Criteria for Men Who Have Sex with Men (MSM) that Permit Reimbursement of PrEP: (1) Condomless anal intercourse (CAI) with at least two different partners in the last six months. (2) Diagnosed with multiple sexually transmitted diseases in the last year. (3) Taken multiple PEP treatments in the last 12 months. (4) Used psychoactive substances while involved in sexual activities. General PrEP eligibility criteria independent of sexuality: (1) People who inject drugs. (2) Sex workers. (3) Individuals who are being exposed to unprotected sex and a high risk of HIV. (4) Partners of HIV-positive patients who have a detectable viral load. --- Methods --- Study Design A cross-sectional study was conducted among a convenience sample of Belgian MSM, using an online questionnaire. --- Study Population and Recruitment The online questionnaire was promoted via social and sexual networking applications (e.g., Grindr or Hornet). Additionally, it was disseminated via social media of MSM community-based organizations in Belgium. It was online from 21st November 2016 to 27th February 2017, i.e., before PrEP was available for prescription and reimbursement. Inclusion criteria in this study were: MSM or transgender; aged 16 years and above; self-reporting to be HIV negative or of unknown serostatus; and living in Belgium or having Belgian citizenship. --- Questionnaire and Variables Measured We developed a questionnaire via SoSci [36]. The questionnaire was intentionally kept short and included skip logic and filter options for non-applicable questions to limit the time needed to complete it. Questions about socio-demographics, sexual behavior, HIV risk, and protective behaviors were similar to those used in other PrEP research, i.e., the Belgian PrEP demonstration project "Be-PrEP-ared" [33,37]. To inquire about PrEP awareness, knowledge, and acceptability, we adapted questions from similar research among healthcare providers [38,39].
It was available in Dutch, French, and English to reduce potential language barriers, and was piloted for feasibility and user-friendliness within the research team. We measured willingness to use PrEP using the following statement: "If PrEP was available in Belgium, what is the probability that you would use PrEP?"; answering options were given on a five-point Likert scale ranging from 'certainly not', 'rather not', 'no opinion', 'rather yes', to 'certainly yes'. Answers 'rather yes' and 'certainly yes' indicated being willing to use PrEP. Any other answer denoted the absence of such willingness. Eligibility criteria were measured with questions assessing the relevant sexual and preventive behaviors as defined by the Belgian criteria [32]. To calculate eligibility, we focused on criteria specific for MSM (see Box 1). The questionnaire included questions on preventive and sexual behavior, asking about HIV test recency, PEP and PrEP use, use of psychoactive drugs during sexual activity, number of anal sexual partners (with or without condom) in the last six months, and anticipated CAI in the next three months. Sociodemographic items collected information on age, sexuality, nationality, place of residence, education, and relationship status. Five items measured participants' attitudes towards PrEP through five-point Likert scales ranging from -2 'totally disagree' to +2 'totally agree'. For the current analysis, the scales were dichotomized, where +2 and +1 denoted 'agree', whereas -2, -1, and 0 denoted an absence of agreement with the respective attitudinal statements. Cronbach's alpha for these five items was 0.68, hence we did not treat these items as one single scale. PrEP awareness was examined through a dichotomous 'yes' or 'no' question, asking whether participants had ever heard of PrEP. Participants also had to self-estimate their knowledge about PrEP on a four-point scale from 'very bad' to 'very good'.
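The dichotomizations above and the any-one-criterion eligibility rule from Box 1 can be sketched in code. This is a minimal illustration under stated assumptions, not the study's actual SPSS coding: the field names are hypothetical, and 'multiple' STI diagnoses and PEP treatments are assumed to mean two or more.

```python
# Answers counting as willingness on the five-point willingness item.
WILLING_ANSWERS = {"rather yes", "certainly yes"}

def is_willing(answer: str) -> bool:
    # 'rather yes'/'certainly yes' indicate willingness; any other
    # answer (including 'no opinion') denotes its absence.
    return answer in WILLING_ANSWERS

def agrees(score: int) -> bool:
    # Attitude items on a -2..+2 scale: +1 and +2 count as agreement,
    # 0 and below as absence of agreement.
    return score >= 1

def is_prep_eligible_msm(r: dict) -> bool:
    # Belgian MSM-specific criteria (Box 1): meeting any one suffices.
    # Field names and the >= 2 thresholds are illustrative assumptions.
    return any([
        r.get("cai_partners_6m", 0) >= 2,    # CAI with >= 2 partners, last 6 months
        r.get("sti_diagnoses_12m", 0) >= 2,  # multiple STIs, last year
        r.get("pep_courses_12m", 0) >= 2,    # multiple PEP treatments, last 12 months
        r.get("chemsex", False),             # psychoactive substances during sex
    ])

print(is_willing("no opinion"))                      # False
print(is_prep_eligible_msm({"cai_partners_6m": 3}))  # True
```

Collapsing the middle category ('no opinion', score 0) into the negative side, as done for all three scales here, is a conservative choice: only clearly positive responses count as willingness or agreement.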
For the current analysis, the self-ratings 'very bad' and 'rather bad' were merged into the category 'little knowledge', and 'rather good' and 'very good' into 'good knowledge'. Self-perceived risk of acquiring HIV was also measured through a five-point Likert scale ranging from 'very little risk' to 'very high risk'. Again, the middle category was added to the category implying absence of perceived risk. --- Statistical Analysis In this analysis, only completed questionnaires of participants matching the inclusion criteria for MSM were included. We analyzed cleaned data using IBM SPSS Versions 22.0 and 25.0 (IBM, Armonk, NY, USA). After forming four groups of participants according to their willingness and eligibility (group one: Eligible and unwilling to take PrEP; group two: Eligible and willing to take PrEP; group three: Ineligible and willing to take PrEP; and group four: Ineligible and unwilling to take PrEP), factors associated with each of the four groups were examined. We used a Chi-square test to examine the relationship between eligibility and willingness, and Chi-square or Fisher's Exact Tests to determine the relationships between the four groups and potentially associated factors (preventive and sexual behavior, PrEP knowledge and attitudes). Statistical significance was set at p < 0.05. --- Ethics We obtained ethical approval for the study through the institutional review board of the Institute for Tropical Medicine Antwerp [1140/16]. Before filling in the questionnaire, participants were informed about the study, the procedures, and the voluntary nature of study participation. By clicking through, participants consented to participate. --- Results --- Description of Study Sample We received 1444 completed questionnaires (Figure 1). Participants' socio-demographic background characteristics are displayed in Table 1. Participants' median age was 36.5 years, with a minimum of 16 years and a maximum of 77 years.
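The four-group classification and the eligibility-by-willingness association test can be illustrated as follows. The cell counts below are made-up numbers for demonstration, not the study's data, and the closed-form 2x2 Pearson chi-square formula stands in for the SPSS procedure the authors used:

```python
def group(eligible: bool, willing: bool) -> int:
    """Assign the four analysis groups:
    1: eligible & unwilling, 2: eligible & willing,
    3: ineligible & willing, 4: ineligible & unwilling."""
    if eligible:
        return 2 if willing else 1
    return 3 if willing else 4

def chi2_2x2(a: int, b: int, c: int, d: int) -> float:
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]
    (rows: eligible/ineligible, columns: willing/unwilling)."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

print(group(True, False))        # 1
# A perfectly balanced table shows no association: statistic is 0.
print(chi2_2x2(50, 50, 50, 50))  # 0.0
```

The resulting statistic would be compared against the chi-square distribution with one degree of freedom; for the small expected counts that trigger Fisher's Exact Test in the paper, this closed form would not be appropriate.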
Almost all participants were male, except for four female-to-male transgender participants and one person whose gender was missing. Participants were predominantly Belgian (81.2%), living in Belgium (98.1%), and highly educated (79.3%). Almost half of the participants lived in metropolitan areas, i.e., 29.8% in the Brussels capital region and 14.4% in the Antwerp region. In total, 44.3% of the participants were eligible for PrEP (see Table 2). The criteria most often met were reporting CAI with at least two different partners (33.5%) and having used psychoactive substances while engaging in sexual activities (25.3%). Most participants (69.5%) were willing to use PrEP in the future: 84.0% of the eligible ones, and 58.0% of those who were not eligible. These results will be discussed in more detail below when looking at the (in)congruence between the sub-groups. --- Sexual and Preventive Behavior Most participants were sexually attracted to men (99.4%); 76 participants (5.3%) were also attracted to women (not shown in table). In the last 12 months, the median number of men with whom they had sex was seven, with whom they had anal intercourse five, and with whom they had CAI one (not shown in table). About 39.8% reported that they had not engaged in any CAI with sexual partners during the last year. Sex under the influence of psychoactive drugs in the last six months was reported by 25.3%. Almost sixty percent reported having had their latest HIV test in the previous six months. PEP had been used by 8.2% in the last year, and 7.5% had used PrEP before. About one fifth (19.8%) perceived themselves at high risk of acquiring an HIV infection, and 44.2% were in a steady relationship at the time of the survey.
--- PrEP Awareness, Knowledge, and Attitudes A great majority of the participants (91.8%) reported having been aware of PrEP (see Table 5). About 55.2% of the participants rated their PrEP knowledge as good or very good. Participants' attitudes towards PrEP were generally positive: a vast majority perceived PrEP as a good extra prevention tool (84.9%) and agreed with the statement that "it's a good thing that HIV negative people can protect themselves with PrEP" (90.1%). Only 15.6% felt that PrEP was unnecessary due to better alternatives. About one third of participants (33.2%) expected that PrEP users would receive negative remarks from others. --- PrEP Eligibility and Willingness Participants who were eligible for PrEP were significantly more likely to be willing to take PrEP (p < 0.001). Among those who were eligible, 16.0% were unwilling or unsure about using PrEP in the future. Among participants who were not eligible, 58% were willing to take PrEP (Table 3). Overall, willingness was significantly associated with higher PrEP awareness (p < 0.001), better PrEP knowledge (p < 0.001), more risky sexual behavior (i.e., CAI) (p < 0.001), and the relationship status 'single' (p < 0.001) (results not shown in table). No significant differences were found between the four groups in terms of other socio-demographic background characteristics. --- Eligible Participants: Factors Associated with Their Willingness to Take PrEP Participants who were eligible for PrEP but not willing (or unsure) to use it were significantly more likely to be in a steady relationship, to not have tested for HIV in the last six months, and to have had fewer male partners for anal sex in the last 12 months when compared with eligible and willing participants: e.g., 8.8% of eligible unwilling participants reported having had CAI with more than five partners, compared with 28.9% among eligible willing participants.
Eligible unwilling participants were also less likely to perceive themselves at high risk for HIV (see Table 4). In terms of awareness-related, knowledge-related and attitudinal factors, the following differences were found (see Table 5): eligible but unwilling participants were less likely to be aware of PrEP and to consider their PrEP knowledge to be good, when compared with willing participants (p < 0.001). Eligible and unwilling participants were also significantly less likely to have a positive attitude towards PrEP (p < 0.001), although they did not significantly differ in their opinion towards the use of condoms to prevent other STIs while on PrEP. a: Self-estimated knowledge about PrEP: 'very good (+2)' and 'good (+1)' versus 'bad (-1)' and 'very bad (-2)'; b: 'totally agree (+2)' and 'agree (+1)' versus 'don't know or unsure (0)', 'disagree (-1)' and 'totally disagree (-2)'; c: p-value for Chi-square or Fisher's exact test for the association between 'willingness' and characteristic, within the 'eligible' or 'ineligible' groups. --- Ineligible Participants: Factors Associated with Their Willingness to Take PrEP Participants who were willing to use PrEP but ineligible to do so according to the Belgian criteria were more likely to be single, to have tested for HIV in the last six months, to perceive themselves at higher risk of getting an HIV infection, and to have had a higher number of male partners for anal sex, when compared with those ineligible and unwilling (p < 0.001 for these variables; see Table 4). A substantial proportion in both groups anticipated that they may have CAI in the next three months (34.4% versus 40%, respectively). Willing participants were less likely to find it important to still use condoms when being on PrEP compared with unwilling ineligible participants (p < 0.001).
Willing MSM were also more aware of PrEP (p = 0.017) and were more likely to indicate that their PrEP knowledge was (very) good, when compared with ineligible unwilling participants (p = 0.008; see Table 5). --- Discussion In this online survey among Belgian MSM, we aimed to explore the hypothetical willingness to take PrEP and to assess it against the formal PrEP eligibility criteria. About 44.3% of the participants were eligible for PrEP according to the Belgian eligibility criteria. More than two-thirds (69.5%) were willing to start PrEP once it became available in Belgium, and more than half of those were also eligible. We also found that a small proportion of those eligible were unwilling to take PrEP (16.0%), and that more than half of those ineligible at the time of the survey were willing to take PrEP in the future (58.0%). Among eligible participants, those unwilling to take PrEP reported relatively lower levels of CAI and were more often in a steady relationship, which may have contributed to their lower individual risk perception. They also had lower awareness and knowledge of PrEP, and reported less positive attitudes towards this prevention method than their willing counterparts. Among ineligible participants, a distinct group of MSM willing to take PrEP emerged, in spite of not formally qualifying for PrEP prescription: single men perceiving themselves at risk for HIV acquisition, with a relatively high number of male partners for anal sex. While their attitudes were more favorable towards PrEP, they were less likely to anticipate PrEP-related stigma than their unwilling counterparts and anticipated having CAI in the future. The study provides new information regarding PrEP uptake within a framework of formal eligibility criteria, which may be useful for other Western (European) countries facing similar situations. Our findings point to a high congruence between PrEP eligibility and hypothetical willingness to use PrEP.
Most eligible MSM in our study were also willing to take PrEP, which is in line with demonstration studies showing that those coming forward for PrEP are highly likely to be at risk for HIV [20,21,[23][24][25]39]. However, about 40% of participants showed incongruence between formal risk criteria and their hypothetical willingness. A recent Australian prospective cohort study showed that 69.8% of gay and bisexual men who met the eligibility criteria had not yet commenced PrEP [40]. This is higher than in our study, where only 16% of those eligible were unwilling. Conversely, more than half of ineligible participants were hypothetically willing to use PrEP in the future. The two incongruent groups are important for HIV prevention since they require different approaches in sexual health promotion. The first group, albeit small in numbers, comprises those who are eligible yet unwilling to take PrEP. This group may be concerning, given that eligibility criteria are based on factors known to be associated with high risk for HIV acquisition. The high proportion of participants in this group who had sex while using psychoactive drugs in the six months prior to the survey (52.9%), and the high proportion having had CAI with at least two male partners in the last 12 months (57.8%), demonstrate that they are indeed at risk for HIV. Our data suggest that they might be unwilling (despite being eligible) to take PrEP due to an inadequate risk perception and a less positive attitude towards PrEP. Only 10% of this group perceived themselves to be at high risk for HIV acquisition. This is in line with the lower proportion of participants in this group who had recently tested for HIV, when compared with willing, eligible participants. Overall, this may reflect misconceptions about the levels of risk required to advise PrEP use [40].
Eligible and unwilling MSM were also more likely to indicate that PrEP users would receive negative comments, potentially indicating an anticipated stigmatization of PrEP. Such anticipated social discrediting of PrEP may function as a barrier to accessing PrEP [41][42][43][44][45]. The results suggest that this group and their sex partners are at substantial risk for HIV acquisition, but are less likely to self-identify as being at risk and are less interested in measures such as PrEP. Future interventions should take into account that eligible, unwilling MSM are harder to reach, because they may be less likely to come forward themselves for HIV testing, counselling, and other sexual health promotion services, resulting in fewer opportunities for PrEP promotion [46][47][48]. Modifying HIV risk perceptions through educational interventions could be a promising strategy to promote PrEP among those who could benefit from it [49]. Also, when improving knowledge about PrEP, addressing potential negative associations is warranted. Factors related to attitudinal constructs are potentially modifiable, as are stigmatizing attitudes, and this should be considered in future interventions. The second incongruent group concerns MSM who were ineligible for PrEP but willing to take it. As the reported willingness is hypothetical, this finding does not necessarily mean that this group will come forward for PrEP despite being ineligible. However, these results do suggest that we have to be aware of potential PrEP use outside the narrow margins of clinical criteria. In Australia, it was found that consistent condom use had dropped on a community level to a similar extent that PrEP was taken up [50]. The group we identified as ineligible but willing to take PrEP may be most prone to contribute to such a community-level risk compensation effect.
However, condom use was already declining prior to the introduction of PrEP [51,52], with parallel increases in STIs [12]. PrEP is recommended for those at highest risk of HIV infection [53], but little is known about the extent to which MSM start PrEP with the intention of using condoms less or engaging in other 'high risk' sexual activities. In our sample, a majority (84.1%) was convinced that it is important to keep using condoms while taking PrEP, which is a promising finding. However, we suggest that further research is needed to explore to what extent PrEP remains perceived as an 'additional tool' or is becoming a 'condom substitute' within the MSM community, in order to develop information strategies accordingly. Ineligible and willing participants were more likely to perceive themselves at higher risk for HIV and to have tested for HIV in the last six months, when compared with those unwilling. Hence, it is not surprising that they were also more likely to be aware of PrEP and to self-rate their PrEP knowledge as good. The question remains whether this ineligible group is willing to take PrEP in the future should they be in need (i.e., when their HIV risk increases), or whether they would actually be willing to take PrEP despite being at low risk, to feel better protected against HIV. Given their recent testing, such contact with the health care system provides an opportunity for counseling and for helping them decide whether or not PrEP use would be appropriate. In contrast to our findings, it was recently observed that MSM in England self-evaluated their risk of acquiring HIV appropriately, which led the authors to recommend PrEP for everyone perceiving themselves at risk, resulting in broader eligibility criteria [54]. The high contextuality and fluctuation of sexual (risk) behavior over time [55] justify positioning PrEP within a positive sexual health promotion and wellness framing, potentially avoiding PrEP-related stigma [56].
In addition, such branding and roll-out would avoid the ethical dilemma that arises when someone coming forward for this prevention tool is denied PrEP because of currently insufficient 'risk behavior' [57]. The question should perhaps not be what comes first: reduced condom use, and hence being or becoming eligible, or the intention to use PrEP. It would be unwise to deny such an efficacious HIV prevention tool. Instead, we argue that the challenge lies in promoting condom use concomitantly within a combination prevention approach. Based on our data, we suggest that efforts need to be strengthened to promote PrEP as a prevention method for MSM who are eligible for it. Simultaneously, measures to endorse condoms for the prevention of STIs among PrEP users, as well as a general means for effective sexual health promotion among all MSM, as stipulated by existing guidelines [58,59], are equally important. --- Limitations Eligibility for PrEP should not be considered a static condition, because sexual behavior may quickly change over time in accordance with individual behavior, risky contexts, and situations, leading to 'seasons of risk' [60]. The concept of hypothetical willingness should be understood in this perspective. However, this study is the first to examine the proportion of MSM being eligible for PrEP and their willingness in an online sample in Belgium. We were able to obtain a relatively large nationwide convenience sample of 1444 participants. It may not be representative of MSM in Belgium, and we cannot make inferences to the entire MSM population. The use of sexual networking applications for recruiting participants may have led to a selection bias, i.e., towards participants with high levels of sexual activity, seeking sex partners on the internet, or with a particular interest in PrEP.
We kept the questionnaire intentionally short so that a maximum number of participants would complete it, although this did not allow us to obtain in-depth insights on topics such as their intentions regarding future sexual behavior and condom use. Given the likelihood of multicollinearity among several independent variables, the findings pertain to a set of covariates that are jointly related to the outcome of hypothetical willingness to use PrEP. --- Conclusions In spite of the above limitations, we conclude that most MSM in this study were hypothetically willing to take PrEP in the future, in particular those who were eligible according to formal PrEP criteria. --- Recommendations To increase PrEP uptake among those eligible but unwilling to take it, we recommend strategies that modify HIV risk perceptions and address potential misconceptions about PrEP. We also recommend further research to explore to what extent condom use is being replaced by PrEP at a community level, and how condoms can be promoted alongside PrEP. For PrEP to work optimally at a population level, its promotion should be embedded in a comprehensive combination prevention strategy, tailored to information and prevention needs, and including de-stigmatization of PrEP at the community level. Author Contributions: J.B. developed the data analysis plan, conducted the formal data analysis and wrote the first and subsequent drafts of the paper. T.R. was involved in the conceptualization of the study, development of the questionnaire and funding acquisition, supervised the data analysis and was involved in the writing of the paper. B.V. was involved in the conceptualization of the study and the writing of the paper. M.L. was involved in the conceptualization of the study and the writing of the paper. C.N. was involved in the conceptualization of the study and development of the questionnaire, and supervised the writing of the paper, including reviewing and editing.
Funding: This research was funded by Gilead Sciences Benelux (funding year: 2016).
Men who have sex with men (MSM) are at high risk of acquiring HIV in Belgium. This study explores MSM's hypothetical willingness to use pre-exposure prophylaxis (PrEP), assesses it against formal PrEP eligibility criteria, and identifies factors associated with incongruence between eligibility and willingness. We used data from an online survey of n = 1444 self-reported HIV-negative MSM. Participants were recruited through the social media of MSM organizations and dating apps. Univariate analysis described PrEP willingness and eligibility; bivariate analyses examined how specific covariates (socio-demographic, knowledge-related, and attitudinal and behavioral factors) were associated with eligibility and willingness. About 44% were eligible for PrEP and about 70% were willing to use it. Those who were eligible were significantly more likely to be willing to take PrEP (p < 0.001). Two incongruent groups emerged: 16% of eligible participants were unwilling, and 58% of ineligible participants were willing, to use PrEP. Factors associated with this incongruence were sexual risk behavior, HIV risk perception, partner status, PrEP knowledge, and attitudinal factors. Because the two groups differ in their profiles, it is important to tailor HIV prevention and sexual health promotion to their needs. Among those at risk but not willing to take PrEP, misconceptions about PrEP and inadequate risk perception should be addressed.
Introduction In the context of population ageing, 'care stands alongside the other great challenges, such as climate change, that we must face at the global level and in our own lives' (Fine, 2012: 66). Many countries rely heavily on care provided by unpaid family members or friends yet, amid increasing debates around 'balancing the work ethic with the care ethic' (Williams, 2004: 84), there is currently something of an 'impasse' concerning unpaid care and employment (Fine, 2012: 58). The need for care is rising and governments are keen to support the provision of unpaid care to meet this need. At the same time, partly to reduce the publicly funded costs of pensions, governments are extending working lives and encouraging older workers to continue in employment (Fine, 2012). However, older people of working age are those who are most likely to provide unpaid care, which is often incompatible with employment, particularly when provided for long hours. In this context, 'helping carers to combine caring responsibilities with paid work' is becoming a key policy objective in many countries (Colombo et al., 2011: 85). In England, over the last two decades, there has been an emphasis in government policy on enabling people to combine unpaid care and employment (Her Majesty's Government (HMG), 2008; HMG, 1999). The Coalition Government's Carers' Strategy has four priority areas, one of which is 'enabling those with caring responsibilities to fulfil their educational and employment potential' (HMG, 2010: 6). The emphasis in policy relating to unpaid care and employment in England, as in other countries, has primarily been on 'flexible working' as part of a work/life balance agenda (Fine, 2012; HMG, 2010, 2008). In Britain, the Work and Families Act 2006 gave employees who care for adults the right to request flexible working arrangements and, from June 2014, the implementation of the Children and Families Act 2014 extended this right to all employees.
However, as well as an emphasis on flexible working, there is now an emphasis on 'replacement care' to support work and care in England. The term 'replacement care' was initially used in government policy around carers and employment in the 2008 Carers' Strategy, which included a commitment to fund 'replacement care for those who are participating in approved training', in order to help carers to re-enter the labour market (HMG, 2008: 100). In the Coalition Government's Carers' Strategy, there is a new emphasis on developing 'social care markets' partly to meet carers' needs for 'replacement care to enable them to continue to work' (HMG, 2010: 16). The current emphasis on 'replacement care' goes further than previous policy because it implies ongoing support for working carers, rather than temporary support to help carers to re-enter the labour market. The Care Act 2014 stated that carers' assessments must consider whether the carer wants to work and introduced a new duty on local authorities to provide support to meet carers' needs (Care Act 2014). Explanatory notes make it clear that a carer's need for support may be met by providing support directly to the cared-for person, for example by providing 'replacement care' (House of Commons, 2014). The notes also make it clear that 'replacement care' refers to paid support and services for the cared-for person, stating that carers will not be charged for care and support provided to the adult needing care. The emphasis on 'replacement care' in government policy is an important development because it represents a marked change from previous government policies on carers in England. Previous UK governments had rejected any notion of replacing, or substituting, unpaid care with paid services (Pickard, 2012, 2001). In terms of the conceptualisation of carers in the service system, an emphasis on 'replacement care' is consistent with a 'superseded carer' model (Twigg, 1992).
As such, it involves recognition of the 'dual focus of caring', acknowledging that caring takes place in a relationship and that policy should therefore focus on both the disabled or older person and the carer (Twigg, 1996: 85-6). Recognition of the need to provide better services for disabled and older people, as a means of supporting or replacing carers, is consistent with key approaches to policy around disability and caring, including disability rights and feminist approaches (Arksey and Glendinning, 2007; McLaughlin and Glendinning, 1994; Parker, 1993a). However, it is important to note that the Coalition Government sees 'replacement care' as taking the form of services that would be provided through 'social care markets' and is therefore consistent with a neo-liberal approach to care provision (HMG, 2010: 16). A joint report by the government and employers emphasises 'ways in which people can be supported to combine work and care, and the market for care and support services can be stimulated to grow to encompass their needs' (HMG and Employers for Carers, 2013: 7), a position recently restated in the government's Carers' Strategy National Action Plan (HMG, 2014: 37). Despite the new emphasis on 'replacement care' in England, little is known about its effectiveness as a means of supporting carers in employment in this country. Lilly and colleagues, in their systematic review of the international literature on unpaid care and employment, identify 'the relationship between the use of paid (formal) home-care services and unpaid caregiver employment' as needing further international analysis (Lilly et al., 2007: 675). They could identify only four papers on this issue in the period covered by their review, all from the United States (Bullock et al., 2003; Covinsky et al., 2001; Doty et al., 1998; White-Means, 1997).
Two further studies, one carried out in the United States (Scharlach et al., 2007) and one cross-nationally (Lundsgaard, 2006), have also been reported recently. The existing international literature on the effectiveness of paid services as a means of supporting unpaid carers in employment is inconclusive. Two of the studies from the United States show a positive relationship between the use of paid home-care services by the care-recipient and carers' employment rates (Scharlach et al., 2007; Doty et al., 1998). Moreover, a study carried out for the Organisation for Economic Co-operation and Development (OECD) found that countries with more extensive formal home-care provision, such as the Scandinavian countries, tend to have higher employment rates for mid-life women than countries with limited or average formal home-care provision, such as the UK (Lundsgaard, 2006). However, two of the studies from the United States show no relationship between the use of paid services and carers' labour force participation rates (Bullock et al., 2003; White-Means, 1997). Further, one of the studies from the United States suggests that there is a negative relationship between the use of formal services by the care-recipient and carers' employment rates (Covinsky et al., 2001). That study shows that carers of people who use formal services are more likely to reduce their labour market hours than carers of people who do not receive services, suggesting that higher levels of both types of care may reflect the care-recipient's increasing care needs. Not only is the existing international literature on the effectiveness of paid services as a means of supporting working carers inconclusive, but none of the studies relates specifically to England, where a policy advocating 'replacement care' to enable carers to work is being proposed.
Research carried out in other countries is not necessarily applicable to England, because of differences in labour market conditions, community care arrangements and financing mechanisms for health and social care. However, there appear to have been no previous peer-reviewed studies on the effectiveness of services to support carers in employment in England. There are studies showing that access to services by working carers is low and that employed carers would like more service support (Milne et al., 2013; Yeandle et al., 2007; Phillips et al., 2002), but this is not in itself evidence that the provision of such support would be effective in supporting carers' employment. There are also small-sample qualitative studies, providing examples of paid services that enable carers to work (Vickerstaff et al., 2009; Arksey and Glendinning, 2008; Seddon et al., 2004), but there have been no previous studies of the effectiveness of services in supporting working carers using large-scale survey data. There is therefore a gap in the evidence relating to the effectiveness of paid services as a means of supporting working carers in England. The aim of the present paper is to contribute towards filling this gap by using large-scale survey data to examine how far paid care services for the cared-for person are effective in supporting carers' employment in England and, if so, which services are most effective. The paper also aims to explore the implications of the results for the current policy emphasis in England on 'replacement care' for working carers. --- Data and methods In order to examine the effectiveness of paid services in supporting working carers, the analysis examines the association between the use of paid social care services by the cared-for person and the employment rates of unpaid carers, controlling for covariates. An association between use of services by the cared-for person and carers' employment may not in itself indicate a causal connection.
However, a positive association between paid services for the cared-for person and carers' employment rates can be regarded as a necessary condition if services for the cared-for person are to be effective in supporting carers' employment. If the analysis finds no association between use of services by the cared-for person and carers' employment, it is unlikely that services would be effective in supporting carers' employment. The analysis uses the 2009/10 Personal Social Services Survey of Adult Carers in England (PSS SACE) (Health and Social Care Information Centre (HSCIC), 2010a). The survey includes questions on both the employment of the carer and the services received by the cared-for person (described more fully below). Moreover, the survey has a very large sample size, comprising approximately 35,000 carers. Excluding the ten-yearly census for England and Wales, the 2009/10 PSS SACE was, at the time it was collected, the largest survey of carers ever carried out in England. The 2009/10 PSS SACE was administered through local authorities (Councils with Adult Social Services Responsibilities (CASSRs)) and was designed for adult carers in contact, either directly or via the person they cared for, with social services (HSCIC, 2010a). The 2009/10 PSS SACE was the first national survey of carers in contact with councils. Since 2012, the PSS SACE has been conducted biennially and is compulsory for CASSRs, but in 2009/10 participation in the survey was voluntary and 90 out of 152 CASSRs participated. Although not all councils participated, the Health and Social Care Information Centre regards the survey as representative of CASSRs in England (HSCIC, 2010b: 8). In the survey, carers are defined as people who 'look after a family member, partner or friend in need of support or services because of their age, physical or learning disability or illness, including mental illness' (HSCIC, 2010a: 92). Questions about unpaid care provision relate to the main cared-for person.
1 The eligible population in the PSS SACE is defined as carers aged eighteen and over, caring for an adult aged eighteen and over, where the carer has been assessed or reviewed by social services during the previous year and, in some CASSRs, carers identified from the records of service users (known as 'carers by association'). An eligible population of 175,600 carers was identified and 87,800 were randomly selected and sent a postal questionnaire. A total of 35,165 carers then responded, giving a response rate of approximately 40 per cent (HSCIC, 2010a). Carers in the survey are more likely to care for longer hours than carers nationally (HSCIC, 2010b). However, as explained below, it is with carers who provide long hours of care that this paper is concerned. As already indicated, the survey asks the carer about both the services received by the person they look after and their own employment (HSCIC, 2010a: 94, 108). The question on service receipt asks whether the care-recipient has used a range of services in the last twelve months, including care home, personal assistant, home care/home help, day centre/day activities, lunch club and meals-on-wheels (HSCIC, 2010a: 94). The analysis of services in this paper combines day centre, day activities and lunch club into one service ('day care'). Home care/home help ('home care') refers primarily to help with personal care. The service described as 'care home' refers to either short-term breaks or permanent residence in a care home, but, for reasons explained later, it seems likely that care home use in the survey primarily refers to short-term breaks. 'Personal assistants' are people employed by individuals with care needs, who are often in receipt of personal budgets. 'Meals-on-wheels' are meals delivered to individuals at home. Services are classified according to whether they are used on their own or in combination with other services (described more fully later). 
The employment variable utilised here measures the labour force participation rate, that is, whether or not the respondent is employed, including self-employed. Although the distinction between full-time and part-time employment is also important, the focus of much of the international literature on caring and employment is on labour force participation per se, and this is clearly of importance in its own right (Lilly et al., 2007). The analysis focuses on 'working age' carers aged between eighteen and state pension age, which, at the time the data were collected, was sixty for women and sixty-five for men. Because relationships around unpaid care and employment vary greatly by gender, it is customary to examine men and women separately wherever possible (cf. Evandrou and Glaser, 2002), and this practice is observed here. The analysis focuses on 'intense' carers, providing unpaid care for ten or more hours a week, because previous research suggests that it is at this threshold that unpaid care has a negative effect on employment in England and carers' employment is 'at risk' (King and Pickard, 2013). The analysis begins by describing the characteristics of unpaid carers and the receipt of paid services by the people cared for in the PSS SACE. The analysis then looks at bivariate associations, in order to identify patterns that appear to be occurring in the data, before adjustment for relevant covariates. The bivariate analysis initially compares the employment rates of carers where the care-recipient does and does not use paid services. The bivariate analysis then looks at the employment rates of women and men carers by a range of variables that may affect provision of unpaid care and employment (described in detail later). Multivariate logistic regression analysis is then undertaken, with the dependent variables being whether or not the carer is in employment (described fully later). 
In all the analyses presented here, a level of 0.05 is used as the criterion to determine significance. All analyses are performed using the Stata 12.1 software package (StataCorp, 2011). --- Characteristics of unpaid carers and service receipt by cared-for people in PSS SACE Of the approximately 35,000 carers in the 2009/10 PSS SACE, approximately 10,500 are of 'working age' providing intense care for ten or more hours a week (Table 1). Not all were asked questions on service receipt and/or employment, because councils could choose whether to include these (and some other) questions. The sample size of intense carers under state pension age, answering questions on both services and employment, is 6,304 respondents (4,106 women and 2,198 men). The characteristics of these intense 'working age' carers are similar to the characteristics of all intense 'working age' carers in the sample and, for both, the overall employment rates are between 46 and 47 per cent for women and 38 per cent for men (Table 1). The higher employment rate for women carers in the sample probably reflects the higher percentage of women carers who work part-time, since part-time work is more compatible than full-time work with unpaid care provision (Evandrou, 1995). Table 2 shows the distribution of unpaid carers in the survey according to the use of services by the person they care for. In the table, carers who look after someone who receives at least one paid service are distinguished from those who look after someone who receives no services. The types of services received are further disaggregated into a number of mutually exclusive categories: use of one service only, use of combinations of two services only and use of combinations of three services only. The most frequently occurring combinations of services are included, 2 with the remaining combinations categorised as 'other combinations of paid services'. 
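The mutually exclusive grouping described above can be sketched as follows. This is an illustrative reconstruction, not the PSS SACE coding scheme: the service names, input format and cut-off for pooling combinations are assumptions.

```python
# Sketch: grouping a care-recipient's service receipt into mutually
# exclusive categories, as described in the text. Illustrative only.
SERVICES = ["care home", "day care", "home care",
            "meals-on-wheels", "personal assistant"]

def service_category(received):
    """Classify a set of services into one mutually exclusive category."""
    used = sorted(s for s in received if s in SERVICES)
    if not used:
        return "no services"
    if len(used) == 1:
        return f"{used[0]} only"
    if len(used) <= 3:
        # In the paper, frequent combinations are named individually;
        # here every 2- or 3-service combination is simply labelled.
        return " + ".join(used)
    return "other combinations of paid services"

print(service_category([]))                          # no services
print(service_category(["home care"]))               # home care only
print(service_category(["home care", "care home"]))  # care home + home care
```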
Services are categorised in this way in order to examine the independent effects of each individual service, as well as the main combinations of services. Table 2 shows that, although the PSS SACE is a survey of carers in contact with local authorities, not everyone looked after by carers receives paid services. Of the 4,106 women carers in the sample, 29 per cent (1,183) are looking after someone who does not receive any services, while this is true of 675 (31 per cent) of the 2,198 men. Among carers looking after someone who receives a paid service, the majority look after someone receiving only one service, the most frequently received being either home care or day care, with fewer people receiving help from a personal assistant, care home or meals-on-wheels. Of those caring for someone who receives more than one service, most receive two services. Some services are more likely to be received in combination with another service than on their own. In particular, care home and meals-on-wheels are both more likely to be received in combination with home care than on their own. Use of a care home is particularly likely to be combined with other services, including day care as well as home care, and this suggests that the service users are not permanently resident in care homes, where all these services would be provided. Therefore use of a care home in the survey is likely to refer primarily to short-term breaks. --- Employment rates of unpaid carers by cared-for people's use of paid services The analysis now explores whether the employment rates of unpaid carers vary according to the receipt of paid services by the cared-for person. Table 3 shows the employment rates of carers (according to gender) who provide unpaid care for ten or more hours a week. Employment rates are shown both where the care-recipient receives paid services and where they do not. 
The table shows that, in initial bivariate analysis, women and men providing intense unpaid care seem more likely to be in employment if the care-recipient receives at least one paid service than if the care-recipient does not receive any services. 3 The employment rate of women providing intense unpaid care for someone who does not receive a paid service is 37.3 per cent, but is 50.3 per cent where the cared-for person receives at least one service. The equivalent figures for men are 28.9 per cent and 42.4 per cent, respectively. Moreover, the employment rates of women and men caring for someone who receives any individual service or combination of services are always higher than the employment rates of those whose care-recipient does not receive any services.
--- Employment rates of carers by characteristics of carers, cared-for people and caring
As already indicated, as well as the receipt of paid services by the cared-for person, the employment rates of unpaid carers may be associated with other variables. These include the carer's age, health, ethnicity and region of residence; the health of the care-recipient; whether or not he or she lives with the carer; and the hours of unpaid care provided by the carer. Previous British studies suggest that the employment rates of carers are likely to be higher among people aged in their thirties and forties (rather than those nearing retirement), who are in good health; who live in the South East or East of England; who care for someone with relatively low disability or who does not co-reside with them; or who provide fewer hours of care (Carmichael et al., 2010; Buckner et al., 2009; Heitmueller, 2007; Henz, 2004). Ethnicity is also important as there is variation in the extent to which people from different ethnic backgrounds provide intense care (Young et al., 2005). In the analysis presented here, the age variable distinguishes those aged eighteen to thirty-four, thirty-five to forty-nine and fifty to state pension age.
Carer health is measured in terms of presence or absence of illness or disability. The ethnicity variable distinguishes those with and without a black and minority ethnic background. Region of residence distinguishes nine English regions. The indicator for health of the care recipients distinguishes those with and without a condition that affects them mentally (dementia, mental health problem, learning disability/difficulty), given evidence that caring for someone who is affected mentally is more demanding for carers (Parker, 1993b). The 'locus of care' variable distinguishes cared-for people who do and do not live with the carer. The 'intensity of care' variable distinguishes care provided for ten to nineteen, twenty to thirty-four, thirty-five to forty-nine, fifty to ninety-nine and a hundred or more hours per week. Table 4 shows the employment status of women and men of working age who provide unpaid care for ten or more hours a week, in the 2009/10 PSS SACE, by a range of characteristics. Most of the results are as expected from the literature. Employment rates of carers appear to be higher if they are in their thirties and forties rather than if they are nearing retirement, although women carers in the younger age-groups (eighteen to thirty-four years) also have comparatively low employment rates. Employment rates of carers also seem higher where carers do not have an illness or disability, where the care-recipient is not co-resident with the carer and where fewer hours of unpaid care are provided. The employment rates of women carers vary significantly by region, with the South East appearing to have the highest rates, although employment rates of men providing care are not significantly different by region. In addition, the survey suggests that women carers from black and minority ethnic (BME) backgrounds seem less likely to be in employment than those who are not from BME backgrounds, although the reverse seems true for men. 
In the bivariate analysis shown in Table 4, there is one relationship that seems unexpected from the literature. The employment rates of carers do not seem to vary significantly according to the health of the care-recipient, measured here by whether the person cared for has a mental health problem.
--- Relationship between employment rates of unpaid carers and care-recipients' use of paid services, controlling for covariates
The associations between the employment rates of unpaid carers and care-recipients' use of services are tested further using multivariate analysis, controlling for a range of covariates. The dependent variable is the employment status of women or men providing unpaid care for ten or more hours a week. Four models are reported, two each for women and men carers. The first two models include receipt of at least one service, while the second two models include receipt of individually identified services and combinations of services. Each model initially includes carer age, health, ethnicity and region of residence, whether or not the carer lives with the care-recipient, whether or not the care-recipient has a mental health problem and the intensity of caring. All of these latter variables are initially included, irrespective of whether they are significant in bivariate analysis, since their associations with employment status may vary when other factors are taken into account. The odds ratio for each variable is estimated, along with the significance level and 95 per cent confidence intervals (CIs). For each model, we compared the fit (based on likelihood ratio chi-squared statistics) of the full model, with all covariates included, and the final model, including only significant covariates. In each case, the final model has a better fit than the full model, and is reported here.
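The model-comparison logic described above can be illustrated in miniature. For a single binary predictor (any paid service vs none), the logistic-regression odds ratio reduces to the cross-product ratio of the 2x2 table, and a likelihood-ratio statistic compares the fitted model with the intercept-only model. The counts below are invented for illustration; they are not PSS SACE figures.

```python
# Sketch: odds ratio and likelihood-ratio statistic for a logistic
# regression of employment on a single binary predictor (any paid
# service vs none). Illustrative counts, not PSS SACE data.
from math import log

def logit_2x2(emp_service, unemp_service, emp_none, unemp_none):
    """Odds ratio and LR chi-squared for employment ~ service receipt."""
    odds_ratio = (emp_service * unemp_none) / (unemp_service * emp_none)

    def loglik(groups):
        # Bernoulli log-likelihood with each group at its observed rate.
        ll = 0.0
        for employed, total in groups:
            p = employed / total
            ll += employed * log(p) + (total - employed) * log(1 - p)
        return ll

    n_s, n_n = emp_service + unemp_service, emp_none + unemp_none
    ll_full = loglik([(emp_service, n_s), (emp_none, n_n)])
    ll_null = loglik([(emp_service + emp_none, n_s + n_n)])
    return odds_ratio, 2 * (ll_full - ll_null)

or_, lr = logit_2x2(emp_service=60, unemp_service=40,
                    emp_none=30, unemp_none=70)
print(round(or_, 2))  # 3.5
```

In the paper, the same likelihood-ratio comparison is used to choose between the full model (all covariates) and the final model (significant covariates only).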
Table 5 shows the results of the logistic regression analysis to determine the factors associated with the employment status of women providing intense unpaid care, including the use of at least one paid service by the cared-for person in the model, controlling for covariates. There is a significant association between the employment rate of women carers and the use of at least one paid service by the cared-for person. Women who provide unpaid care for ten or more hours a week have significantly higher odds (1.57, CI 1.34-1.85) of being in employment if the person they care for receives at least one paid service compared with if they receive no services, controlling for covariates. 4 Other factors significantly associated with being in employment for women carers are their age, health, region of residence, hours of care provided and co-residence with the care-recipient. Mid-life women and those in their fifties are more likely to be in employment than younger women, although this effect tends to be less marked for women nearing retirement. Women are also significantly more likely to be in employment if they do not themselves have an illness/disability, if they live in the South East of England and if they care for relatively few hours a week. In addition, a somewhat surprising result is that co-resident carers are more likely to be in employment than extra-resident carers. Factors that are not associated with women carers' employment status are the ethnicity of the carer and whether the cared-for person has a mental health problem. (These results are discussed in more detail below.) Table 6 shows the results of the multivariate analysis to determine the factors associated with the employment status of men providing intense unpaid care, including the use of at least one paid service by the cared-for person. There is a significant association between the employment rate of men providing care and the use by the cared-for person of at least one service.
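Footnote 4's reading of the reported odds ratio can be made concrete: an odds ratio of 1.57 means 57 per cent higher odds, and the implied probability depends on the baseline rate. The 37.3 per cent baseline below is the unadjusted rate from the bivariate analysis, used purely for illustration since the odds ratio itself is covariate-adjusted.

```python
# Sketch: converting an odds ratio into an implied probability, given a
# baseline rate. The OR of 1.57 is from the text; the 37.3% baseline is
# the unadjusted bivariate rate, used here purely for illustration.
def apply_odds_ratio(baseline_prob, odds_ratio):
    """Probability implied by multiplying the baseline odds by odds_ratio."""
    odds = baseline_prob / (1 - baseline_prob) * odds_ratio
    return odds / (1 + odds)

print(f"{(1.57 - 1):.0%} higher odds")          # 57% higher odds
print(round(apply_odds_ratio(0.373, 1.57), 3))  # 0.483
```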
Men who provide unpaid care for ten or more hours a week have significantly higher odds (1.69, CI 1.34-2.12) of being in employment if the person they care for receives at least one paid service compared with if they receive no services, controlling for covariates. Other factors significantly associated with being in employment for men who provide unpaid care are their health and hours of care provided. Men who do not have an illness/disability and who care for relatively few hours are significantly more likely to be in employment. Factors that are not associated with the employment status of men providing care are their age and ethnicity, the region of residence of the carer, whether the cared-for person has a mental health problem and whether the care-recipient is co-resident with the carer. (Again these results are discussed in more detail below.)
--- Relationship between employment rates of unpaid carers and care-recipients' use of individual services and combinations of services, controlling for covariates
Multivariate analysis is used to look at the associations between the employment rates of unpaid carers and the care-recipients' use of individual services and combinations of services. As before, the dependent variable is the employment rate of women or men providing unpaid care for ten or more hours a week. Tables 7 and 8 show, respectively, the results for women and men carers. The models control for other factors, including the characteristics of the carer, the cared-for person and the nature of the care provided, with the patterns of significance of these other factors being similar to those in the previous models. Tables 7 and 8 show that care-recipients' use of home care only and use of a personal assistant only are significantly associated with the employment rates of both women and men carers.
Women and men who are providing unpaid care for ten or more hours a week have significantly higher odds (1.64 and 1.69 respectively) of being in employment if the person they care for receives home care compared with if they receive no services. Similarly, women and men who are providing intense unpaid care have significantly higher odds (1.74 and 2.45 respectively) of being in employment if the person they care for receives help from a personal assistant compared with if they receive no services. Care-recipients' use of day care only and meals-on-wheels only are also significantly associated with women carers' employment. Care-recipients' use of a care home only is not significantly associated with the employment rates of either women or men carers, but use of this service is significantly associated with carers' employment when combined with other services (Tables 7 and 8). Care-recipients' use of a care home, when combined with home care, is significantly associated with the employment of men carers, while care-recipients' use of a care home is significantly associated with women carers' employment when combined with day care or both home care and day care. In addition, although neither care-recipients' use of day care on its own nor their use of meals-on-wheels on its own is significantly associated with the employment rates of men carers, each service is significantly associated with the employment of men providing care when combined with home care.
--- Discussion and conclusions
This study suggests that there is a positive association between the employment rates of unpaid carers in England and receipt of paid services by the person they care for. The analysis has focused on carers whose employment is 'at risk' which, consistent with previous research (King and Pickard, 2013), is defined here as those providing care for ten or more hours a week.
Using large-scale survey data, the 2009/10 PSS SACE, the study finds that, where the cared-for person receives at least one paid service, women and men providing unpaid care for ten or more hours a week are more likely to be in employment than if the cared-for person does not receive any services. A positive association between carers' employment and receipt of paid services is a necessary condition if services for the cared-for person are effective in supporting carers' employment. Therefore, our results give some support to the hypothesis that services for the cared-for person are effective in supporting carers' employment. Carers' employment in England is associated with receipt by the cared-for person of some services more than others. The study finds that use by the care-recipient of home care only, or help from a personal assistant only, are both positively associated with the employment rates of women and men carers, while care-recipients' use of day care and meals-on-wheels are associated with women carers' employment. Gender differences in the association between paid services and carers' employment may be associated with the greater likelihood of women carers working part-time (Evandrou, 1995), since a service like day care, which tends not to be utilised by the care-recipient every day, may be more helpful to part-time than full-time workers. In addition, the study finds that use by the care-recipient of a care home only is not significantly associated with the employment rates of carers, although this service is associated with carers' employment when combined with other services. One reason for the difference in the association between carers' employment and care-recipients' use of this particular service may again be the frequency with which the service is provided.
The study has suggested that use of a care home is likely to refer primarily to short-term breaks, a service that tends to be provided for a limited number of weeks a year, whereas, to facilitate employment, services that are provided regularly during the working week are likely to be needed. The results show that a number of factors, in addition to the cared-for person's receipt of paid services, are positively associated with carers' employment, including good health on the part of carers and providing fewer hours of care, as well as, for women carers, being in mid-life or older (compared to younger carers) and living in the South East. These results are broadly consistent with other studies in Britain (Carmichael et al., 2010; Buckner et al., 2009; Heitmueller, 2007; Henz, 2004). One result that is somewhat surprising is the finding that, controlling for other variables, women providing co-resident care are more likely to be in employment than those providing care to someone in another household, whereas the existing literature suggests that the employment rates of carers are higher when care is provided to someone who does not co-reside with them (Heitmueller, 2007). Closer examination of our results shows that co-resident women carers are more likely to be in employment than those providing extra-resident care when they care for thirty-five to forty-nine hours a week. This finding may be associated with the effect of the receipt of carer's allowance, since this benefit is only paid to carers providing care for at least thirty-five hours a week and receipt of carer's allowance can limit carers' employment opportunities, while interactions with other benefits could mean this differentially affects extra-resident carers (Fry et al., 2011). This could not, however, be explored further because information on carer's allowance was not included in the dataset used here.
Other factors, such as the health of the care-recipient, have also been shown elsewhere to affect carers' employment (Heitmueller, 2007), but were not significant in the multivariate analysis reported here. This may be because the variable used here to indicate the care-recipient's health did not sufficiently distinguish between those with severe and relatively minor problems, and this represents a limitation of the analysis (discussed in more detail below). However, the key implication of our multivariate analysis is that the employment status of women and men providing long hours of unpaid care is likely to be associated, not just with factors like their health and hours of care provision, but also with the use of paid services by the person they care for. The results presented here have important implications for social policy. The findings support the policies of recent governments in England of emphasising 'replacement care' as a means of supporting unpaid carers' employment (HMG, 2008, 2010). This is because the results show that paid services for the cared-for person are associated with higher employment rates among unpaid carers. A key policy implication is therefore that, if a policy objective is to support people to combine unpaid care and employment, then there needs to be greater access to paid services for disabled and older people who are looked after by unpaid carers. More widely, our findings support disability rights and feminist approaches to policy, which have argued for better services for disabled and older people, as a means of supporting carers and of bringing together the interests of both carers and the people they care for (Arksey and Glendinning, 2007; McLaughlin and Glendinning, 1994; Parker, 1993a). However, our results also raise two important issues around recent government policies emphasising 'replacement care' in England. First, the evidence raises questions about the use of the term 'replacement care'.
There is no evidence from this study that unpaid carers are replaced by paid services for the person they care for. The unpaid carer is still providing care, even when paid services are provided to the person they look after. This suggests that paid services for the care-recipients are better described as complementing or supplementing the care provided by unpaid carers. Use of this latter terminology would be more consistent with the international literature on substitution between formal and informal care, which suggests that paid domiciliary services, provided to disabled and older people living in their own homes, do not tend to replace the care provided by unpaid carers (Motel-Klingebiel et al., 2005). 5 What this suggests is that a new term is needed for 'replacement care'. In the meantime, it is advisable to use the term in inverted commas, as in this paper.
The second issue around a policy of 'replacement care', as currently described in English government policy documents, relates to the emphasis on 'the market' to meet the needs of unpaid carers and the people they look after. The evidence from this study relates to unpaid care for adults, for whom most unpaid care in England is provided (HSCIC, 2010c). With regard to care for adults, the costs of 'replacement care' are likely to fall to the care-recipient, typically a disabled or older person, who may lack the resources to purchase care on 'the market' (Lewis and West, 2014). It is therefore likely that more publicly funded 'replacement care' is also needed. It is not clear that government policy would be so keen to advocate 'replacement care' if this was publicly funded. As the feminist literature on unpaid care policy has long recognised, the major disadvantage of increasing publicly funded services to disabled and older people with carers is the cost (McLaughlin and Glendinning, 1994). Yet public investment in services could lead to savings in public expenditure. It has been estimated that the public expenditure costs of carers leaving employment in England are more than a billion pounds a year, based on the costs of carer's allowance and lost tax revenues on forgone incomes (Pickard et al., 2012). Therefore, greater public investment in 'replacement care' to support carers in employment could represent good value for money. Further research examining the policy of 'replacement care' is now needed. First, there is a need for evidence around the costs of providing publicly funded 'replacement care' and whether these would be offset by public expenditure savings. In other words, there is a need for evidence not just about the effectiveness of 'replacement care' as a means of supporting working carers, but about its cost-effectiveness.
Second, this paper has used cross-sectional data, the 2009/10 PSS SACE, to examine the association between paid services for the care-recipient and carers' employment, but, in order to examine causation, longitudinal analysis would be preferable. In particular, it has not been possible to show conclusively here whether it is services for the cared-for person that enable carers to remain in employment, or whether employed carers are better able to purchase services for the care-recipient. The international evidence suggests that it is more often the care-recipient, rather than the carer, who makes payments for formal care (Doty et al., 1998), suggesting that it is services that enable carers to work, but longitudinal data would help to establish the direction of causal influence in England. Third, there is a need for more informative data on the health of the care-recipient. The dataset used here did not contain detailed information on the health of the care-recipient and, specifically, did not allow for those with severe problems to be distinguished from those with relatively minor problems. In addition, the data used here did not allow for an examination of the potential impact of new technology on carers' employment. Many of these issues are now being pursued by the authors in further research on 'replacement care' as a means of supporting working carers (Pickard et al., 2013). Nevertheless, what this study has shown is that there is already some evidence to support a policy of 'replacement care' and that this type of policy may be central to resolving the current impasse around unpaid care and employment.
--- Notes
1 The main cared-for person is the person that the carer spends most time helping. If carers spend an equal time caring for two or more people, they are asked to answer in relation to the person who lives with them.
If carers live with two or more people that they spend an equal amount of time caring for, they are asked to choose one person as the main person they care for. 2 Combinations of two paid services are included if the underlying sample size of carers is at least thirty. The two most frequently used combinations of three paid services are included. The analysis includes each service used on its own, but where sample numbers are small, as in the case of meals-on-wheels, figures in tables are shown in square brackets. 3 Bivariate tabulations do not adjust for covariates; results adjusted for covariates are reported later in the paper. 4 This can be approximately interpreted to mean that, controlling for other factors, women carers have 57 per cent higher odds of being in employment if the person they care for receives at least one paid service compared with if they receive no services. 5 Where substitution between formal and informal care does take place, the evidence suggests that it occurs when the disabled or older person is in permanent residential care, rather than in their own home (Pickard, 2012).
https://doi.org/10.1017/S0047279415000069 Published online by Cambridge University Press. Linda Pickard, Derek King, Nicola Brimblecombe and Martin Knapp.
This paper explores the effectiveness of paid services in supporting unpaid carers' employment in England. There is currently a new emphasis in England on 'replacement care', or paid services for the cared-for person, as a means of supporting working carers. The international evidence on the effectiveness of paid services as a means of supporting carers' employment is inconclusive and does not relate specifically to England. The study reported here explores this issue using the 2009/10 Personal Social Services Survey of Adult Carers in England. The study finds a positive association between carers' employment and receipt of paid services by the cared-for person, controlling for covariates. It therefore gives support to the hypothesis that services for the cared-for person are effective in supporting carers' employment. Use of home care and a personal assistant are associated on their own with the employment of both men and women carers, while use of day care and meals-on-wheels is associated specifically with women's employment. Use of short-term breaks is associated with carers' employment when combined with other services. The paper supports the emphasis in English social policy on paid services as a means of supporting working carers, but questions the use of the term 'replacement care' and the emphasis on 'the market'.
--- Introduction
According to the WHO, 1 obesity is defined as an abnormal excess in body fat, which represents a health risk. It is a major risk factor for many non-communicable diseases, 1-5 which incur high health costs 6 and increased mortality. It is estimated that, in 2016, 13% of adults worldwide suffered from obesity. 1 Many epidemiological studies show that the prevalence of obesity has clearly increased in recent decades. [7][8][9][10][11][12][13][14][15][16][17][18][19] Since 1975, the rate has nearly tripled globally. 1 Trend prognoses indicate that obesity will also further increase in the future. 20,21 However, obesity is largely preventable. 1 First, however, the identification of major risk groups is necessary, allowing specific support for target groups at the individual and societal levels. For this reason, it is necessary to investigate this serious public health problem within the wider framework of the general population. Longitudinal data on rates of obese subjects collected over several decades allow a precise assessment of trends in obesity. 22 The authors found only a few current long-term studies of obesity in European countries which were extensive enough to allow reliable estimates of the prevalence of obesity in subgroups of adults. 8,22 In Austria, the latest trend analysis for obesity in the general adult population covered the time span 1973-2007. During that time, the prevalence clearly increased, and rates were consistently higher among women than men. The age-adjusted prevalence of obesity was 14.5% for the whole Austrian population in 2007. 22 In 2014, the most recent representative health survey was conducted in Austria, which allows researchers to obtain information on the further development of obesity on the national level.
Therefore, the aim of this study was to analyse the most current obesity trends (1973-2014) for Austrian adults according to their sex, age and educational status. Studies reported that the social gradient of subjects should be considered when investigating obesity. 18,19,[22][23][24] Socioeconomic health inequalities should be addressed from a gender perspective, since the effects of socioeconomic status differ between women and men. 24 Another objective of our study was to present the magnitude of inequality related to obesity among educational groups for women and men during the study period. For each survey, a random sample was drawn from the national population register. For the sake of representation, the sample was stratified by the 32 administrative districts in Austria. Microcensus data and data from AT-HIS 2007 were collected in standardized face-to-face interviews with persons aged 15 years or older. Interviews were held in private homes or long-term care facilities by trained interviewers. In 2014, computer-assisted telephone interviews were conducted with the participants, and this data was combined with data collected using individual questionnaires. The data were weighted using age, sex and region-specific weights to ensure the representativeness of the sample. 25 The data analysis was limited to data for adults aged ≥20 years, since the AT-HISs concerned only entire age bands (in 5-year intervals). Therefore, data from subjects younger than 20 years of age were excluded in all surveys (n = 64 611). Cases with missing data regarding sex (n = 4124) or body mass index (BMI) (n = 25 585) were excluded. Cases with implausible BMI values (BMI < 10 kg/m² or BMI ≥ 75 kg/m²; n = 11 457) were also removed from the database. The proportion of individuals included in the analysis was 64.7% (N = 194 030; 53.5% female).
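The exclusion rules described above can be sketched as a simple record filter. This is a minimal illustration only: the function and argument names are invented for the sketch, and the reading of the garbled BMI plausibility cut-offs as <10 and ≥75 kg/m² is an assumption based on the surrounding text.

```python
def include_record(age, sex, bmi):
    """Return True if a survey record enters the analysis,
    applying the exclusion rules described in the text."""
    if age is None or age < 20:       # adults aged >= 20 years only
        return False
    if sex is None or bmi is None:    # missing sex or BMI excluded
        return False
    if bmi < 10 or bmi >= 75:         # implausible BMI values removed
        return False
    return True

# example: a 45-year-old woman with BMI 31 enters the analysis
included = include_record(45, "female", 31.0)
```

Applying such a filter record by record reproduces the stepwise exclusions reported in the text (age, missing data, implausible BMI).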
--- Methods --- Data source and sampling --- Variables In all six surveys, demographic, socioeconomic and health data were collected via an interviewer questionnaire. Individuals self-reported their body height (without shoes) and body weight (without clothes). To identify obesity, the BMI (kg/m²) was calculated. Participants with a BMI ≥ 30 were categorized as obese. 1 Four age groups were formed according to the WHO 26 age group codelist: 20-34 years, 35-54 years, 55-74 years and ≥75 years. The age groups were chosen so that the distribution of participants within the groups is similar. 'Educational status' was measured as the highest educational level reached and then categorized as primary school or vocational school (low educational level), secondary school with general qualification for university entrance (middle educational level), or university or college of higher education (high educational level). In German-speaking countries this is a frequently used categorization of the educational level. 3,4,22,27 The International Standard Classification of Education (ISCED) 28 would allow a more precise categorization of educational groups and better international comparability. However, such data are lacking, so a comparable classification of educational status over the entire study period would not have been possible. Analyses for educational status are presented only for the period of 1983-2014, since the educational level was not assessed in the first survey. --- Correcting for self-reporting bias Based on the results of a preliminary study, in which the validity of self-reported body weight and height was investigated, data correction factors for BMI were applied.
26 Correction factors for BMI were only applied to data for individuals 45 years and older, because deviations between self-reported and measured BMI only increased in those subjects (correction factors for women: 45-59 years: +0.41 kg/m², ≥60 years: +1.09 kg/m²; correction factors for men: 45-59 years: +0.50 kg/m², ≥60 years: +0.54 kg/m²). --- Data analysis All statistical analyses were conducted using IBM SPSS Statistics 25.0. Selected and comparable variables were entered into a common database. The crude and age-standardized prevalence values were calculated using the WHO European standard population for direct standardization. Binary logistic regression analyses were conducted for the whole study period using the dichotomous variable obesity as a dependent variable and the survey period as a predictor. Age was integrated as a correction variable, with the youngest age group forming a reference category. To quantify trends in the prevalence of obesity, the percentages of absolute change (AC) were assessed. The aetiologic fraction (AF), a ratio measure, was calculated and represented the subgroup with the greatest relative risk for obesity. The AF denotes the percentage portion of the disease risk. To calculate the AC and AF, the prevalence values for the first (Pf) and last (Pl) years were used, as estimated using binary logistic regression models. The AC was defined as Pl - Pf, and the AF, as (Pl - Pf)/Pl. 29 The exact formulas are as follows: AC = 1/(1 + exp(-(B0 + B·T))) - 1/(1 + exp(-B0)); AF = (RR - 1)/RR; RR (relative risk) = (1 + exp(-B0)) / (1 + exp(-(B0 + B·T))); where B = regression coefficient, B0 = intercept and T = time period in years. The magnitude of inequalities for obesity between educational groups was measured by calculating the relative index of inequality (RII).
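The trend measures above follow directly from the fitted logistic model: the prevalence at time t is 1/(1 + exp(-(B0 + B·T))), and AC, RR and AF compare the first and last survey years. A minimal sketch (the coefficient values in the example are hypothetical, not taken from the paper):

```python
from math import exp

def prevalence(b0, b, t):
    """Predicted prevalence from the logistic trend model at time t."""
    return 1.0 / (1.0 + exp(-(b0 + b * t)))

def trend_measures(b0, b, t):
    """Absolute change (AC), relative risk (RR) and aetiologic
    fraction (AF) between the first (t = 0) and last (t = T) years."""
    pf = prevalence(b0, b, 0.0)   # first survey year
    pl = prevalence(b0, b, t)     # last survey year
    ac = pl - pf                  # AC = Pl - Pf
    rr = pl / pf                  # RR = Pl / Pf
    af = (rr - 1.0) / rr          # AF = (RR - 1)/RR = (Pl - Pf)/Pl
    return ac, rr, af

# hypothetical coefficients for a 31-year period (1983-2014)
ac, rr, af = trend_measures(b0=-2.3, b=0.025, t=31.0)
```

Note that (RR - 1)/RR algebraically equals (Pl - Pf)/Pl, so the two AF expressions given in the text are the same quantity.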
30 The RII describes the percentage of the predicted rate for the lowest level in the hierarchy in relation to the predicted rate for the highest level in the hierarchy. The variable 'educational level' was transformed into the variable 'fractional rank' by ranking the sample by educational level. In doing so, the population at each educational level was allocated a modified ridit score, which was based on the midpoint of the range in the cumulative spread of the population. A binary logistic regression with the dichotomous variable obesity and the predictor fractional rank (correction variable: age) was performed to obtain the exponentiation of the regression coefficient, representing the RII = ((exp(B) - 1) × 100). Statistical tests were two-sided and a P < 0.05 was considered statistically significant. Pearson's χ² test was carried out to analyse the statistical significance of the data during the survey period. --- Ethical concerns Participation was voluntary. Verbal informed consent was obtained from all subjects, witnessed and formally recorded for every survey. This study was approved by the Ethics Committee of the Medical University of Graz (EK-number: 30-077 ex 17/18). --- Results In 2014, the crude prevalence of obesity was 16.8% (95%CI: 16.1-17.5), and the age-standardized prevalence of obesity was 15.8% (95%CI: 15.1-16.6) in the general adult population. Stratified by sex, the prevalence was higher for men than for women (16.8% vs. 14.6%, P < 0.001) only in the latest survey. Subjects aged 55-74 years old showed the highest rates during the period investigated, with more men being affected. The lowest prevalence of obese subjects was observed among the youngest age group, with higher prevalence seen for male adults in the latest survey. Regarding the educational level, the prevalence of obesity was highest among subjects with the lowest educational level.
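The fractional-rank (modified ridit) scoring and the RII transformation described in the Methods can be sketched as follows; the educational group sizes in the example are hypothetical:

```python
from math import exp

def fractional_ranks(group_sizes):
    """Midpoint-of-cumulative-range scores for groups ordered by
    educational level (a modified ridit score)."""
    total = float(sum(group_sizes))
    ranks, cum = [], 0.0
    for n in group_sizes:
        ranks.append((cum + n / 2.0) / total)  # midpoint of the group's range
        cum += n
    return ranks

def rii_percent(b):
    """RII = (exp(B) - 1) * 100, where B is the coefficient of the
    fractional rank in a binary logistic regression on obesity."""
    return (exp(b) - 1.0) * 100.0

# hypothetical population: 600 low, 300 middle, 100 high education
ranks = fractional_ranks([600, 300, 100])  # -> [0.3, 0.75, 0.95]
```

Each group's score is the midpoint of its cumulative population range, so the predictor reflects relative position in the educational hierarchy rather than an arbitrary category code.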
The differences in the rates of obesity between individuals with middle and high levels of education were low, with somewhat lower values seen for participants with a high educational level. The RII consistently showed higher values for women than for men. For men, the RII showed a rising trend during the study period, with similar values obtained in the last two surveys (table 1). In figure 1, the crude and age-standardized prevalence rates observed during the study period are illustrated separately for women and men. The lowest obesity proportion was estimated in 1983. Up until 1991, the prevalence of obesity strongly increased. From 2007 to 2014, the values stabilized for women, while the prevalence of obesity continued to rise for men. Calculated trends for the prevalence of obesity are presented for the period of 1983-2014, since the increase in obesity prevalence began in 1983. In the whole population, an AC of 2.0% for obesity prevalence between 1983 and 2014 was found. The AC for the prevalence of obesity was higher among men than women. A larger AF means a greater dynamic, which was observed for men. The strongest increase in obesity prevalence was noted among the oldest women (13.3%), men between 55 and 74 years (10.9%) and those with a low educational level (women: 1.9%, men: 2.8%). Results for the AC among women with a high educational level showed the lowest increase. The AF was highest among the youngest participants, among women with a middle educational level and among men with a low educational level (table 2). To identify interactions between age and educational level, two educational groups were crossed with the four age groups to gain a more precise outcome. Subjects with a middle educational level were allocated to the group with a high educational level, because the obesity prevalence between individuals with middle and high levels of education was similar during the study period and to obtain comparably large groups.
Men aged 75 years and older with a high or middle educational status had the highest AC (16.2%). Among the women, the oldest age group with a low level of education had the highest increase in the prevalence of obesity (14.2%). The AF was highest for men in the oldest age group with a high or middle educational level (table 3). --- Discussion Obesity decreased slightly between 1973 and 1983 among the different subgroups in Austria, but an increase was observed for both sexes between 1983 and 2007, with a peak between 1991 and 2007. From 2007 to 2014, the prevalence did not significantly change for women, but increased for men. In 2014, we observed the highest prevalence of obesity in older age groups, with the highest rate seen in men aged 55-74 years. Subjects with a low educational level also showed the highest obesity prevalence, with somewhat higher rates seen for men with low levels of education than for women. A high AC in obesity prevalence was found for the oldest women, for men aged 55-74 years and for subjects with a low level of education. Only a small increase in obesity prevalence was found among highly educated women. The most prominent AC in the prevalence of obesity between 1983 and 2014 was observed for men in the oldest age group with high or middle levels of education, while the lowest AC was found for the youngest women with a high educational level. The youngest and oldest men with high/middle educational status showed the greatest AFs. The magnitude of educational inequalities related to obesity was higher among women than men.
--- Comparison with the literature Among Austrian adults, the age-standardized prevalence of obesity was 15.8% in 2014. Compared with the results of European studies carried out in Germany, 13 Switzerland, 11 Norway 14 and Poland, 16 our estimated prevalence of obesity is low. Population-based studies for European countries 23 or worldwide 12,17 also reported higher rates of obesity for most developed countries. It was observed that the percentages varied widely from country to country, with the highest prevalence seen in North-eastern European countries and the lowest prevalence in Western and Southern European countries, especially Mediterranean countries. 15,23 Extremely high obesity prevalence has been reported from the USA. 18,19,31 In 2016, the prevalence of obesity was 39.9% for US adults. 31 This result is more than twice as high as that obtained in our study. A strong increase in the prevalence of obesity from 1990 on has been reported in other countries, as in our study. 7,8,14,17 In the USA, a similarly striking increase in obesity was cited 10 years earlier. 31,32 High obesity rates are mainly attributed to a reduction in physical activity and to the increased production of low-cost and energy-dense foods. 1 These factors seem to be the universal consequences of industrial development and improved living conditions. 33 The strong increase in obesity prevalence observed in Austria might be due to the higher living standard achieved from the early 1990s onwards. 22 It is interesting that, for the first time, the prevalence of obesity is higher among men than women in Austria. Studies have shown that women usually have higher obesity rates. [12][13][14]18 More recent studies, and also a future prognosis, however, cite higher prevalence for men than for women in Europe. 16,20,23 In England, it is estimated that about 50% of women and 60% of men will be obese by 2050. 20 Poland shows similar trends to Austria.
In 2005, the prevalence of obesity was higher in women (22.3% vs. 20%) in Poland, while in 2014, the rates were higher for men (24.2% vs. 23.4%). The proportions of obesity in men have also been growing more rapidly in the last decades. 22,34 The European Male Ageing Study concluded that weight and BMI are rising especially among men in countries that are undergoing socioeconomic and political transitions. 15 It seems that public health strategies in the past were better received by women than by men. Further reasons for these trends could be that women paid more attention to maintaining a healthy diet and exercised more, because social norms have increased women's awareness of their appearance. 22 We assume that this is especially true for highly educated women in Austria and, thus, they still had the lowest obesity prevalence and showed the lowest increase in obesity prevalence. This may be the reason that the prevalence of obesity remained stable among women during recent surveys, but increased among men in Austria between 2007 and 2014. Regarding age, we observed the highest obesity prevalence among subjects aged 55-74 years old. This outcome is in accordance with that of similar studies. 10,16,19,23 Gallus et al. 23 reported that European adults aged 65 years and older showed the highest obesity prevalence. In our study, men aged 55-74 years old showed the highest prevalence among all age groups for obesity and a high AC for obesity. This differed from former trend estimations for obesity in Austria. 22 It was also noticeable that women aged 75 years and older suffered more often from obesity than men in the same age group, and that the oldest women showed a higher increase in obesity prevalence during the study period compared with men in the same age group.
A higher number of obese, older subjects represents a major problem for Austria, because obesity among older subjects is associated with higher care needs and, correspondingly, more resources are needed in health care practice. 35 Our outcomes confirm that adults with the highest educational level have the lowest prevalence of obesity in high-income countries. 23,[36][37][38] The increase in the prevalence of obesity was also lowest among highly educated subjects compared with those with a middle or low educational level. As seen in other studies, the inverse relationship between the educational level and obesity was observed to be more pronounced for women. 23,36 The education-based RII was still higher for women than for men, which is in accordance with results cited in former publications. 24,36 Devaux and Sassi 36 investigated social inequality and obesity in different countries. The greatest educational inequality related to obesity was found in France, Sweden, Austria, Spain and Italy. In our study, the educational inequality related to obesity tended to increase in men, although this has stabilized since 2007. The increase in the prevalence of obesity among Austrian adults has led to an increase in relative inequalities in men. However, in general, the results of the RII were quite variable in our study. This can be attributed to the fact that the proportion of subjects with a high educational level was low, especially among the women. The oldest men with high and middle educational levels showed the highest increase in the prevalence of obesity compared with all other investigated subgroups. This outcome was unexpected and not in accordance with the results of former trend analyses for obesity in Austria; 22 therefore, it should be investigated more thoroughly in the future. Men aged 55-74 years old with a low educational level also showed a high AC.
Reasons for this result could be that retired men with a low educational level had professions in the past in which they had to work hard and were more physically active. During their retirement, these people gained weight because their level of physical activity decreased while their energy intake remained the same. This result more strongly affects men in Austria. --- Limitations One limitation of the study was that no measured data were available. However, we tried to compensate for this by correcting the self-reported BMI. 39 Another limitation was that socioeconomic status was represented only by the educational level, as other variables that could be used to measure socioeconomic status, e.g. income, were not available for most surveys. Investigating different income groups could have resulted in more stable values for the RII. Furthermore, data for educational status were only available from 1983 onwards. It would have also been interesting to investigate other sociodemographic determinants of obesity. Personal interviews were held in the first five surveys, while the last survey was conducted via telephone, which may influence the comparability of the results because of the different modes of data collection. However, computer-assisted telephone interviews and face-to-face interviews have similar measurement properties. 40 --- Implications and conclusions Regular monitoring of obesity makes it possible to identify and follow vulnerable groups. Monitoring the obesity prevalence is also essential to study the effectiveness of health-promoting policies at national and local levels. This study showed that it is important to examine trends in subpopulations to determine risk groups for obesity. Subjects aged 55 years and older with a low educational level and men in the oldest age group with a middle or high educational level represented the greatest risk group for obesity in Austria.
Long-term preventive strategies to control the obesity epidemic should address middle-aged individuals before they become obese. This is important because a high obesity prevalence in a nation with an aging population threatens to overload the resources of the health systems. In the future, it will also be necessary to conduct research in Austria on caring for obese subjects. Furthermore, there is a need for prospective population-based studies that are designed to investigate cultural determinants and lifestyle factors related to weight change. This could be beneficial for the assessment of causality and for constructing effective prevention strategies for Austria. Due to the differences in the obesity prevalence and trends observed between the sexes, public health strategies should be developed that adopt a gender perspective. The promotion of physical activity for subjects with a low socioeconomic status, especially for men, was found to prevent obesity and reduce health inequalities. 24 The reduction of material inequality would be an important contribution in the fight against obesity as well 41 and, in general, would help narrow social class inequalities in health care. Subjects need to have access to a healthy lifestyle through the sustained implementation of evidence- and population-based strategies that make regular physical activity and healthy nutrition available and affordable, for example, by taxing high-caloric soft drinks. 1 Public health advocates should lobby for better nutritional standards for meal consumption, before the obesity prevalence becomes even more acute. Our findings could help guide the development of effective health and social policies and programmes aimed at reducing the burden of obesity in Austria. --- Introduction Over the last decades, a worldwide increase in the prevalence of childhood and adult overweight and obesity has emerged, posing a 'major global health challenge'.
1 Especially childhood obesity is of great concern because it is a predictor of adult obesity, 2 and also because it is associated with psychological and physical problems, such as high blood pressure, type 2 diabetes and high cholesterol. 3 In major cities of the Netherlands, the trend of increasing overweight and obesity rates has recently been shown to level off and even decline in certain groups. However, between ethnic groups in the Netherlands the prevalence of overweight and obesity still differs considerably, with Dutch children showing the lowest prevalence of overweight including obesity (11.4%) and Turkish children the highest (32.4%). 4 Parental socioeconomic status (SES) was repeatedly found to be negatively associated with the presence of overweight and obesity in children. 5,6 In the Netherlands, children from non-western ethnic groups generally have a low SES. 7 This raises the question of whether overweight and obesity can be explained by SES alone or whether ethnicity influences overweight and obesity prevalence independently of SES. A Dutch study of overweight and obesity in Dutch, Turkish and Moroccan adults showed that individual SES alone did not eliminate the differences between the ethnic groups. However, --- Conflicts of interest: None declared. --- Key points A trend analysis over four decades of the prevalence of obesity was made, showing a constant increase in obesity among men, who show higher prevalence rates than women for the first time. Subjects aged 55-74 years old with a low educational level and men aged 75 years and older with a high/middle educational level are at the greatest risk of becoming obese in Austria. Public health strategies should be developed from a gender perspective, due to differences in the prevalence and trends in obesity and educational inequalities related to obesity.
Background: The examination of obesity trends is important to plan public health interventions specific to target groups. We investigated long-term trends of obesity for the Austrian adult population between 1973 and 2014 according to their sex, age and education, and the magnitude of educational inequalities. Methods: Data were derived from six national, representative, cross-sectional interview surveys (N = 194 030). Data correction factors for self-reported body mass index (BMI) were applied. Obesity was defined as BMI ≥ 30 kg/m². Absolute changes (ACs) and aetiologic fractions (AFs) were calculated to identify trends in the obesity prevalence. To measure the extent of social inequality, the relative index of inequality was computed based on educational levels. Results: In 2014, the age-adjusted prevalence of obesity was 14.6% (95%CI: 14.0-15.3) for women and 16.8% (95%CI: 16.1-17.9) for men. Obesity was most prevalent among subjects aged 55-74 years and those with low educational status. The AC in the obesity prevalence during the study period was highest for men aged 75 years and older with high/middle educational levels (16.2%) and also high for subjects aged 55 years and older with low educational levels. The greatest dynamics for obesity were observed among the oldest men with high/middle educational levels. Educational inequalities for obesity were higher among women, but increased only among men. Conclusions: For the first time since 1973, the prevalence of obesity was observed to be higher for men than for women in Austria. Men showed the greatest increase in prevalence and risk for obesity during the study period. Further studies are needed to determine the drivers behind these trends.
Study question: What are the factors that influence elective egg freezers' (EEF) disposition decisions towards their surplus frozen oocytes? Summary answer: Achieving motherhood or dealing with grief if motherhood was not achieved, the complexities of donating to others, and a lack of information and professional advice. What is known already: Most women who undergo EEF do not use their oocytes. Consequently, there is an abundant, but unquantified, number of women with surplus oocytes in storage globally. Many women are deciding about the disposition of their surplus oocytes due to storage limits in countries such as Australia, Belgium, Finland and Taiwan. However, no studies have examined the factors that influence EEF oocyte disposition decisions. Research exploring factors relevant to embryo disposition and planned oocyte donation may not be relevant. Consequently, women are making the challenging and stressful decision regarding the fate of their oocytes with limited research available to support them. Study design, size, duration: Thirty-one structured interviews took place in Australia between October 2021 and March 2022. Recruitment was via: Facebook (paid advertising, posts on relevant groups and organisation sites), newsletters and emails from universities and professional organizations, emails to eligible patients from an IVF clinic, and snowballing. A reflexive thematic approach was planned; data collection and analysis occurred concurrently. Recruitment continued until the process of analysis did not identify any new themes and saturation had been reached. Participants/materials, setting, methods: Eligible participants (EEF with surplus frozen eggs, aged 18+, living in Australia) were interviewed and included women who had previously made a disposition decision (n = 7), were currently deciding (n = 6), or who had not yet considered the decision (n = 18).
Interviews took place on recorded teleconference, were transcribed verbatim and anonymised. Transcripts were iteratively coded via NVivo and analysed, and themes developed inductively. The researcher reflected on their subjectivity with co-authors to ensure accuracy and clarity of data interpretation. Main results and the role of chance: Six inter-related themes were identified related to the decision-making process: 'decisions are dynamic'; 'triggers for the final decision'; 'achieving or not achieving motherhood'; 'conceptualisation of oocytes'; 'the impacts of egg donation on others'; and 'external factors affecting the final disposition outcome'. All women reported a type of trigger 'event' for making a final decision (e.g. completing their family). Women who achieved motherhood were more open to donating their oocytes to others, wanting to share the joy of motherhood, but were concerned about the implications for their child (e.g. donor-conceived half-siblings) and also felt responsibility for potential donor children. Women who did not achieve motherhood were less likely to donate to others due to the grief of not becoming a mother, often feeling alone, misunderstood, and unsupported. Reclaiming oocytes (e.g. taking them home) and closure ceremonies helped some women process their grief. Donating to research was viewed as an altruistic option as oocytes would not be wasted and did not have the "complication" of a genetically linked child. Decisions were often made based on misinformation and a lack of knowledge of the available disposition options and their consequences, with few women seeking professional advice on their decision. Limitations, reasons for caution: Most participants had not considered the decision and their stated intentions may not reflect their final decision. Women who had previously made disposition decisions were difficult to recruit despite comprehensive study advertising. 
Other limitations were the use of convenience sampling and the conduct of interviews via teleconference (due to COVID). Wider implications of the findings: Due to a lack of understanding of the disposition options and their impacts, and women not seeking professional advice, decision support (e.g. counselling, decision aids) is suggested. Counselling should occur at least at the beginning and end of the process, and address disposition options, impacts, grief, and gaining support from others. What is known already: Men have an important role to play in the decision-making process regarding family building. However, research on this topic has historically focused on women. Furthermore, existing research focuses primarily on data from high-income countries with limited perspectives from men from low- and middle-income countries. This study aimed to explore the factors influencing men's attitudes and behaviours regarding family building decisions across low-, middle- and high-income countries. Study design, size, duration: A systematic review was conducted via a search of the PubMed, PsycInfo and Web of Science databases using the following keyword combinations: fertility AND intention OR desire OR pregnancy AND childbearing OR family building OR reproductive decision making AND attitudes OR motivations OR desires OR behaviours AND parenthood OR fatherhood OR men. Study designs were either qualitative, quantitative or mixed-methods. Participants/materials, setting, methods: Studies were included if they examined men's attitudes and behaviours regarding family building decisions, and involved only male participants, or male and female participants if the results for male participants were reported separately. Male participants undergoing fertility treatment, participants with or without children, and homosexual participants were included. Studies from any country, published between 2010 and 2022, and in the English language only were included.
Main results and the role of chance: A comprehensive search yielded 1745 articles; studies were excluded if they involved female participants only, if results were aggregated for studies including male and female participants, or if they involved participants undergoing surrogacy or adoption. As a result, 22 studies were included in this review. From the 22 included studies, 2 main themes were derived: personal and social factors. The personal theme consisted of factors at the individual level related to finance, education, health, age, sexuality, masculinity, knowledge and other personal factors. The social theme related to wider issues, including social pressure, social support and marital status. Across included studies, the most common personal factor influencing men's attitudes and behaviours regarding family building decisions was financial issues, that is, being financially stable/secure. The most common social factor across included studies was support, that is, receiving support from family, society and the workplace. Half of the included studies reported the stability of men's relationship with their partner as a factor that influences their intention for fatherhood. Interestingly, masculinity was a recurring theme, with men reporting fatherhood as being an expression of masculinity and a way to fulfil their masculine roles and identity within their family, society and community. Limitations, reasons for caution: Of the 22 studies included in this review, 8 involved young participants aged 25 years or younger; thus, results obtained from these studies were not representative of the attitudes and behaviours of all adult men regarding family building decisions. Wider implications of the findings: This is the first review to include studies of men from a combination of low-, middle- and high-income countries.
Understanding men's attitudes and behaviours regarding family building decisions can help raise and promote fertility awareness among men, thereby helping men achieve their desired reproductive intentions. Trial registration number: not applicable --- Summary answer: Spanish and U.S. egg donors differ in their desire for anonymity, their awareness of consumer ancestry testing, and the implications of ancestry testing for maintaining anonymity. What is known already: In the literature, many have expressed concern that without the promise of anonymity people would be unwilling to donate eggs and sperm. A related concern is that the rise in consumer ancestry testing will mean the end of anonymous donation and therefore contribute to a reduction in donors. Study design, size, duration: This is a mixed-methods study drawing upon surveys and interviews with oocyte donors in the United States (341) and Spain (126). The study was conducted between 2018 and 2022 and included participants from multiple fertility clinics throughout Spain and the United States. Participants/materials, setting, methods: This is a multi-sited study. Participants include current and former compensated oocyte donors who completed an online REDCap survey. Text boxes were provided in the survey so participants could elaborate where appropriate. A subset of donors (200 U.S. and 76 Spanish) in each location agreed to participate in a semi-structured, open-ended interview with one of the investigators. Interviews were conducted in person or over Zoom in the participant's language of choice. Main results and the role of chance: Of 341 U.S. respondents, nearly two-thirds (214, 63%) preferred open or known donation rather than anonymous donation. Of the Spanish respondents, 38% stated they would prefer non-anonymous donation, 50% were unsure, and 11% stated they would not want non-anonymous donation.
Both groups, 178 U.S. (52%) and 57 Spanish (51.4%) donors, expressed a comparable desire to someday meet the people born from their eggs or an openness to contact to share medical information. Of the U.S. donors, only 17 (5%) expressed a desire for no future contact with the people born from their donations, while 9 (8.7%) of the Spanish donors expressed a desire for no future contact. U.S. donors were almost unanimously aware of the existence of consumer ancestry testing, and 66 (19%) had attempted to use such tests either to find their donor-conceived offspring or to make themselves available to be found. Among 111 Spanish respondents, 24 (21.6%) were not aware that consumer ancestry testing exists or that it could be used to find them, but 57 (51.35%) expressed a desire to be found if it were to become more widely used in Spain. Findings indicate that egg donors in both locations are mostly open to the idea of non-anonymous donation. Limitations, reasons for caution: Study limitations include a potential bias in the survey sample, as it is possible that people who participate in research are more open than those who do not. We attempted to mitigate this possibility by recruiting participants from a wide range of clinics, practices, and other sources. Wider implications of the findings: Findings indicate that concerns surrounding the impact of consumer ancestry testing and the loss of anonymity for donors are overestimated. While there are cultural differences surrounding donation in the U.S. and Spain, assumptions surrounding oocyte donors' desires for anonymity are not well aligned with donor sentiments in either location. Trial registration number: not applicable Abstract citation ID: dead093.900 P-566 Is the seminal oxidative stress the mirror of psychological stress perceived by infertile men? Study question: To assess the association between anxiety and depression scores and the levels of oxidative stress in the seminal plasma of Tunisian infertile men.
Summary answer: Depression in hypofertile men is associated with higher levels of catalase in seminal plasma and thus with oxidative stress. What is known already: Over the last decades, knowledge concerning the link between psychological and oxidative stress in infertile men has grown considerably. In a limited number of studies aiming to elucidate the psychological aspect of male infertility, data indicating an increased incidence of depression and anxiety have been reported. It has also been reported that anxiety and depression may trigger the production of reactive free oxygen radicals, leading to disruption of the balance between free radicals and antioxidants in semen. Study design, size, duration: This was a cross-sectional study performed in the Laboratory of Cytogenetics and Reproductive Biology of Fattouma Bourguiba University Teaching Hospital (Monastir, Tunisia). A total of 282 patients were assessed for levels of anxiety and depression and evaluated for
European countries. A thematic analysis was performed using Atlas.ti software. The study used a purposive sampling technique in order to capture the heterogeneity of young participants (gender, age, residence, marital status/relationship, sexual orientation, education and religion). Main results and the role of chance: Young adults perceive infertility as a topic that is not discussed much in public. The individuals affected by it tend to keep it private and are reluctant to discuss it within their social environment, which contributes to the taboo of infertility and may limit access to MAR techniques. Despite this, many individuals, male and female, face infertility problems, including in these countries. In all four countries, young people agree that infertility imposes great pressure on both males and females. In certain countries, religion affects the use of MAR techniques, whereas LGBT people face stigmatization while using MAR techniques. Young interviewees reported general knowledge about MAR treatments and, specifically, certain techniques they are familiar with, such as in vitro fertilization or artificial insemination. In addition, surrogacy was a process that many participants were familiar with. However, all young interviewed participants claim that more information about MAR is needed and that they are not confident about where they should search for it. Limitations, reasons for caution: This study is the first of its kind in the MAR research literature and its results are useful for policy-makers dealing with (in)fertility. However, information provided by the young participants in these four countries serves only as an overview of gaps and concerns about MAR techniques. Wider implications of the findings: The results of this study are used to develop National Guidelines aimed at policy-makers and MAR clinics to improve information about infertility among young people.
more symptoms of depression, are more likely to seek inpatient treatment for emotional disturbances and report more suicide attempts than their heterosexual peers (Remafedi, 2002; Silenzio, Pena, Duberstein, Cerel, & Knox, 2007). Homeless G/B young people often lead highly chaotic and dysfunctional lives, and are similarly isolated from school and community networks where they might find supportive adults and peers outside of their family. These young adults may suffer from exposure to high levels of family disorganization, ineffective parenting, and intolerable levels of maltreatment (Paradise et al., 2001; Tyler, 2008). Problems at home, such as interfamily conflict, poor communication, dysfunctional relationships, and physical/sexual abuse or neglect, are predictive of runaway episodes (Baker, McKay, Hans, Schlange, & Auville, 2003) and symptoms of anxiety and depression (Whitbeck, Hoyt, & Bao, 2000). These problems may stem from, or be exacerbated by, conflict related to sexual identity. Ryan and colleagues (2009) found that sexual identity conflict was the primary cause of G/B young people leaving or being ejected from their home and that family rejection on the basis of sexual identity was strongly associated with a number of negative health outcomes, including a six-fold increase in depression (Ryan, Huebner, Diaz, & Sanchez, 2009). Family rejection and social stigma can also result in internalized homophobia, which can contribute to increased depression and anxiety among G/B individuals (Igartua, Gill, & Montoro, 2003). G/B young people who experience family rejection on the basis of sexual identity are more than three times as likely to use illegal drugs (Ryan et al., 2009) compared to those not experiencing rejection.
Moreover, homeless G/B young adults reporting family rejection during adolescence are over eight times more likely to report suicide attempts and six times more likely to report high levels of depression than peers with a strong family support system (Ryan et al., 2009). For homeless G/B young adults, the compounded stressors of being homeless and part of a sexual minority may produce emotional distress and an overwhelming sense of alienation from mainstream society (Rosario, Schrimshaw, Hunter, & Gwadz, 2002). Among G/B youth and young adults, the link between depression and alcohol and drug dependency is well established (Rohde, Noell, Ochs, & Seeley, 2001). Alcohol and other substance dependence is higher among G/B young adults than among their heterosexual counterparts (King et al., 2008). In fact, homeless G/B youth and young adults are also more likely to use "hard drugs" such as amphetamines than their heterosexual peers (Noell & Ochs, 2001). In this study, the Comprehensive Health Seeking and Coping Paradigm (CHSCP; Nyamathi, 1989) served as the theoretical framework. This framework, which originated from the Stress and Coping Model (Lazarus & Folkman, 1984) and the Health Seeking Paradigm (Schlotfeldt, 1981), has been applied to investigations focusing on understanding HIV, hepatitis and TB risk and protective behaviors and health outcomes among homeless and impoverished adults (Nyamathi, Christiani, Nahid, Gregerson, & Leake, 2006; Nyamathi et al., 2002; 2005; Nyamathi, Dixon, Wiley, Christiani, & Lowe, 2006). Identifying predictors of depressive symptoms will provide valuable information to those engaged in disease prevention and intervention efforts. The CHSCP is composed of a number of variables that guide data collection. These include socio-demographic factors, situational and personal factors, cognitive and social resources and coping responses.
Socio-demographic factors that might be relevant as predictors of depressive symptoms (also referred to as depressed mood in this paper) among G/B young adults include age and education. Situational factors include length of time homeless. Personal factors for this paper incorporate the perception of pain, health status and internalized homophobia. Social factors may include social support, while cognitive factors may include internalized homophobia and knowledge of HIV/AIDS and hepatitis. Coping responses include use of drugs and alcohol. Given the increased vulnerability to depression among homeless and G/B young adults, it is important to explore correlates of depressed mood among those who experience the compound stigma of being homeless and gay or bisexual. Guided by the CHSCP, this paper describes the socio-demographic, personal, cognitive, social and coping response correlates of depressed mood in a sample of homeless, male, G/B young adults in Hollywood, California. --- Methods --- Design Baseline data were collected as part of a randomized clinical trial of 267 stimulant-using gay and bisexual young men (aged 18-46) who were randomized into one of two programs designed to reduce stimulant use. The Human Subjects Protection Committees of the University of California, Los Angeles (UCLA) and the Friends Research Institute (FRI), a community drop-in site for G/B adults, approved this study. --- Sample and Setting The sample consisted of 267 methamphetamine, cocaine and crack-using G/B young adults who frequented a community site in Hollywood, California. Eligibility criteria included: a) homelessness; b) gay or bisexual identity; c) age 18-46; d) stimulant use (methamphetamine and/or cocaine/crack use) within the last three months; and e) no self-reported participation in drug treatment in the previous 30 days. Urine testing was used to validate recent (within the previous 72 hours) stimulant use at screening.
If the urine test was negative, hair analysis was conducted, which could detect stimulant use within the previous 3 months. A homeless person was defined as any individual who spent the previous night in a public or private shelter, or on the streets (Necessary Relief: The Stewart B. McKinney Homeless Assistance Act, 1988). In total, 564 men were screened, of whom 267 met the eligibility criteria and were enrolled into the study. The 297 individuals who were not enrolled were excluded for one or more of the following reasons: a negative hair test result, not identifying as gay or bisexual, being over the age limit, no stimulant use in the last three months, not being homeless, or participation in drug treatment in the last 30 days. --- Procedure Participants enrolled as part of a clinical trial designed to reduce stimulant use and promote hepatitis/HIV prevention. The research staff was trained extensively prior to the onset of the study by the principal investigator, co-investigators and project director. Potential participants were a community-based sample recruited through current or former participants, through in-service presentations at community-based organizations that serve the targeted population, or by responding to a flyer distributed in the community. The research staff reviewed the informed consent form in a private location with potential participants who were interested in the study and administered a short screening assessment to confirm eligibility. The screener took approximately two minutes to complete and assessed demographic characteristics, homeless status, and substance use and dependency using the TCU Drug Screener (Simpson & Chatham, 1995). Eligible participants were asked to provide a blood sample to be tested for hepatitis A virus (HAV), hepatitis B virus (HBV) and hepatitis C virus (HCV). The participants were asked to return after two days to receive their hepatitis test results from the study nurse, after a second informed consent for the full study was reviewed and signed.
Once data on HBV status were collected, and data relating to age (18-29 vs 30-46), race (White vs non-White) and drug status (abuse vs dependence) were entered, a computerized randomization table assigned enrolled participants to one of two treatment arms. The baseline assessment was administered by the research staff. Participants were compensated $10 for completing the brief screening questionnaire and $20 for completing the baseline assessment. --- Measures Socio-Demographic Information: A structured questionnaire was used to collect socio-demographic information including age, birthdate, ethnicity, education, employment, relationship status, and history of homelessness. Health Status: A self-reported one-item measure of general health was used, ranging from excellent to poor and dichotomized as fair/poor vs. good/excellent, along with a measure of bodily pain in the previous 4 months dichotomized as severe/very severe vs. none/very mild/mild/moderate (Stewart, Hays, & Ware, 1988). Social Support: A 6-item scale used in the RAND Medical Outcomes Study (Sherbourne & Stewart, 1991) was used to measure social support. The items elicit information about how often respondents had friends, family or partners available to provide them love and affection, help with chores, etc., on a 5-point Likert scale ranging from "none of the time" (1) to "all of the time" (5). The instrument has demonstrated high convergent and discriminant validity and internal consistency (Sherbourne & Stewart, 1991). The Cronbach's alpha for this sample population was .88. Participants were considered as having received no social support if they answered "none of the time" for all 6 social support items. Social support was thus dichotomized as "None" versus "Any", as no social support was a significant factor for depressed mood in the preliminary analysis. Hepatitis B Knowledge: A modified 17-item instrument was used in a prior hepatitis B study (A.
Nyamathi et al., 2010) to measure knowledge of and attitudes toward hepatitis B. Items were measured on a five-point scale ranging from "definitely true" (1) to "don't know" (5). Cronbach's reliability coefficient for the instrument in this population was .81 for the knowledge subscale and .92 for the attitude subscale. HIV/AIDS Knowledge: A modified 21-item Centers for Disease Control (CDC) scale was used to measure knowledge of and attitudes toward HIV/AIDS (NCHS, 1989). The range was 0-21. Modifications to the CDC instrument have been detailed elsewhere (Leake, Nyamathi, & Gelberg, 1997). Internal consistency reliability for the overall HIV knowledge and attitude scale was .86 in this homeless population. Drug Use and Related Problems: The Addiction Severity Index (ASI; McLellan et al., 1992), a standardized clinical interview, assessed clients' self-reported substance use. The authors report excellent inter-rater and test-retest reliability, as well as discriminant and concurrent validity. Substance use was self-reported for the previous 30 days. A slightly modified version of the ASI has been used by Reback et al. (2010) with similar populations. Sexual Behavior: The Behavioral Questionnaire-Amphetamine (BQ-A; Chesney, Barrett, & Stall, 1998) was used to assess substance use in relation to sexual behavior. This scale has been validated with methamphetamine-using populations (Twitchell, Huber, Reback, & Shoptaw, 2002) and assesses specific sexual behaviors alone and accompanying substance use, both with primary and other partners, relating to unprotected insertive and receptive anal sex, as well as the number of sexual partners over the previous 30 days. The BQ-A has excellent overall reliability of .92 (Veniegas et al., 2002). In addition, participants were asked if current or past sexual partners had injected drugs, traded sex for money or drugs, or had sex while incarcerated.
Internalized Homophobia: Herek's (1998) Attitudes Toward Gay Men scale was used to assess internalized homophobia. The 5-item assessment elicits responses to questions on feelings about being a man who has sex with men. Answers were scored on a 5-point Likert scale from "disagree strongly" (1) to "agree strongly" (5). The items were summed to yield a scale score of 5 to 25, with higher scores indicating a higher degree of internalized homophobia and lower scores indicating greater acceptance of gay men. A man was considered to have a high level of internalized homophobia if his summary score was over 15, which indicated that on average he "agreed" or "agreed strongly" with the five internalized homophobia questions. Depressed Mood: A short-form version of the Center for Epidemiologic Studies-Depression Scale (CES-D) (Radloff, 1977) was used to assess depressed mood, a term used in place of depressive symptoms in this study. The short-form CES-D is a 10-item scale that measures depressive symptoms on a 4-point continuum. The CES-D has well-established reliability and validity. Scores on the CES-D range from 0-30, with higher scores indicating greater depressive symptomatology. Internal consistency reliability for this scale was .82 in this homeless population. For purposes of this study, depressed mood was defined as having a score of at least 10 on this 10-item CES-D scale. This cut-point of 10 has been used to identify persons in need of psychiatric evaluation for depression in previous work (Andresen, Malmgren, Carter, & Patrick, 1994). --- Data Analysis Summary statistics were used to describe participants' demographic and clinical characteristics as well as other independent variables. Due to the large number of variables collected from the survey, a model selection technique was applied to study the possible predictors of depression among the homeless G/B men.
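The scale-scoring rules described in the Measures section can be sketched in a few lines of code. This is a minimal illustration only: the function names and item values below are hypothetical and are not drawn from the study data.

```python
# Minimal sketch of the scoring rules described in the Measures section.
# Function names and item values are hypothetical, not study data.

def score_internalized_homophobia(items):
    """Sum five 5-point Likert items (1-5); totals range from 5 to 25.
    A summary score over 15 is classified as high internalized homophobia."""
    assert len(items) == 5 and all(1 <= v <= 5 for v in items)
    total = sum(items)
    return total, total > 15

def score_cesd_short_form(items):
    """Sum ten short-form CES-D items scored 0-3; totals range from 0 to 30.
    A score of at least 10 is classified as depressed mood."""
    assert len(items) == 10 and all(0 <= v <= 3 for v in items)
    total = sum(items)
    return total, total >= 10

print(score_internalized_homophobia([4, 4, 3, 4, 4]))         # (19, True)
print(score_cesd_short_form([1, 2, 0, 1, 1, 2, 1, 0, 1, 2]))  # (11, True)
```

Both measures reduce to a simple item sum compared against a fixed cut-point, which is why the study can dichotomize them for the regression analyses.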
Chi-square and t-tests were carried out to examine the bivariate correlates of depression. Stepwise multiple logistic regression analysis was then used to create a model of depression, including variables such as medical visits, HIV knowledge, education, general health status, body pain, homeless status, internalized homophobia and social support, which were associated with depression at the 0.15 level in the preliminary analyses. This liberal threshold was justified by the fact that, frequently, two variables that are not significantly associated based on zero-order correlations will be significantly associated when another variable is controlled. While the 0.15 threshold allowed important correlates to be considered, stepwise techniques were used in the final model to reduce the chance of spurious results. Covariates that were significant at the 0.05 level were retained in the final model. Multicollinearity was assessed and model fit was examined with the Hosmer-Lemeshow test. All statistical analyses were conducted using SAS, version 9.1. --- Results --- Sociodemographic Characteristics This G/B young adult male population reported an average age of about 34 years (S.D. 8; range 18 to 46), was predominantly high school educated (74%) and was infrequently employed (8%) (see Table 1). The majority of the participants were White and reported being homeless for the entirety of the previous four months. Approximately one in every five of the participants met the criterion for internalized homophobia. Almost two-thirds of the participants (61.4%) reported a lifetime history of injecting recreational drugs. Of these, approximately one-third (31.4%) reported injecting heroin, other opiates or pain-relieving drugs. Approximately three-fifths (60%) reported that they had been given information about hepatitis prior to participating in the study. Just over half (51%) of the participants were infected with HBV, 29% were infected with HCV, and 17% were infected with HIV.
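The two-stage variable selection described in the Data Analysis section (a liberal 0.15 bivariate screen to decide which covariates enter, then retention at 0.05 after stepwise fitting) can be illustrated schematically. In this sketch, all variable names and p-values are invented placeholders, not results from the study, and the stepwise regression itself is abstracted to a set of already-computed adjusted p-values.

```python
# Schematic illustration of the two-stage screening described in the
# Data Analysis section; all p-values below are invented placeholders.

bivariate_p = {                 # p-values from preliminary bivariate tests
    "no_social_support": 0.001,
    "severe_body_pain": 0.004,
    "internalized_homophobia": 0.015,
    "hiv_knowledge": 0.090,
    "employment": 0.400,        # fails the 0.15 screen, never enters the model
}

# Stage 1: a liberal 0.15 threshold decides which covariates enter the model.
candidates = [v for v, p in bivariate_p.items() if p < 0.15]

# Stage 2: after stepwise logistic regression, only covariates significant
# at 0.05 are retained (adjusted p-values are again placeholders).
adjusted_p = {"no_social_support": 0.002, "severe_body_pain": 0.010,
              "internalized_homophobia": 0.030, "hiv_knowledge": 0.210}
final_model = [v for v in candidates if adjusted_p[v] < 0.05]
print(final_model)
```

This is only a conceptual outline of the selection logic; the actual analysis used stepwise logistic regression in SAS, with multicollinearity checks and the Hosmer-Lemeshow goodness-of-fit test.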
Co-infection rates between HIV and HCV were also high, at 6%. Few participants (8%) reported having no social support. Depressed mood was commonly reported in this population (62%). The level of HIV knowledge was moderate (M = 16, SD = 3.9), whereas HBV knowledge was lower (M = 9.1, SD = 5.1). In general, one-fourth or fewer respondents reported good to excellent health (24%), and participants reported a mean of 1.2 visits for medical problems within the previous four months. --- Associations with Depressed Mood Table 2 reports unadjusted correlates of depressed mood. Age, employment, partner status and number of children were not found to be significantly associated with symptoms of depression. HIV/HBV/HCV status and injection of recreational drugs were also not associated with depressive symptoms. However, fewer visits to health care providers and being less knowledgeable about HIV were important correlates. Internalized homophobia was a significant correlate as well (p = 0.015). In addition, not having graduated from high school and not having social support also correlated with symptoms of depression. Additional significant correlates of depressed mood included fair/poor health status, having severe/very severe bodily pain, and having ever injected heroin, opiates or painkillers. In terms of environmental and psychosocial factors, having been homeless all the time for the previous four months was positively correlated with depressed mood. --- Multivariate Results The adjusted odds of reporting clinically relevant symptoms of depression were almost 11 times greater for persons who reported no social support and almost six times greater for those who reported severe or very severe body pain (Table 3). Being homeless all the time in the previous 4 months was positively associated with high depressive mood scores.
Also, those who reported fair/poor health status and those whose responses indicated high levels of internalized homophobia were more likely to report high levels of depressed mood. In addition, those who had ever injected heroin, opiates, or painkillers were about two times more likely to report significant depressive complaints than those who had not injected. --- Discussion Findings from this study revealed that G/B homeless young adult men who lacked social support were more likely to report high levels of depressed mood, as were those who reported elevated levels of internalized homophobia. Further, participants who reported a history of injecting heroin, opiates, or painkillers and those homeless throughout the previous four months were more likely to have high levels of depressed mood. Understanding the correlates of depressed mood among G/B young adult men who are homeless can help service providers design more targeted treatment plans and provide more appropriate referrals to ancillary services. A negative impact of lack of social support on the emotional state of individuals has been found previously, as absence of social support has been associated with more depressive symptoms among homeless young adults (Stein, Dixon, & Nyamathi, 2008), G/B youth (Doty, Willoughby, Lindahl, & Malik, 2010) and other populations at high risk for poor mental health, such as methadone-maintained adults (Nyamathi, Hudson, Greengold & Leake, in press) and parolees (Marlow & Chesla, 2009; Nyamathi et al., 2011). In a recent qualitative study, Hudson et al. (2010) found that homeless young adults craved support from family, friends, and homeless peers and were constantly subjected to rejection and discrimination from passersby and law enforcement.
It is very likely that the homeless G/B young adults in this study had experienced similar social isolation combined with social stigma, and perhaps this is magnified when internalized homophobia is taken into account. Despite increasing mainstream exposure to homosexuality, G/B young adults, particularly when homeless, find themselves alone and unable to share feelings when subjected to social taunts and attacks (Almeida, Johnson, Corliss, Molnar, & Azrael, 2009). Thus, by understanding the importance of social support in relation to depression, healthcare and service providers may need to consider avenues where social interaction can occur among G/B young people, such as group education activities. While the participants were not asked directly about the social stigma, prejudice and discrimination often associated with minority sexual orientation, we did find that reports of high levels of internalized homophobia corresponded with other negative psychosocial complaints. This is consistent with similar findings from a report of older, urban, very poor MSM in Los Angeles (Shoptaw et al., 2009). We believe there is a link between depressed mood and internalized homophobia among G/B young men, as having higher levels of anxiety and depression, feeling downhearted and blue, and having high levels of nervousness may lead G/B men to have negative attitudes towards their own homosexuality. Family and societal stigma may also contribute to depression in G/B men, which could lead to internalized homophobia (Dudley, Rostosky, Korfhage, & Zimmerman, 2004). Study findings add to the literature specific to G/B homeless populations who are active stimulant users. While further research needs to be done with this specific subgroup, other researchers have found elevated rates of depression symptoms and diagnoses in G/B people (Cochran, Mays, & Sullivan, 2003; de Graaf, Sandfort, & ten Have, 2006; King et al., 2008; Mays & Cochran, 2001; McCabe, Bostwick, Hughes, West, & Boyd, 2010).
Implications again exist for service providers to maintain G/B-friendly drop-in sites where young populations can gather and socialize and have the freedom to express themselves without concern about stigma or acceptance. Findings demonstrated that G/B young homeless men who reported injecting heroin, opiates or painkillers were more likely to report experiencing a depressed mood. In previous work with a similar population, 75% met the Beck Depression Inventory criteria for mood disorder and 33% met criteria for major depressive disorder; however, amphetamine/methamphetamine injection was significantly associated with depression rather than opioid injection (Reback, Kamien, & Amass, 2007). The relationship between injecting opioids and depression has also been identified in other populations, such as needle exchange clients and older adults (Rosen, Morse, & Reynolds, 2011; Volkow, 2004). Injection use often reflects an advanced state of drug dependency (Marshal, Friedman, Stall, & Thompson, 2009), which may be more emotionally distressing among G/B young adults. Furthermore, the fact that homelessness was associated with depression is not surprising, as homelessness represents a state characterized by a confluence of stressors. Homeless G/B young adults may well fear for their safety, not know where their next meal is coming from, and be exposed to the elements. Clearly, they are also more vulnerable to violence and victimization simply by being more visible (Gwadz et al., 2009). In another study of homeless young adults, mental health issues were the most commonly reported health concern, and some young homeless adults reported using illegal drugs in an attempt to alleviate the symptoms of feeling depressed or hearing voices (Nyamathi et al., 2007).
While a longitudinal study is required to assess the causation of comorbid conditions, other studies have highlighted the link between depression and alcohol and drug dependency (Bazargan-Hejazi, Bazargan, Gaines, & Jemanez, 2008; Chen et al., 2010; Gratzer et al., 2004), and the fact that adolescents with substance abuse and comorbid psychiatric disorders have poorer drug treatment outcomes than youths with only substance abuse disorders (Riggs, 2003). Therefore, ongoing investigation of the causes of depression, and its identification and treatment, can also be considered a tool in the prevention of continued drug and alcohol addiction and dependence. An equally important relationship was identified between the health status of the homeless G/B young adults in this study and depressed mood. As such, depressed mood was associated with having experienced severe body pain within the previous four months and was inversely associated with good to excellent health status. This association between severe bodily pain and depressed mood is novel, as no other studies have found this relationship. This finding helps to advance the understanding of the link between health status, and in particular pain experienced, and level of depressed mood. Our findings also suggest that participants who wanted to get treatment for mental health problems were less likely to report having a depressed mood. Healthcare providers who work with a population that suffers from numerous and often severe physical and mental health problems are a vital link in providing services and treating these health issues to improve the health outcomes of those most vulnerable. Access to care for this population is often challenging. Acting as a link or facilitator to more intensive social and health resources is critical (Christiani, Hudson, Nyamathi, Mutere, & Sweat, 2008).
Traditional barriers to care faced by these young adults include concerns regarding confidentiality, the cost of services, lack of insurance, lack of transportation, cultural and spiritual issues including homophobia and discrimination, distrust of healthcare providers, feeling embarrassed to ask for healthcare, and distrust of social workers and police (Christiani et al., 2008; Solorio et al., 2008). Another study found a marked difference in the amount of respect and consideration homeless people receive from health care delivery systems in comparison to the general population (Martins, 2008), an inequity that has been found to result in homeless persons being less likely to seek health care (Wen, Hudak, & Hwang, 2007). --- Limitations While this study reported unique findings relative to the mental health of G/B young adults, it had several limitations. First, the participants were exclusively homeless G/B young men, so our ability to generalize to older G/B men or to women is limited. Second, all study participants were selected from an area surrounding the research site; whether these participants differ from those farther from this site or in other cities is unknown. Moreover, most of the data are self-reported, and a clinical screener for depression was not used. Finally, it was not possible to assess the direction of influence between mental health and substance use. Thus, longitudinal studies are needed to examine such influences. --- Conclusions This study is one of the first to assess the impact of severe body pain on depressed mood among G/B homeless young adults. These findings advance our understanding of the link between pain experienced and experiencing a depressed mood. Moreover, the desire of many participants to access mental health treatment and its relationship to lower odds of depressed mood provides useful information for practice and the provision of services for these vulnerable young adults.
Future investigations will be critical to prospectively assess the impact of identifying and providing services for stimulant-using homeless G/B young adults who report high depressed mood, in terms of both ongoing mental health and substance use issues. --- Table 2. Bivariate Correlates for Depressed Mood (N = 267)
Homeless gay and bisexual (G/B) men are at elevated risk of suicide attempts and of depressed mood, defined as an elevated level of depressive symptoms. This study describes baseline socio-demographic, cognitive, psychosocial, and health- and drug-related correlates of depressed mood in 267 stimulant-using homeless G/B young men who entered a study designed to reduce drug use. G/B men without social support were 11 times more likely to experience depressed mood than their counterparts who had support, while persons who reported severe body pain were almost 6 times more likely to report depressed mood than those without pain. Other factors that increased the risk of depressed mood included being homeless in the last four months, injecting drugs, reporting poor or fair health status, and high levels of internalized homophobia. This study is one of the first to draw a link between pain experienced and depressed mood in homeless young G/B men. Understanding the correlates of depressed mood among homeless G/B young men can help service providers design more targeted treatment plans and more appropriate referrals to ancillary care services. Keywords: homeless; gay and bisexual; young men; depressed mood; stimulant-using. Elevated levels of mental disorders and suicidality have been found in studies of gay and bisexual (G/B) men. In an extensive review and meta-analysis of publications on mental disorders, suicide, and deliberate self-harm behaviors among G/B men, King and colleagues (2008) found that G/B men had over a two-fold increase in suicide attempts compared to heterosexual men and were at significantly higher risk for depression and anxiety disorders, suicidal ideation, substance misuse, and deliberate self-harm than their heterosexual peers. These trends are important to note; however, among the younger G/B subpopulation other challenges may be significant. The transition from adolescence to adulthood is a difficult time for many young people.
In particular, for G/B youth and young adults, physical, mental and social developmental changes compounded by the emergence of a sexual identity which deviates from the heterosexual norm (Saewyc et al.
Introduction The coronavirus disease (COVID-19) has become a global health concern. The World Health Organization characterized COVID-19 as a pandemic on March 11, 2020 [1]. As of November 13, 2020, the number of global confirmed cases and deaths had risen to over 52,657,000 and 1,291,000, respectively. In Japan, more than 113,600 infections and 1,800 deaths were confirmed [2]. Effective antivirals and vaccines are currently being developed, and no effective therapeutic solution has yet been approved [3,4]. Therefore, protecting citizens from new infections and preventing health care institutions from exhausting their capacity has become extremely important for all countries. Many governments imposed lockdowns and interrupted citizens' economic and social activities during periods of rapid infection increase. These countermeasures were substantial, but their effectiveness depended on the knowledge, attitudes, and preventative practices (KAP) toward COVID-19 among citizens, according to KAP theory and previous experience [5,6]. Meanwhile, these countermeasures dramatically changed citizens' lifestyles and daily behaviors, and changes in mental health, well-being, and the psychological impact of COVID-19 have therefore also been highlighted and investigated [7,8]. For example, large-scale international surveys have analyzed citizens' mental well-being at the onset of the COVID-19 pandemic [9] and evaluated students' well-being [10,11]. In Japan, the government issued several foundational policies for preventing and controlling COVID-19, including an emergency declaration for Tokyo, Kanagawa, Saitama, Chiba, Osaka, Hyogo, and Fukuoka on April 8, 2020, and later for the whole country, which lasted until May 25, 2020 [12]. Although the emergency declaration was significant in controlling the rapid increase of COVID-19 infections, it could not last long owing to socioeconomic losses.
After the declaration was lifted, COVID-19 infections began to increase again, including among the young generation [13]. At the onset of COVID-19, a survey of Japanese citizens' behavioral changes and preparedness against COVID-19 conducted by Muto et al. revealed that being younger was among the factors associated with reluctance to follow prevention measures [14]. Given the widespread awareness that older individuals are at the highest risk of becoming severely ill or dying [15,16], the young Japanese generation may not have paid enough attention to COVID-19. Although later research stated that "COVID-19 does not spare young people" [17,18], there is still a risk that the young generation will not take precautionary measures as necessary. Moreover, young asymptomatic cases may spread the virus to the high-risk population. Therefore, the KAP of the young Japanese generation, which influence compliance with countermeasures, should be evaluated. Japan's 2019 university/junior college entrance rate was 58.1% [19], suggesting that university students account for most young individuals. University students have greater economic independence and autonomy and are less dependent on their parents than high school students or those with a lower level of education [20]. High school students or those with a lower level of education are more likely to follow their parents' lead and thus obey government countermeasures. In contrast, university students can assess their surroundings and act on their own judgment, especially those who live separately from their families. Meanwhile, university students engage in vigorous activities, such as academic activities, sports clubs, and part-time jobs, and for this reason they have more opportunities to come into contact with others.
These aspects increase the importance of analyzing university students' KAP in Japan, as has been done in other countries [21,22]. Knowing the state of KAP toward COVID-19 among university students, and further analyzing the factors behind KAP, can play vital roles in planning and confirming countermeasures for the young generation in COVID-19 prevention. So far, to the best of our knowledge, one survey has investigated Japanese university students' awareness of and actions toward COVID-19 [23], but their knowledge of COVID-19 and the factors influencing KAP have not yet been evaluated. Previous surveys on KAP and well-being in other countries showed that citizens and university students had relatively high levels of knowledge about COVID-19 and displayed positive attitudes and low-risk practices. Differences in gender, age, education level, and major field of study/background affect levels of knowledge, practices such as appropriate hygiene and social-distancing behaviors, and sometimes psychological health (i.e., anxiety, depression, etc.) [5,21,[24][25][26][27]. Whether these factors affect Japanese university students' KAP toward COVID-19 has not been studied. Moreover, because most studies focus only on practice, little is known about what affects the extent of knowledge and attitudes. Such information would be invaluable for improving university students' knowledge and attitudes. Besides the factors listed above, psychological factors are also assumed to be crucial to university students' behaviors and practices. University students are independent of their families and are forming their own identities [28]. They are very concerned about how they present themselves and how people see them. In an emergency such as the COVID-19 pandemic, in which behavior is strictly restrained, one's actions are more often based on one's own viewpoint and those of others.
Self-consciousness [29] is considered a significant determinant of young people's behavior. Another critical factor is personality. It can be assumed that an extroverted nature of actively interacting with others determines university students' range of action. This study conducted an online survey to evaluate KAP toward COVID-19 among university students in Japan. We also examined differences in KAP related to factors such as gender, education level, nationality, residence, respondents' majors, and psychological characteristics (i.e., extroversion and self-consciousness), as well as the relationships between KAP and these factors. --- Materials and methods --- Study design, participants, and data collection This cross-sectional study administered an anonymous survey using a questionnaire constructed with Google Forms. Participants' inclusion criteria were university/college/junior college students who lived in Japan and could read and understand Japanese or English. Given the emergency circumstances of this particular period, we adopted convenience sampling. We distributed our online survey form together with a quick response (QR) code via direct distribution in laboratories, online lectures and lecture homepages, university club mailing lists, and social networks (Facebook, Twitter, Line, and WeChat). The answer procedures, the voluntary nature of participation, and the anonymity declaration were presented, within the explanation of informed consent, on the questionnaire's top page. Respondents completed the questionnaire in their own internet environments. Data were collected from May 22 to July 16, 2020, an 8-week window spanning the lifting of the emergency declaration and beginning at the earliest possible time after ethical review approval was received.
--- Questionnaire design We designed a questionnaire (S1 File in S1 Data) with seven categories: demographic information, knowledge about COVID-19 and virology (Knowledge), approach to and frequency of obtaining information and comprehension level (Information), social behaviors and actions (Behavior), personal-psychological aspects (Psychological aspects), change in awareness before and after the declaration of emergency (Awareness), and concerns about online lectures (Lecture). The questionnaire was designed in Japanese with an English translation. Demographic information included gender, respondents' majors (i.e., "humanities subjects" OR "medicine, dentistry, or pharmaceutical science" OR "science subjects (related to biological subjects)" OR "science subjects (not related to biological subjects)"), grade, age, nationality, residence, and so forth. Awareness included two questions asking whether there was a change in awareness before and after the emergency declaration and the percentage of change. Psychological aspects were designed based on two instruments: extroversion from a short form of the Japanese Big-Five Scale (5-point Likert scale) [30] and the public/private self-consciousness scales from the self-consciousness scale for Japanese (7-point Likert scale) [31]. Knowledge, Information, Behavior, and Lecture were self-designed as 6-point Likert scales (1 meaning "disagree at all" to 6 meaning "strongly agree") following several questionnaires on infectious diseases [32][33][34]. --- Data preprocessing Typos were corrected. Different terminology for the same nationality or residence was unified. Inconsistent responses were corrected in an interpretable direction. For example, when the "self-defense change" was answered as "increased" but its percentage was answered with 0, we corrected the answer to "no change."
To align positive responses with the desired direction and provide a concise view, responses to reverse questions (S1 File in S1 Data) were recoded as (largest scale value + 1) - original value. Scales for each category were validated using factor analysis, and variables for the following analyses were generated. The number of factors was decided by Wayne Velicer's minimum average partial criterion (MAP) or the Bayesian information criterion (BIC). If the number of factors was not 1, Promax rotation was conducted. Questions with loading magnitudes < 0.4 were removed. Variables were created from the subscale scores obtained as the mean of the questions within factors. Finally, 10 continuous variables indicating psychological aspects and KAP (extroversion, public self-consciousness, private self-consciousness, basic knowledge, advanced knowledge, info acquisition, info explanation, info anxiety, self-restraint, and preventative action) with acceptable internal consistencies (Cronbach's α = 0.73-0.92) were obtained (S2 File in S1 Data). For the following subgroup comparisons, based on the recorded response time, we created a binary variable response time ("early": responses obtained in 0-4 weeks vs. "late": responses obtained in 5-8 weeks). Several variables were organized into binary variables: major subject ("bio-backgrounds": "medicine, dentistry, or pharmaceutical science" or "science subjects (related to biological subjects)" selected vs. "non-bio-backgrounds": others), education level ("undergraduate" vs. "graduate or above"), nationality ("Japanese" vs. "others"), and residence ("capital region" vs. "others"). Psychological aspects (extroversion, public self-consciousness, private self-consciousness) were also converted into binary values of "high" or "low." --- Data analysis Descriptive statistics.
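As an illustration, the reverse-coding rule and the internal-consistency check described in the preprocessing steps above can be sketched in Python. The study itself used R (the psych package); the toy data below are invented purely for demonstration.

```python
import numpy as np

def reverse_code(responses, scale_max):
    # Reverse-coded items: (largest scale value + 1) - original value
    return (scale_max + 1) - responses

def cronbach_alpha(items):
    # Cronbach's alpha for an (n_respondents, n_items) array
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Toy 6-point Likert responses: 5 respondents x 3 correlated items
data = np.array([[6, 5, 6],
                 [2, 2, 1],
                 [5, 5, 4],
                 [3, 2, 3],
                 [1, 2, 2]])

print(reverse_code(np.array([1, 6]), scale_max=6))  # a 1 becomes 6, a 6 becomes 1
print(round(cronbach_alpha(data), 2))
```

Subscale scores would then be the row means of the items loading on each retained factor; alphas in the reported 0.73-0.92 range indicate acceptable internal consistency.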
To directly evaluate KAP toward COVID-19 among university students in Japan, the responses to the questions were aggregated, and the extent and magnitude of KAP were confirmed. Subgroup comparisons. To examine which questions were answered differently depending on the factors (e.g., response time, gender, major subject, education level, nationality, residence, and psychological aspects [extroversion, public self-consciousness, and private self-consciousness]), we carried out subgroup comparisons. Comparisons were conducted for all questions to determine whether there were any differences in responses between subgroups. The normality and homoscedasticity of the data were tested by the Shapiro-Wilk and F-tests. If data were non-normally distributed but homoscedastic, differences were tested using the Mann-Whitney U test. The Brunner-Munzel test was used to assess significance between two groups of non-normal, heteroscedastic data. The Bonferroni method was adopted for multiple comparisons. The subgroups were those generated in data preprocessing. Logistic regression. Logistic regression models were constructed to evaluate whether the variables previously reported as important in other countries' surveys and the psychological aspects mentioned above influenced KAP toward COVID-19 among Japanese university students. KAP outcomes were generated from the factor analysis and used as binary outcomes in the logistic regression models. We binarized (high/low or safe/unsafe) basic knowledge, advanced knowledge, info acquisition, info explanation, info anxiety, self-restraint, and preventative action as outcomes. For the former four outcomes, the explanatory variables used were response time, gender, major subject, education level, nationality, residence, and psychological aspects (extroversion, public self-consciousness, and private self-consciousness).
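The subgroup-comparison decision flow described above (Shapiro-Wilk, F-test, then Mann-Whitney U or Brunner-Munzel, with Bonferroni correction) can be sketched as follows. This is an illustrative Python translation on synthetic Likert data, not the authors' R code.

```python
import numpy as np
from scipy import stats

def compare_subgroups(x, y, alpha=0.05):
    # Shapiro-Wilk normality check on each group (Likert responses are
    # typically non-normal, so the nonparametric branch below applies)
    non_normal = (stats.shapiro(x).pvalue <= alpha
                  or stats.shapiro(y).pvalue <= alpha)
    # Two-sided F-test for equality of variances
    f = np.var(x, ddof=1) / np.var(y, ddof=1)
    dfx, dfy = len(x) - 1, len(y) - 1
    p_f = 2 * min(stats.f.cdf(f, dfx, dfy), stats.f.sf(f, dfx, dfy))
    if p_f > alpha:
        # Homoscedastic: Mann-Whitney U test
        name, res = "Mann-Whitney U", stats.mannwhitneyu(x, y, alternative="two-sided")
    else:
        # Heteroscedastic: Brunner-Munzel test
        name, res = "Brunner-Munzel", stats.brunnermunzel(x, y)
    return name, res.pvalue, non_normal

rng = np.random.default_rng(0)
x = rng.integers(1, 7, size=40)  # 6-point Likert responses, subgroup A
y = rng.integers(2, 7, size=40)  # subgroup B, shifted upward
test_used, p, _ = compare_subgroups(x, y)

# Bonferroni correction across m comparisons: multiply p by m, cap at 1
m = 10
p_adj = min(p * m, 1.0)
print(test_used, p, p_adj)
```

The choice between the two nonparametric tests turns only on the variance check; the Brunner-Munzel test is the heteroscedasticity-robust alternative to Mann-Whitney U.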
For the last two outcomes, which indicate actions, the continuous variables basic knowledge, advanced knowledge, info acquisition, info explanation, info anxiety, and self-restraint were further added, as we also wanted to confirm whether knowledge and attitudes toward information influence practices. Multiple linear regression. Multiple linear regression (MLR) models were constructed to further confirm the factors underlying practices. The outcomes were binary in the logistic regressions above, whereas in MLR they were treated as continuous values; the MLR models were therefore quantitative. MLR models for self-restraint and preventative action were constructed. The explanatory variables were the same as above. Determinant factors were selected where Akaike's Information Criterion (AIC) reached its minimum. Data were first normalized, and the variance inflation factor (VIF) was calculated to check for multicollinearity. The regression models' predictive power was assessed by the mean R², calculated from five-fold cross-validation repeated 50 times (a machine-learning approach). A significance level of 0.1 was adopted. Software and package versions. All analyses were conducted using R (version 3.6.2), RStudio (1.2.5033), Mephas Web (2020-01) [35], and the R packages psych (1.9.11), coin (1.3-1), stats (3.6.2), lawstat (3.4), glm (3.6.2), lm (3.6.2), bestglm (0.33), car (3.0-8), and caret (6.0-86). --- Ethics The Ethics Committee of Waseda University and the Graduate School of Pharmaceutical Sciences of Osaka University approved this study (2020-HN005 and Yakuhito2020-5). Informed consent was obtained from each participant on the first page of the questionnaire. --- Results --- Demographic characteristics A total of 362 participants (female 52.8%) were included in the analyses after one participant's response was removed due to an invalid residence input (Table 1). The age of the participants was 20.8 ± 3.5 years.
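The predictive-power check described in the Methods (mean R² from five-fold cross-validation repeated 50 times, plus a VIF screen for multicollinearity) can be sketched in Python on synthetic data; the study itself used R (car, caret), and the predictors and outcome below are stand-ins, not the survey variables.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import RepeatedKFold, cross_val_score

rng = np.random.default_rng(1)

# Synthetic stand-in: 362 "respondents", 3 standardized predictors,
# and a continuous outcome (e.g., a self-restraint subscale score)
X = rng.standard_normal((362, 3))
y = X @ np.array([0.5, 0.3, 0.0]) + 0.8 * rng.standard_normal(362)

def vif(X):
    # Variance inflation factor per column: 1 / (1 - R^2 of that
    # column regressed on the remaining columns)
    out = []
    for j in range(X.shape[1]):
        others = np.delete(X, j, axis=1)
        r2 = LinearRegression().fit(others, X[:, j]).score(others, X[:, j])
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

# Mean R^2 from five-fold cross-validation repeated 50 times
cv = RepeatedKFold(n_splits=5, n_repeats=50, random_state=0)
scores = cross_val_score(LinearRegression(), X, y, scoring="r2", cv=cv)
print(vif(X).round(2), scores.mean().round(2))
```

VIFs near 1 indicate negligible multicollinearity; the cross-validated mean R² plays the same role as the 0.45 and 0.34 values reported for the two MLR models.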
Students whose majors were biology related (bio-backgrounds) accounted for 32.9%, undergraduate students for 79.0%, Japanese students for 83.7%, and residents of the capital region for 35.4%. --- Overall results showed that respondents were inclined toward safety and good health The overall results showed that the respondents were inclined toward safety and good health. Proportions of responses larger than the theoretical median of 3.5 varied from 24.6% to 100% (S3 File in S1 Data). The highest response score was for "I know it's important to avoid enclosed spaces, crowded areas, and close situations." No response was less than the theoretical median, suggesting a deep understanding of the Japanese government's advice on the 3Cs [36] (Fig 1). High scores were also obtained for knowledge or awareness of infection routes, vital signs, the severity of the virus, and preventative measures (e.g., handwashing, mask-wearing). Additionally, 68.5% of respondents showed a positive attitude toward early drug administration (e.g., Avigan) (Fig 1). --- Differences detected among subgroups The significant differences among subgroups were extracted (S4 File in S1 Data). Late responses (5-8 weeks) showed significantly lower medians on questions related to tension about COVID-19, suggesting that, over time, tension eased even as basic knowledge increased. Regarding gender differences, females showed more conservative/safer attitudes than males in their stance on closing bars and going to university. Compared to non-bio-backgrounds, bio-backgrounds showed a higher level of advanced knowledge, as expected. Surprisingly, non-bio-backgrounds scored higher on the opinion "I think that I can naturally heal without medical care such as hospitalization even if the novel coronavirus infects me (reversed)," suggesting a more cautious view of COVID-19 infection.
Similarly, for education level, students at the graduate level or above had significantly more advanced knowledge, a stronger willingness to accept anxiety-inducing news, and slightly higher satisfaction with online lectures and assignments. In terms of nationality, Japanese students had more basic knowledge and were more sensitive to the emergency declaration. In contrast, international students had more advanced knowledge and a stronger willingness to accept anxiety-inducing news and correct information explanations. Regarding residence, students living in the capital region showed stronger self-restraint and acted more safely than others. From the psychological aspects, more active information collection was detected in the high-extroversion and high private self-consciousness groups. --- Factors influencing university students' knowledge levels Logistic regressions were conducted to explore the determinant factors of high basic knowledge and advanced knowledge (Table 2). For basic knowledge, gender, major subject, education level, nationality, residence, extroversion, and private self-consciousness were significant determinant factors (Table 2A). The odds of Japanese students having high basic knowledge were 3.33 times those of international students. The odds ratios (ORs) for extroversion and private self-consciousness were also >1, suggesting that extroverts and individuals with high private self-consciousness were likely to possess more basic knowledge than others. The ORs for gender, major subject, education, and residence were <1, suggesting that being female, having a bio-background, being at the graduate level or above, and living in the capital region positively influenced the acquisition of basic knowledge. For advanced knowledge, all the explanatory variables except response time were significant (Table 2B).
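For readers unfamiliar with odds ratios, the kind of estimate reported above (e.g., an OR of 3.33 for Japanese vs. international students) can be illustrated with a hypothetical 2x2 table; the counts below are invented purely to reproduce an OR of 3.33 and are not the study's data.

```python
import numpy as np

# Hypothetical counts (NOT from the paper):
# rows = Japanese / international, columns = high / low basic knowledge
a, b = 120, 60   # Japanese: high, low
c, d = 24, 40    # international: high, low

odds_ratio = (a / b) / (c / d)              # (120/60) / (24/40) = 3.33
se_log_or = np.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
ci_low, ci_high = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)
print(round(odds_ratio, 2), round(ci_low, 2), round(ci_high, 2))
```

An OR above 1 means the first group has higher odds of the outcome; an OR below 1, as reported here for gender, major, education, and residence, reverses direction depending on how the reference category is coded.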
Bio-backgrounds, living in the capital region, low public self-consciousness, and high private self-consciousness strongly and positively influenced the acquisition of advanced knowledge. --- Factors influencing university students' attitudes The university students' attitudes toward COVID-19 were assessed via the frequency and activity of information acquisition (info acquisition), the correct explanation of information (info explanation), and the willingness to collect anxiety-inducing information (info anxiety) (Table 3). For the frequency and activity of information acquisition, only the psychological aspects were significant, and they were positively associated with the outcome (Table 3A). The determinant factors for the correct explanation of information were gender (female), residence, public self-consciousness, and private self-consciousness (Table 3B). Incorrect explanations may affect students' protection and further increase their infection risk. For the willingness to collect anxiety-inducing information, response time, nationality, and extroversion were significant. International students and those with low extroversion were more likely to receive information regardless of whether it could make them anxious (Table 3C). Notably, the OR (late vs. early responses) was 0.56, suggesting that, as time passed, more students were willing to receive anxiety-inducing news and countermeasures. --- Factors influencing university students' preventative practices and behaviors Preventative practices and behaviors comprised self-restraint and preventative action. Two logistic regression models for these behavior terms were constructed using the variables mentioned above plus the knowledge and information variables as explanatory variables (Table 4). For self-restraint, response time, gender, residence, extroversion, info explanation, and info anxiety were statistically significant (Table 4A). Being female, living in the capital region, and low extroversion were associated with relatively safer self-restraint.
Self-restraint in late responses was attenuated compared to early responses. Moreover, we found that the correct explanation of information (info explanation) positively, and the willingness to receive anxiety-inducing information (info anxiety) negatively, affected self-restraint. For preventative action, residence, private self-consciousness, basic knowledge, info acquisition, and info anxiety were significant (Table 4B). We also applied MLR, treating self-restraint and preventative action as continuous values. After variable selection using AIC, the important variables remained (Table 5). The predictive abilities (R²) of the models were 0.45 and 0.34, respectively. Nested cross-validation revealed similar predictive abilities. The self-restraint model revealed that the correct explanation of information and Japanese nationality were associated with strong self-restraint (Table 5A). On the other hand, being male, non-bio-background, living outside the capital region, high advanced knowledge, unwillingness to receive anxiety-inducing information, and extroversion negatively influenced self-restraint. The model for preventative action was constructed with fewer explanatory variables. The coefficients of private self-consciousness, basic knowledge, and the information variables were positive, and that of residence outside the capital region was negative (Table 5B). The results are generally consistent with the logistic regressions: students living in the capital region, having more basic knowledge, frequently seeking information, correctly explaining information, and having high private self-consciousness tend to act more safely. --- Discussion --- University students in Japan exhibited a relatively high level of basic knowledge and awareness University students in Japan are a representative group of the young generation. The outbreak of the unprecedented COVID-19 pandemic directly impeded their daily lives and activities.
Compared to employees and other adult populations, university students have less financial independence but more free time and a broader range of activities. Meanwhile, the university years are a crucial period in forming one's self-will, and university students are much more likely to act on their own judgment than other students. To control the spread of infections, governments provided guidelines and countermeasures, and KAP influences adherence to them. Since university students form a distinct population in the respects described above, we first evaluated their KAP toward COVID-19. Overall, we found a high level of basic knowledge about COVID-19 and control measures among university students in Japan. For example, for the question about avoiding enclosed spaces, crowded areas, and close situations (three Cs) [36], no response was below the theoretical median, and the proportions reporting handwashing and mask-wearing frequencies no less than the theoretical median were both 96.4%, indicating that university students clearly understand the importance of avoiding the "overlapping three Cs" and the basic protective methods. These results are in line with a previous survey from February, according to which approximately 83.8% of Japanese citizens always or sometimes practiced hand hygiene [37]. These results could be one reason for the effective control of the infection in the early period after the emergency was declared. Several similar surveys from other countries also showed high levels of university students' KAP toward COVID-19. Our results agree with these previous surveys and fill the gap in Japanese data.
For example, regarding mask-wearing, approximately 52.1% of Jordanian university students [21] and 98.0% of Chinese (Wuhan region) university students [5] wore a face mask when leaving home, according to responses from March and January, respectively, and 86.9% of Indonesian undergraduate students frequently wore masks in crowded places [22], according to responses collected in April and May. Our results showed that the rate of frequent mask-wearing among Japanese students was 96.4%, which is relatively high compared with the global results. Although it has not been proven, a commentary published in April hypothesized that Japanese culture, which is inherently suited to social distancing and face mask use, prevents viral spread [38]. This may be why our results were relatively high. Students with high education levels and bio-backgrounds were found to have more advanced knowledge about viruses, vaccines, and drug targets. Interestingly, international students tended to have more advanced knowledge, while Japanese students had more basic knowledge. For some basic knowledge, such as diarrhea and taste disorders being symptoms of COVID-19, Japanese and international students showed slight differences, indicating that their routes of obtaining basic knowledge may differ. Therefore, targeted education and information about Japan's countermeasures may be necessary for international students. Other KAP studies worldwide did not separate knowledge into basic and advanced categories; our results therefore provide novel information. Moreover, we found that females had more basic knowledge and explained information more correctly than males, which agrees with previous studies in other countries showing that females have greater knowledge about COVID-19 and more appropriate attitudes [39,40].
--- Students living in the capital region conduct themselves with strong self-restraint and safe preventative action; however, self-restraint decreases over time The behaviors of students were evaluated via self-restraint and preventative action. In all the models, residence was a significant factor, and students living in the capital region exhibited safer behaviors than those in other regions. Until now, infections have been most extensive in Tokyo, heightening the awareness of capital-region residents; their behaviors therefore took a safer direction. A survey targeting Indonesian undergraduate students showed that rural students had significantly higher KAP than those living in cities, which seems to disagree with the current results. However, we compared the capital region with other regions of Japan, where the severity of the infection situation differed, which gave students living in the capital region higher KAP toward COVID-19. One noteworthy observation was that both the logistic regressions and the MLR showed response time to be significant for self-restraint, indicating a decrease in self-restraint over time. --- Psychological aspects influence students' behaviors Individuals' behavior patterns differ depending on their personality. "Extroversion" and "introversion" are widely used terms that describe how people direct their energy, that is, externally or internally [41]. Extroverts tend to be more interested in the outside world and in the decision-making process of things. They follow the "environment" when it comes to directions and the common sense recommended by others. From the perspective of sociality, extroverts are more active and social, adapt faster, and are energized when working with others. Their thinking patterns are realistic, execution-first, and centered on others.
Therefore, it can be said that an extrovert is highly other-centered and environment-dependent and can correctly recognize and judge the surrounding situation. During the COVID-19 pandemic, it has been presumed that extroverts can actively learn and absorb new knowledge but may require synchronized actions owing to their emphasis on relationships with others. The aforementioned results for university students support this hypothesis: the higher the extroversion score, the higher the coronavirus knowledge and the more self-defense measures implemented. However, there was also a strong tendency to go out with others. Therefore, extroversion has the useful side of knowledge acquisition and self-defense, and the opposite effect of not strictly following the stay-at-home order. A recent report on a psychological and behavioral survey on COVID-19 in Japan revealed that extroverts scored highly on infection-prevention behavior and mask-wearing behavior [42], which is consistent with the findings above. Another aspect to note is that people's behaviors are influenced by how they pay attention to themselves, that is, their self-consciousness [29]. Private self-consciousness is a measure of individual differences in the extent to which people pay attention to aspects of themselves that are not directly observable by others, such as inner feelings, emotions, and moods. When private self-consciousness is high, people monitor themselves, act in harmony with their own will and values, and plan their lives to achieve their goals. This reflects the so-called spirit of self-denial, or strictness with oneself. Conversely, public self-consciousness captures individual differences in the degree to which people pay attention to aspects of themselves that others can observe, such as clothes, hairstyle, or their behavior toward others.
They refrain from self-centered behavior that may be criticized by the group and use the expectations of others and the norms of the setting as their behavioral standards. Their appearance varies depending on the situation [43]. The results of this study point to these behavioral tendencies of self-consciousness. Concretely, to cope with infection, a person high in private self-consciousness does not neglect the infectious disease and positively acquires knowledge about it. More importantly, such a person takes strict self-protection measures. On the contrary, although people who are high in public self-consciousness are interested in the spread of infectious diseases, it cannot be said that they place importance on the risk of infection. In addition, similar to highly extroverted people, they tend to violate the stay-at-home order when invited by others because they focus on how others see them. --- Education of students per the critical factors of behaviors may provide benefits This survey was designed and conducted to evaluate the KAP on COVID-19 among university students in Japan and investigate its determinant factors. Through this quantitative evidence, we expect to offer suggestions on reasonably controlling the spread of infections among university students and the entire society, including to university managers, expert teams, and policymakers. Via the aforementioned analyses, the following can be concluded from the aspects of imposing stricter self-restraint and/or acting more safely: 1. Living in the capital region is associated with higher KAP. 2. Being female is associated with higher KAP. 3. Japanese students exhibit slightly stronger self-restraint than international students. 4. Basic knowledge is more important than advanced knowledge. 5. Frequent information acquisition, correct explanations of the information, and willingness to receive anxiety-inducing information are essential. 6.
Extroversion is positively associated with safer preventative action but negatively associated with self-restraint. 7. Those with high private self-consciousness act more safely, but high public self-consciousness negatively influences strict self-restraint. 8. The strength of self-restraint decreases over time. The ongoing COVID-19 pandemic has had substantial impacts on people's lives, accompanied by economic damage. Financial support was provided by governments to individual households and small companies to ease living conditions and maintain social sustainability during self-restraint periods or lockdowns. Within the social surroundings and university measures against COVID-19, campus life, lifestyle behaviors, and economic status have dramatically changed for university students, accompanied by changes in their mental health and well-being [11]. These points are critical when considering university students. For example, the Ministry of Education, Culture, Sports, Science and Technology in Japan created a series of financial support measures for university students, including Emergency Student Support [44]. Instead of focusing on university students' mental health and well-being, this study focused on their frequent activities, assuming the possibility that young university students may exhibit low adherence to self-restraint and protective action. Contrary to our initial expectations, university students in Japan generally showed a high KAP level. However, when we searched for the factors that influence KAP, we obtained the findings described above. Therefore, we suggest that universities, the media, and the government consider these aspects of university students when devising publicity and education measures to achieve a greater educational effect. For example, it is particularly beneficial and vital to educate or inform those who are less careful of the current situation, or those who are not cautious enough.
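Point 8 above (the decline in self-restraint over time) came from regression analyses. As a purely hypothetical illustration of how such a time trend can be estimated, the sketch below fits a simple least-squares line of a self-restraint score on response day; the data and variable names are synthetic, not the survey's, and the study itself used multiple regression in R rather than this Python code.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of a single predictor:
    returns (slope, intercept) of the best-fit line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, my - slope * mx

# Synthetic illustration: mean self-restraint score (1-5 scale)
# recorded at weekly intervals after the survey launch.
days = [0, 7, 14, 21, 28, 35, 42, 49]
scores = [4.8, 4.7, 4.5, 4.4, 4.1, 4.0, 3.8, 3.7]
slope, intercept = fit_line(days, scores)
# A negative slope corresponds to self-restraint declining over time.
```

A significant negative coefficient on the time variable, as in the study's MLR, is what justifies conclusion 8.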
--- Comparisons with the KAP COVID dashboard from Johns Hopkins University and a recent report comparing knowledge, precaution practice, and depression among students in South Korea, China, and Japan Our data in this report come from the survey completed on July 16, 2020. While we were revising this paper, a group from Johns Hopkins University similarly conducted a global KAP COVID survey in July 2020 and published a dashboard online [45]. Although the KAP COVID dashboard focused on the whole population and analyzed no psychological factors, which differs from the current study, it provides results for each country and subgroup (e.g., college/university or graduate school vs. secondary school education or lower), making it possible to compare the results indirectly. The KAP dashboard implied that a high percentage of individuals with a college education or above showed high self-reported prevention behaviors in Japan (mask-wearing, 96%; physical distancing, 74%; handwashing, 92%). These results are similar to our findings from the Japanese university students' survey described previously and illustrated in Fig 1. Furthermore, the dashboard shows that individuals with a college degree or above have better knowledge than the other groups regarding three or more symptoms of COVID-19 and the absence of a treatment or vaccine at the time of the survey. These results also agree with our finding that university students possess a high level of knowledge. Our results further showed, from the logistic and multiple linear regressions, that basic and advanced knowledge levels varied with gender, major subjects, education levels, nationality, residence, extroversion, and self-consciousness. Moreover, the dashboard showed that 70% of individuals with a college education or above accepted vaccines. According to our results, 68.5% of college/university students were willing to use newly developed drugs (Fig 1).
Therefore, despite the different target populations, our results agree with results from the KAP COVID Dashboard from Johns Hopkins University, suggesting high reliability. A paper comparing the knowledge, precaution practices, and depression of students from South Korea, China, and Japan was reported more recently [46]. The main difference between that report and the present study is that our study focused on KAP factors (e.g., psychological aspects), while the other study explored depression symptoms. If we focus only on KAP, the previous report indicates that students in all three countries showed good knowledge and high levels of COVID-19 awareness, and the Japanese group performed better than the other two regarding hand hygiene. It also clarified that females tended to take a higher level of preventative measures than males [46]. These results are in line with our findings discussed above. --- Strength, limitations, and future work The strength of this study is in its target group: Japanese university/college students. When we launched the study, to the best of our knowledge, no other studies had been reported that investigated KAP toward COVID-19 in this target group. We conducted this survey to provide such evidence. Meanwhile, unlike other KAP studies, we analyzed the determinant factors, including psychological factors (i.e., extroversion, private self-consciousness, and public self-consciousness), not only for the practices of self-restraint and preventative actions but also for knowledge and attitudes toward information. The factors of knowledge and attitudes toward information have not been analyzed in other studies. Understanding the determining factors can help improve KAP among university students during the COVID-19 pandemic. This study has certain limitations. It mainly adopts a convenience sampling method, which is a nonrandom, nonprobability selection.
Recruitment bias may have occurred; the sampling error and response rate could not be calculated; and the anonymity of participants to each other may have been violated. The samples were limited in number and exhibited imbalances in the subgroups. Thus, the results may not sufficiently represent the whole population of Japanese university students. More importantly, with regard to response bias due to spontaneity, only students with high awareness may have responded to the questionnaire, yielding favorably skewed evaluations, and these self-reported responses may not hold for the whole population. Furthermore, this study is cross-sectional, and the results are time-dependent. As the COVID-19 situation is changing rapidly, the KAP among university students is also changing. The results reported here represent the situation during the survey period. Additionally, the responses obtained before the lifting of the emergency declaration are far fewer than those obtained after the lifting, making it impossible to analyze behaviors attributable to the lifting. Therefore, this should be considered when discussing the results. As this study can only show the related determining factors, further studies are required to clarify causal relationships between the aforementioned factors and the behaviors/actions by controlling baseline information. Moreover, because this survey is cross-sectional, longitudinal studies are also necessary to examine time-dependent KAP changes and factors. --- Conclusion Japanese university students have been inclined toward safety and good health preservation during the COVID-19 crisis. Gender, major subjects, education levels, nationality, residence, private self-consciousness, and extroversion have all been associated with knowledge and attitudes toward COVID-19.
Capital regions, high levels of basic knowledge, high information acquisition, and correct information explanations have all contributed positively to preventative action. Non-capital regions, male gender, non-bio-backgrounds, high public self-consciousness, high levels of advanced knowledge, incorrect information explanations, and high extroversion have all contributed negatively to self-restraint. Moreover, self-restraint has decreased with time. The understanding of these factors and trends may help university managers, experts, and policymakers in planning countermeasures that would control the future spread of COVID-19 among university students and Japanese society.
The coronavirus disease (COVID-19) pandemic has greatly altered people's daily lives, and it continues spreading as a crucial concern globally. Knowledge, attitudes, and practices (KAP) toward COVID-19 are related to individuals' adherence to government measures. This study evaluated KAP toward COVID-19 among university students in Japan between May 22 and July 16, 2020, via an online questionnaire, and it further investigated the associated determining KAP factors. Among the eligible respondents (n = 362), 52.8% were female, 79.0% were undergraduate students, 32.9% were students whose major university subjects were biology-related, 35.4% were from the capital region, and 83.7% were Japanese. The overall KAP of university students in Japan was high. All respondents (100%) showed they possessed knowledge on avoiding enclosed spaces, crowded areas, and close-contact situations. Most respondents showed a moderate or higher frequency of washing their hands or wearing masks (both at 96.4%). In addition, 68.5% of respondents showed a positive attitude toward early drug administration. In the logistic regressions, gender, major subjects, education level, nationality, residence, and psychological factors (private self-consciousness and extroversion) were associated with knowledge or attitudes toward COVID-19 (p < 0.05). In the logistic and multiple linear regressions, capital regions, high basic knowledge, high information acquisition, and correct information explanations contributed positively to preventative action (p < 0.05). Non-capital regions, male gender, non-bio-backgrounds, high public self-consciousness, high advanced knowledge, incorrect information explanations, and high extroversion contributed negatively to self-restraint (p < 0.05). Moreover, self-restraint was decreasing over time.
These findings clarify the Japanese university students' KAP and the related factors in the early period of the COVID-19 pandemic, and they may help university managers, experts, and policymakers control the future spread of COVID-19 and other emerging infections.
All relevant data are within the manuscript and its Supporting Information files.
--- Supporting information --- S1 Data. (XLSX) --- Author Contributions Conceptualization: Xinhua Mao, Zheng Wen, Tatsuya Takagi. Data curation: Xinhua Mao, Yi Zhou, Yu-Shi Tian. Formal analysis: Asuka Hatabu, Xinhua Mao, Yi Zhou, Yu-Shi Tian. Methodology: Xinhua Mao, Yi Zhou, Yu-Shi Tian. Project administration: Yu-Shi Tian. Resources: Asuka Hatabu, Yi Zhou, Norihito Kawashita, Zheng Wen, Mikiko Ueda, Yu-Shi Tian. Software: Asuka Hatabu, Xinhua Mao, Yi Zhou, Yu-Shi Tian. Supervision: Mikiko Ueda, Tatsuya Takagi. Validation: Xinhua Mao, Yi Zhou. Writing -original draft: Asuka Hatabu, Xinhua Mao, Yi Zhou, Yu-Shi Tian. Writing -review & editing: Asuka Hatabu, Xinhua Mao, Yi Zhou, Norihito Kawashita, Zheng Wen, Mikiko Ueda, Tatsuya Takagi, Yu-Shi Tian.
--- Introduction Public health and health equality are essential for human development. Health is both a medical and a social issue, compounded by structural, economic, and environmental factors. If these factors are compromised, vulnerabilities can create health inequalities and human disasters (1). Low socioeconomic status is associated with poor birth outcomes, infectious diseases, chronic conditions, and reduced life expectancy, which result from disparities that include poor access to health care, financial constraints, environmental differences, differential access to information, geographic locality, and behavioral factors (2). Economic instability is associated with worse health outcomes, forcing individuals to prioritize other issues such as rent and utility bills over food and health needs. Some key barriers to obtaining food include reduced access to supermarkets with healthier food options, as well as difficulty accessing federal nutrition assistance programs and food from food banks or pantries due to a lack of these nearby, a lack of transportation to reach them, and the complicated, time-consuming application process for federal assistance. Informational barriers, like the lack of awareness or understanding of available food and housing resources, may also contribute to low utilization. In addition, the stigma associated with participation in public assistance programs may affect access as well (3). Food security (FS) is "access by all people at all times to enough food for an active, healthy life" (4). Food insecurity "exists whenever the availability of nutritionally adequate and safe foods or the ability to acquire acceptable foods in socially acceptable ways is limited or uncertain" [(5), p. 1560].
Food insecurity is a risk factor for all types of malnutrition: nutrient deficiencies, an excess or imbalance of energy, and both under- and overnutrition, such as being overweight or obese due to insufficient intake or the overconsumption of high-calorie, low-nutrient-dense foods (6). Food insecurity is more prevalent in urban areas, immigrant communities, and certain racial/ethnic groups; these patterns are tied to a lack of equity in resources, leading to poor health outcomes that tend to increase during periods of economic downturn (7). In addition, systemic inequities drive food and nutrition insecurity. Differences between racial and ethnic groups highlight a lack of equity that may lead to health disparities among food-insecure populations (8). Housing security (HS) is defined as "availability of and access to stable, safe, adequate, and affordable housing and neighborhoods regardless of gender, race, ethnicity, or sexual orientation" [(9), p. 99]. Housing insecurity is a lack of access to safe, affordable, and quality housing, and it includes homelessness, housing instability, poor housing conditions, and low household or neighborhood safety (9). Housing insecurity is a determinant of multiple high-risk behaviors and poor health outcomes among adults (10), and it also contributes to several poor health outcomes among children (11). In the United States, approximately one in 10 college students is homeless, and 45% live in an unsafe environment with a wide range of challenges related to housing affordability and stability (12). The relationship between education and health at both the individual and regional levels is salient (1). In the United States, access to colleges and universities has increased in the past 50 years, resulting in changes in demographic composition, with more low-income, first-generation, and racial and ethnic minority students enrolled than ever before (13,14).
Nationally, the demographic characteristics of university students are shifting, and it is becoming more common for students to have children and work full-time while enrolled as full-time students (14). Food-insecure students are also more likely than food-secure students to experience housing insecurity, gain weight while attending college, partake in unhealthy diets with higher sugar and fat content, and experience psychological distress (15). Among higher education students, basic needs insecurity, which includes food and housing insecurity, contributes to poor academic and health outcomes. Food and housing security are basic needs, and if students' needs are not met, they will be unable to engage in higher-level learning (13). Basic needs insecurity among college and university students is associated with several negative health outcomes, including decreased cognition and sleep quality, increased rates of certain chronic diseases, higher body mass index, higher odds of stress and depression, more emergency room visits and hospitalizations, and higher mortality rates (7,13,14). A study by the College and University Food Bank Alliance (16)(17)(18) revealed that 30% of college students in the U.S. are food insecure; of these students, 56% are employed, 75% receive financial aid, and 43% participate in some type of campus meal plan. In addition, 36% are housing insecure, a number that increases to 51% for community college students, and 14% of students are homeless. The growing costs of campus tuition, health care, books, transportation, and living expenses have forced students to decide between paying bills and securing food, leading some students to leave college without obtaining degrees, with financial concerns as the primary cause (16)(17)(18). The COVID-19 pandemic exacerbated the financial challenges for many US households.
Higher unemployment due to lockdowns and social distancing measures resulted in new or worsening economic barriers to basic needs security. In addition, public transportation was disrupted by social distancing requirements, presenting a physical barrier to obtaining food for millions of Americans (7). While young people are less vulnerable to severe illness from COVID-19, their education, work, and social lives have been interrupted by the pandemic (19). These interruptions have important consequences for public health, including an increase in anxiety and depressive symptoms and an increased risk of psychiatric diagnosis (20). Beyond mental health, the combination of COVID-19 and food insecurity was found to promote gut anomalies, which could have acute or long-term health implications for infections and chronic conditions (21). --- Importance of university response to FS and HS It is critical to improve our understanding of the impact of the COVID-19 pandemic on food and housing security among higher education students. By measuring changes in basic needs security for this population, we can prepare for the likely public health and social consequences in the short and medium term. Furthermore, by identifying the key factors associated with food and housing security, we can more effectively direct limited resources to the students most in need and improve student academic outcomes in the long run. In this article, we analyze FS and HS among higher education students. The paper focuses on variables of importance that contribute to food and housing security to highlight some of the differences that coincided with the COVID-19 pandemic. In conclusion, we make recommendations for other institutions experiencing similar effects of the pandemic on student food and housing security.
--- Materials and methods --- Participants The study used a cross-sectional, survey-based design to examine FS and HS among university students at an urban Hispanic-Serving Institution (HSI). The survey study compares 2 years of data, from before and during the COVID-19 pandemic. The study setting is a Hispanic-Serving University located in the U.S.-Mexico border region. The student population is representative of the local community: over 83% of students are Hispanic, and nearly 50% self-identify as first-generation students (22). --- Procedure In 2019 and 2020, online surveys were administered to students via a university platform to collect, analyze, and translate data in real time. The authors prepared the study protocol and instrument, which was piloted in the target population by a trained interviewer (first and senior authors), and student feedback from the pilot survey helped inform the final version of the survey questions. Using a Customer Relationship Management (CRM) program, survey invitations were sent to all students at the HSI in Fall 2019 (October 7-23, 2019) and Fall 2020 (November 5-20, 2020). The student population over the age of 18 enrolled at the university was 25,177 in 2019 and 24,879 in 2020. Four emails were sent via CRM, including the initial invitation and three reminders, in both years. Participants who voluntarily accepted to be in the study consented electronically and completed the survey online. The survey contained 30-36 questions, took approximately 10 min to complete, was anonymous, and was open for at least 16 days each year. Participants had the option to enter a raffle for four $75 electronic gift cards. A total of 6,484 (26%) participants, who met the inclusion criteria of being at least 18 years old and enrolled at the university at the time of the study, completed the survey in 2019, and 12,536 (50%) completed the survey in 2020.
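The reported response rates follow directly from these enrollment and completion counts; a quick arithmetic check (a sketch, not from the paper's code):

```python
# Enrolled population (18+) and completed surveys, per the text above.
enrolled = {2019: 25177, 2020: 24879}
completed = {2019: 6484, 2020: 12536}

# Response rate = completed / enrolled, rounded to a whole percent.
rates = {year: round(100 * completed[year] / enrolled[year]) for year in enrolled}
print(rates)  # {2019: 26, 2020: 50}
```

These match the 26% and 50% response rates stated in the procedure.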
--- Measures Both surveys contained questions that provide key measures of food security, housing security, and potential determinants of these outcomes among survey respondents. To measure FS, the authors used the validated survey questions and scoring procedures from the six-item short form of the U.S. Department of Agriculture (USDA) Household Food Security Survey Module (23,24). The USDA survey questions ask about different aspects of household food security in the past 12 months, and each response option corresponds to a score. The responses to the six-item USDA survey were scored, summed, and categorized using the validated food security status groups reported in Bickel et al. (23). The resulting three categories of FS are: very low FS, low FS, and high or marginal FS. To measure HS, two survey questions were adapted, using input from college students in the target population, from the Los Angeles Community College District Survey of Student Basic Needs (25). The two HS measures were most suitable for the population of interest given the characteristics of their sample (25) and our community. The first HS survey question was: (Q18) "In the past 12 months, have you had a permanent address?" A "yes" response indicates higher HS, whereas a "no" response indicates lower HS. The second HS question was: (Q19) "Have you had to spend a night (or more) in any of the following: hotel or motel; home or room of a friend or acquaintance; home or room of a family member; shelter; transitional living center; public spaces like a library, abandoned buildings, or a car?" Higher frequency responses indicate lower HS, whereas lower frequency responses indicate higher HS. For measures of potential determinants of FS and HS, the survey asked questions on income, education (enrollment status and academic level), employment (status, location, and number of weekly hours), age, gender, race/ethnicity, transportation (mode and reliability), and living situation.
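The score-sum-categorize procedure described above can be sketched in a few lines. The cut points below follow the published USDA six-item module scoring (raw score 0-1, 2-4, and 5-6 affirmative responses); treat this as an illustrative Python sketch of the procedure, not the authors' R code.

```python
def usda_six_item_category(raw_score):
    """Map the summed six-item raw score (number of affirmative
    responses, 0-6) to the validated USDA food security categories."""
    if not 0 <= raw_score <= 6:
        raise ValueError("raw score must be between 0 and 6")
    if raw_score <= 1:
        return "high or marginal FS"
    if raw_score <= 4:
        return "low FS"
    return "very low FS"

# Example: a respondent with 3 affirmative responses
print(usda_six_item_category(3))  # low FS
```

The three returned labels correspond to the three FS categories used throughout the results.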
For the survey question on gender, respondents were asked to indicate their preferred pronouns (he/him, she/her, they/them, other, or prefer not to respond). Some of the standard questions were taken or adapted from the Los Angeles Community College District Survey of Student Basic Needs to fit our community characteristics (25). The study was IRB approved as exempt in September 2019, amended and approved in 2020, and launched by the University's Dean of Students Office. --- Data cleaning and validation All identifying information was removed from the survey data to protect the confidentiality of participants, as were responses with fully missing data. A missing value analysis was conducted for the remaining data to detect any further missing answers or patterns of missingness. Data were deleted since the missingness was not random but exhibited strong patterns. Following this analysis, observations that did not have levels recorded for food and housing security were deleted from the data. This results in a reduction in data, as shown in the Figure 1 CONSORT diagram. Following this preprocessing stage, the data were readied for analysis by matching 28 variables common to both surveys. Some minor editing of variable levels was conducted to match the results of the surveys. This was minor and inconsequential in each case, except for household income, where each year was aggregated to two levels (<$50,000 annual income and >=$50,000 annual income) since the levels provided as choices did not match at higher granularity. Finally, the USDA categories for food security were programmatically created using the six measures included in each year's survey. These categories were validated by the USDA (23) and are used for reporting food security results. --- Statistical analysis Descriptive statistics of the variables for both years were tested for association with the USDA food security outcomes and the housing security outcomes.
When the factor was continuous, a simple F test from an ANOVA model was used to detect any difference in the means. When the data were categorical, exact Fisher tests with simulated p-values were used to test for association. These test results were summarized with p-values in the analysis. Following the statistical tests, data visualizations were used to probe important factors that differ across the years. When a factor was deemed significant in 2020 but not in 2019, we summarized this outcome using an appropriate visualization to understand the nature of the shift. All analyses were conducted in R (26) and made use of the ggplot2 (27) and summary (28) packages. --- Results Initial analysis implies that food security increased from 2019 to 2020, and there is some evidence that housing security, as measured by a permanent address, increased as well (see Table 1). The housing security results are mixed, because a higher percentage of respondents reported (at least sometimes) experiencing a lack of any address in 2020. The housing and food security results are complex and due to a variety of factors, some of which may be temporary in nature. We explore the factors below, and we return to these findings in the discussion. To investigate the intersectionality of food and housing security across 2019 and 2020 regarding gender, ethnicity, age, --- Food security results According to the survey results, several variables have a different relationship with food security across survey years. In Table 2, there is a change in the employment status across the 2019 and 2020 cohorts and its association with food security (p-value (2019) = 0.4, p-value (2020) < 0.001). Figure 2 illustrates the change in employment status across the 2 years. Note that the level "no" was not an option in 2019 and was, hence, excluded. Additionally, the location of employment differs in association across the years (p-value (2019) < 0.001, p-value (2020) = 0.2).
Figure 3 illustrates this change in association. Finally, also regarding employment, the level of employment differs across years (p-value (2019) = 0.3, p-value (2020) < 0.001), as demonstrated in Figure 4. In general, for the employment variables, there were more part-time employed students and fewer students working on campus during the pandemic than before, and food security was associated with these employment variables. Regarding variables focused on student characteristics, there was now an association between academic level and food security that did not exist prior to the pandemic (see Figure 5). More senior and junior students had issues with food security relative to other academic levels. The number of dependents was no longer associated with food security (p-value (2019) = 0.002, p-value (2020) = 0.6), indicated particularly by the reduced impact of the number of dependents. Finally, other student characteristics were associated with food security across both data collections. --- Housing security results The survey results also demonstrate changes in the relationships between some key variables and housing security across survey years; most associations remained significant, with the exception of Enrollment. Regarding housing security (permanent housing, yes or no), there was a slight difference in association for employment status and housing security (p-value (2019) = 0.03, p-value (2020) = 0.08). This indicates that more full-time students were housing secure during the pandemic, as depicted in Figure 7. Ethnicity also indicates a decrease during the pandemic in the share of Hispanic/Latino students who have permanent housing, as shown in Figure 8 (p-value (2019) < 0.001, p-value (2020) = 0.13). Other variables were, and remain, associated with housing security across 2019 and 2020.
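The simulated p-values behind the categorical comparisons above come from R's `fisher.test` with `simulate.p.value = TRUE`. The same idea can be illustrated with a Monte Carlo permutation test in Python: shuffle one variable to break any association, and count how often the shuffled chi-square statistic matches or exceeds the observed one. This is a sketch on synthetic data, not the study's code or data.

```python
import random
from collections import Counter

def chi2_stat(xs, ys):
    """Pearson chi-square statistic for two categorical sequences."""
    n = len(xs)
    obs = Counter(zip(xs, ys))
    rx, ry = Counter(xs), Counter(ys)
    stat = 0.0
    for a in rx:
        for b in ry:
            expected = rx[a] * ry[b] / n
            stat += (obs.get((a, b), 0) - expected) ** 2 / expected
    return stat

def simulated_p_value(xs, ys, n_sim=2000, seed=1):
    """Monte Carlo p-value: permute ys to simulate the null of no association."""
    rng = random.Random(seed)
    observed = chi2_stat(xs, ys)
    ys = list(ys)
    hits = 0
    for _ in range(n_sim):
        rng.shuffle(ys)  # preserves both sets of marginal counts
        if chi2_stat(xs, ys) >= observed:
            hits += 1
    return (hits + 1) / (n_sim + 1)  # add-one correction, as in R

# Synthetic example: employment status vs. food security level
status = ["full-time"] * 40 + ["part-time"] * 40
food = ["secure"] * 30 + ["insecure"] * 10 + ["secure"] * 12 + ["insecure"] * 28
p = simulated_p_value(status, food)
```

With a strong association in the synthetic table, the simulated p-value falls well below 0.05, mirroring the significant associations reported in Tables 2 and beyond.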
--- Discussion The results suggest that food security and one dimension of housing security (possessing a permanent address) improved among university students between the 2019 and 2020 samples. Specifically, levels of high or marginal food security increased from 44% in 2019 to 55% in 2020; levels of very low food security decreased from 32% in 2019 to 23% in 2020; and possession of a permanent address increased from 89% in 2019 to 95% in 2020. In contrast, for the second measure of housing security (the frequency of lacking any address), there was an increase in the percentage of students who reported that at least sometimes they lacked any address. Despite the pandemic's upheaval of academic, economic, and social structures, our findings demonstrate that fewer students at this HSI experienced very low food security and (one form of) low housing security during the first year of the pandemic. We are unable to determine why food and housing security improved among university students during the pandemic, but social assistance interventions, including the expanded efforts by the government, community organizations, and the University, may have played a key role (29)(30)(31). It also is important to note that the percentage of students in the sample who lived off campus with family increased from 70% in 2019 to 80% in 2020 (see Table 1), which could account for some of the increase in food security. Below we highlight some key factors that are associated with student food and housing security across the two survey years. Employment status and other related employment variables were altered during the pandemic. Nationally, many who had worked full-time reduced their employment to part-time status or no employment (37). This change in employment status, along with a halt on payment plans for student loans and the financial assistance provided by the CARES Act (38), may have affected the changes in association with food and housing security.
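The percentage-point shifts quoted above can be tabulated directly; a small sketch using only the figures reported in this Discussion:

```python
# Reported 2019 vs. 2020 percentages taken from the Discussion text.
levels_2019 = {"high/marginal food security": 44,
               "very low food security": 32,
               "permanent address": 89,
               "living off campus with family": 70}
levels_2020 = {"high/marginal food security": 55,
               "very low food security": 23,
               "permanent address": 95,
               "living off campus with family": 80}

# Percentage-point change for each measure (positive = increase 2019 -> 2020).
change = {k: levels_2020[k] - levels_2019[k] for k in levels_2019}
for measure, delta in change.items():
    print(f"{measure}: {delta:+d} percentage points")
```

The signs line up with the paper's reading: food security and permanent-address housing security improved, while very low food security declined.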
--- FIGURE Hours worked per week and food security. The results suggest that educational and higher education institutions need to shift to providing more employment opportunities to students on campus when possible, and to consider that many students are still struggling to adjust to the end of CARES funding and will need additional income-generating opportunities. It is important to emphasize that the student population at an HSI is not monolithic: key differences in food and housing security exist across subgroups. For example, regarding housing security, it is evident that Hispanic students experienced decreased access to permanent housing. Pre-pandemic, 84% of Hispanic students had access to permanent housing, and during the pandemic this decreased to 77%. This presents an opportunity for higher education and educational institutions to address this change by providing support services centered on locating affordable housing on and off campus. Considering this evidence, it is recommended that educational institutions be flexible and responsive regarding needs for affordable and accessible housing, and University leaders may want to target information campaigns to vulnerable student groups. Overall, the article has some important strengths. Food and housing security is assessed among students at an HSI. Previous studies often have low percentages of Hispanic students, so the results fill a key gap in our understanding of food and housing security in higher education. In addition, the article presents food and housing security data both before and during the pandemic. By assessing food and housing security in two different time periods, the article improves our understanding of how food and housing security changed after the start of the pandemic. Furthermore, the study has high survey response rates. The high response rates by students may be due to the use of a trusted online survey platform and convenient email distribution methods. --- Recommendations Along with other forms of social assistance, University interventions can play an important role in addressing basic needs and inequities among HSI higher education students. Given the bio-psycho-social-economic factors and stressors associated with the COVID-19 pandemic, it is imperative to provide students with continued financial, psychological, and support services to mitigate the medium- and long-term effects of the pandemic. Government tuition and relief support programs are needed to help students in their education, to provide nutrition and housing to struggling students, and to improve the quality of life of the community. Tailored interventions are needed (1) to address stigma associated with accessing psychological, counseling, food and housing support services, and (2) to meet students' cultural and linguistic realities. To assist with student retention and academic success, it is key to reduce barriers, such as chronic hunger and sustained risk of unstable housing. Food distribution centers on campus are key environments to assist students in acquiring enough nutrient-dense food to overcome dietary limitations and reduce health disparities. It is important to orient students on public assistance and other campus and community resources to increase FS and HS, including the existence and eligibility requirements of the Supplemental Nutrition Assistance Program (SNAP); Special Supplemental Nutrition for Women, Infants, and Children (WIC); Medicaid; Children's Health Insurance Program (CHIP); and local food banks and hunger relief centers. In the informational campaigns, a special emphasis should be placed on reaching vulnerable student subgroups, including those who work, are heads of household, have children, receive health and human services, and have limited or no transportation.
Instructors can provide information on assistance resources in the course syllabus, program/department web pages, and social media pages. The establishment and promotion of campus-based programs and services through no-questions-asked food distribution and assistance venues for students is necessary. It also is essential to develop and implement food, housing and financial security tools for higher education students, so that the University can provide programming on campus to promote a secure campus environment with visual appeal, a comprehensive safety net, and culturally and linguistically responsive services (36). Based on the study results and the reviewed literature, we conclude that it is important to bring access and excellence together. During the pandemic, the University shifted to provide a range of financial assistance and support services. The campus pantry was one of the few sites that remained operational due to the essential service it provided. The pantry adapted its model to seek donations through social media and a digital platform, where donors could browse, purchase and send non-perishable items delivered directly to campus. The University made additional investments in the pantry to help meet growing student needs and expanded its efforts by providing grocery store gift cards and donating additional holiday gift baskets to ensure that students had sufficient food during long holidays (32). In addition, the Foster, Homeless, and Adopted Resources (FHAR) Program provided financial and other support services for students with severe housing insecurity (33). Recommended actions include:
- Stock pantries with perishable, frozen and non-perishable items of high nutritional value, with online and pick-up options.
- Open an integrated eligibility office to enroll students in SNAP and other public benefits.
- Offer nutrition and health promotion education through professionals to orient students on nutrients and meal preparation.
- Collaborate with campus food services, food banks, and community-based organizations to bring hot-meal kitchen services to campus.
- Inform students of external food distribution centers and housing assistance sites.
- Generate and disseminate directories of housing, food, transportation, and health and human services, online and in hard copy.
- Identify and participate in health fairs and community events to promote food and housing security.
- Post event announcements online and on bulletin boards, in campus venues and student health centers.
Reduce stigma surrounding use of support services:
- Ensure that course syllabi include resource links to food, housing, transportation and other support services, and encourage faculty to promote access.
- Offer faculty, staff and student advisors regular tours of the university food pantry and the Foster, Homeless, and Adopted Resources program, and promote access.
- Motivate faculty, staff and students to visit the support services on campus to demystify and mitigate stigma.
- Secure grants and financial or in-kind support from private and public donors and funders to increase the food bank's nutritious options and make campus food services affordable to students.
- Rename the campus food pantry based on student input to make it more inclusive.
- Conduct ongoing food and housing security assessments to inform campus leadership on ways to address social and political determinants.
Create opportunities for community-engaged scholarship:
- Engage faculty, staff and students in the development and implementation of a food and housing security strategy.
- Designate student ambassadors or advisors in campus Colleges and Schools to promote food, housing and transportation security.
Institutionalize support services:
- Generate policies to secure and expand nutritional food services and improve access to affordable housing, transportation, and health services.
- Develop a food, housing and financial security toolkit to guide programming on campus.
- Ensure adequate space, equipment, and personnel for food storage and distribution.
- Include the food pantry and student support services in university interactive maps, and expand hours of operation to evenings and weekends to meet the needs of working students.
--- Study limitations The study contains some key limitations. The cross-sectional study design limits our ability to make causal inferences regarding key factors and food and housing security. Also, the self-reported instrument relies primarily on subjective responses from students, which may be biased. Furthermore, food- and housing-insecure students may be less likely to respond to a survey, which would lead to overestimating food and housing security levels. Despite these limitations, the findings from this study have several important implications for research, practice and policy. --- Conclusion The current study contributes to the literature on food and housing security in higher education by focusing on college students, both before and during a pandemic, at an HSI. Higher education plays an important role in the generation of social capital, mobility, and health. To ensure that university students thrive academically, succeed socially and ultimately graduate, it is necessary that education institutions secure food and housing assistance for marginalized and vulnerable populations. Designing programs and policies with input from students is essential if we want to increase the utilization of assistance and prevent hunger and homelessness. Being responsive to changes in food or housing security also is crucial and requires concerted work to achieve. Multidisciplinary and collaborative work is required to mitigate food insecurity on campus, advance health and academic outcomes, improve the on-campus food and housing environments, and provide subsidized food options to facilitate equitable access to food.
These efforts require guidance from health professionals, including nutritionists to assist students with meal preparation and budgeting skills. Ensuring equitable access to healthy food and affordable housing on campus is essential. Future research can evaluate the use and effectiveness of campus resources in improving food and housing security of university students. The challenges of the pandemic create an opportunity for universities to strengthen food and housing security among students. Economic and health crises do not guarantee increased levels of basic needs insecurity. Instead, higher education institutions can shift to a new, more comprehensive model of food and housing assistance. The model shift will improve student basic needs security and academic outcomes, increase opportunities for higher education and upward social mobility, and create stronger and more successful communities. --- Data availability statement The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author/s. --- Ethics statement The studies involving human participants were reviewed and approved by University of Texas at El Paso (IRB number 1470143). The patients/participants provided their written informed consent to participate in this study. --- Author contributions Conceptualization, writing-review and editing, and writing-original draft preparation: EM, AW, GS, and SC-B. Methodology: EM, AW, and GS. Analysis: AW. Investigation: EM, GS, and JA. Visualization: AW and PD. Project administration: EM and JA. All authors contributed to the article and approved the submitted version. --- Conflict of interest The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. 
--- Publisher's note All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
University students occupy a socially marginal position and therefore are often underserved by academic and service institutions. This article analyzes food and housing security among students at The University of Texas at El Paso, a Hispanic-Serving Institution located in the U.S.-Mexico Border region. Findings from a sample of n = , university students are presented in the first cross-sectional, two-year food and housing security study on campus, administered via the Campus Labs Baseline platform. The first sample, in 2019, consisted of n = , students representing . % of student enrollment ( , total enrollment), and the second sample, in 2020, was n = , representing . % of student enrollment ( , total enrollment). To measure food security, the six-item short form of the U.S. Department of Agriculture (USDA) Household Food Security Survey Module was used. To document housing security, we created questions informed by student input. In this study, survey results are reported, and tests are conducted to assess the relationships between various student characteristics and food and housing security. Student characteristics significantly impacting food and housing security are probed further using data visualizations and subpopulation analysis, with a focus on analyzing factors impacted by the COVID-19 pandemic. Results indicate that employment status, consistent employment status, hours worked per week, academic level, number of dependents, and gender are all factors associated with food security during the pandemic but not prior to the pandemic. Other factors, including college affiliation, ethnicity/race, having any dependents and being head of household, living alone, mode of campus transportation, household income, and age, all were associated with food security in both academic years.
Using these results, a critical analysis of past interventions addressing food and housing security is presented, with a focus on changes made during the pandemic. Recommendations are made for further data-driven interventions and future steps. (Frontiers in Public Health, frontiersin.org; Wagler et al., . /fpubh. .)
Introduction Sexual minorities experience pervasive health disparities associated with stigma, discrimination and violence [1][2][3][4]. Chronic stress stemming from these social contexts of stigma, discrimination and violence contributes to health inequities [3,5]. For example, a national US study (n = 34 653) reported that lesbian, gay and bisexual (LGB) persons had higher risk for the onset of post-traumatic stress disorder (PTSD) than heterosexuals, in part due to LGB persons' elevated exposure to interpersonal violence [6]. Lesbian, bisexual and queer (LBQ) women experience sexual violence at similar [e.g. adult sexual assault (ASA)] or higher (e.g. childhood sexual abuse) rates than heterosexual women [7][8][9][10][11][12][13][14]. There is an urgent need to better understand the interplay between sexual violence, health outcomes, and individual, social and structural factors among LBQ women. --- Sexual violence among sexual minority women Although study findings highlight that LBQ women experience health disparities and are particularly vulnerable to certain types of violence (e.g. hate crime, childhood sexual abuse), there is a dearth of LBQ population-specific information about types and correlates of sexual assault (in line with previous studies [10,11,14] we use the term 'sexual assault' to include the scope of sexual assault, sexual violence, forced sex, sexual abuse and rape; we use 'sexual assault' and 'sexual violence' interchangeably. We acknowledge there are varying conceptual definitions of sexual assault and sexual violence [14]) [6,8]. A recent systematic review by Rothman et al. [14] explored prevalence rates for various types of sexual assault victimization, including lifetime sexual assault (LSA), childhood sexual assault (CSA), ASA, intimate partner sexual assault (IPSA) and hate crime-related assault, and concluded that these types of violence were elevated for LGB populations.
For example, although estimates from the United States indicate 11-17% of women have experienced LSA [15,16], systematic review results reported lesbian and bisexual (LB) women had prevalence rates of LSA from 16% to 85% [14]. The wide variance was in part attributed to study design, with population-based studies reporting lower rates of sexual violence in comparison with convenience samples, as well as widely varying sampling methods and definitions of sexual assault and sexual orientation [14]. Friedman et al.'s [9] meta-analysis indicated that female sexual minorities were 1.5 times more likely to have experienced CSA than female heterosexuals. Although LB women were more likely than gay/bisexual men to report CSA, ASA, LSA and IPSA, LB women remain greatly underrepresented in sexual violence research in comparison with GB men [14]. Few studies assess the gender or sexual orientation of perpetrators of sexual violence among LB women, and this is also true for IPSA studies where there is a knowledge gap regarding whether perpetrators were same or opposite-sex partners [14]. --- Social and structural contexts of health among sexual minority women Systematic reviews and population-based studies highlight elevated risks for mental health issues among sexual minority women. Social ecological approaches explore the complex associations between health disparities and social and structural environments [17,18]. Health outcomes are shaped by individual (e.g. knowledge, attitudes, behavior), social (e.g. social support, sexual networks) and structural (e.g. stigma and discrimination, access to health care) level factors [18]. For example, numerous studies indicate higher rates of depression among sexual minority women in comparison with heterosexual women [1,3,[19][20][21]. Sexual stigmaa structural factor-has been associated with these higher rates of depression [3] and psychological distress [22]. 
Sexual stigma refers to social and institutional processes of devaluation of sexual minority identities, communities and same-sex relationships [23]. Forms of stigma include perceived, or felt-normative, stigma, which refers to awareness of negative societal attitudes and fear of discrimination; and enacted stigma, such as overt acts of discrimination and violence [23,24]. Internalized homophobia refers to individuals' acceptance of negative beliefs, views and feelings toward the stigmatized group and oneself [23,24]. Experiences of sexual violence may also be correlated with depression [13] and PTSD [6,25] among lesbians. Internalized homophobia was associated with PTSD among gay male [26] and lesbian [27] sexual assault survivors. Among heterosexual women, sexual violence appears to be consistently correlated with alcohol abuse, yet the pattern with LBQ women appears to be different. For instance, one study found that among LBQ women, CSA, but not ASA, was positively associated with alcohol abuse [10]. Limited research has examined physical and sexual health correlates of LSA among sexual minority women. Childhood sexual abuse was associated with functional pain in a US-based cohort study of sexual minority youth [28]. Functional pain refers to pain without a diagnosed pathology, such as headaches, abdominal and pelvic pain [28]. Another study reported that a history of CSA was associated with the likelihood of engaging in sexual risk behaviors in adolescence and adulthood, thus contributing to HIV and sexually transmitted infection (STI) risk among 'mostly heterosexual' young women [29] (C. H. Logie et al.). Sexual abuse was also correlated with HIV risk behaviors among LB students in a US and Canadian cohort study [30].
--- Response strategies to sexual violence among sexual minority women Sexual minority and heterosexual women tend to have passive response strategies following incidents of sexual violence: they either do nothing or only tell someone they trust, as opposed to authorities who can intervene [8,31]. However, some researchers argue that sexual minority women are better at coping with violence as a result of managing the stigma of being a sexual minority and subsequently developing stronger support networks [7]. Although the literature indicates that LBQ women do access certain support services, there are barriers to access for LBQ women. Concerns regarding stigma and discrimination may result in LBQ women choosing not to disclose their sexual orientation to healthcare providers (HCPs) [32,33]. The experiences of LBQ women accessing care may also differ based on sexual identity. For instance, bisexual and 'mostly heterosexual' women may feel uncomfortable accessing services for lesbians [30]; similarly, lesbians may feel uncomfortable accessing services that are perceived to be for heterosexual women (e.g. rape crisis centers, shelters) [7,31]. --- Study goals and objectives We aimed to address two important gaps in the literature in this study. First, scant research has explored the impact of LSA among sexual minority women [14]. Sexual violence research with LBQ women has predominantly focused on prevalence, rather than the impact of such violence on various dimensions of women's lives, including sexual and mental health outcomes [7]. Second, most studies among sexual minority women have not explicitly measured the associations between LSA and structural factors, such as sexual stigma and barriers to health care. The social ecological approach of understanding individual, social and structural factors associated with LSA among sexual minority women therefore warrants further exploration. Our study was informed by the social ecological framework.
The objective of this study was to contribute to understanding of the associations between experiences of LSA and: health outcomes (depression, STI, self-rated health), individual factors (self-esteem, resilient coping, substance use), social factors (safer sex practices, social support) and structural factors (utilization of HIV and STI testing services, barriers to healthcare access, sexual stigma) among sexual minority women in Toronto, Canada. --- Methods --- Study design and population We conducted a structured cross-sectional internet-based survey with sexual minority women in Toronto, Canada in December 2011. Inclusion criteria for survey participants were adults aged 18 and over, capable of providing online informed consent, who self-identified as (i) a woman, (ii) a sexual minority and/or a woman who has sex with women, including lesbian, gay, bisexual, queer and 'other', and (iii) residing in the Greater Toronto Area. We hired 10 peer research assistants (PRAs), defined as persons who identify as sexual minority women, to facilitate participant recruitment; PRAs represented diverse ages, ethnicities and sexualities. --- Data collection We used modified peer-driven recruitment, where each PRA recruited a pre-specified number of participants (n = 25), as well as convenience sampling, whereby participants could invite additional participants. Recruitment was primarily undertaken by PRAs through word of mouth and emails to social networks, LGBTQ agencies and ethno-cultural agencies. An email that briefly outlined the study purpose and inclusion criteria, and included a direct link to the survey, was distributed by PRAs and agencies. We used a self-administered survey that participants completed online in a location of their choosing; the survey took 60 min to complete. We aimed to recruit 425 participants.
The recommended sample size for logistic regression (odds ratio: 1.3, P < 0.05, power: 0.80) is 406, as calculated using G*Power 3.1. Research Ethics Board approval was obtained from Women's College Hospital at the University of Toronto. We designed a survey to collect information on socio-demographic variables, health outcomes, and individual, social and structural factors. We pilot-tested the survey with a focus group of sexual minority community representatives (n = 12) (e.g. LGBT event promoters, artists, community organizers) to acquire feedback to enhance clarity and content validity. No identifying information was collected; participants had the option to include their email address to receive a $20 gift card as an honorarium for survey completion. Email addresses were erased after the gift card was sent. At the end of the survey, participants were provided with a list of community and online resources for sexual minority women and health and supportive services. --- Measures We report the measures used and Cronbach's alpha coefficients from the current analyses. The survey included 105 items. Measures were chosen based on (i) conceptual relevance for the social ecological framework, (ii) established reliability and validity in the North American context, where possible among LGBQ persons, and (iii) shortened scales where possible to reduce participant burden (e.g. with depression symptoms, resilient coping). We summed scale items to calculate total scores for sexual stigma, depression, safer sex practices and resilient coping; sub-scale and total scores were calculated for social support. The intervals for the measures were one unit (e.g. 1 year of age, one scale unit). --- Lifetime sexual assault We used a single dichotomous item, 'In your life have you ever experienced forced sex (for example rape or sexual assault)?', to assess whether participants had a history of LSA.
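Each scale in the Measures section is reported with a Cronbach's alpha coefficient for internal consistency. Alpha can be computed directly from an item-response matrix; a minimal sketch with hypothetical Likert responses (not the study's data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x items) matrix of scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scale scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses to a 4-item, 5-point Likert scale
# (rows = respondents, columns = items).
responses = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [1, 2, 1, 2],
])
print(f"alpha = {cronbach_alpha(responses):.2f}")
```

Because the items in this toy matrix move together across respondents, the resulting alpha is high; scales with weakly related items would yield values closer to the 0.69-0.72 range reported for some measures below.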
--- Health outcomes Participants self-reported if they had ever been diagnosed with an STI. Self-reporting of HIV/STI history was effective in a previous study with LBQ women in the United States [34]. The two-item Patient Health Questionnaire-2 was used to assess depression symptoms [35], Cronbach's α = 0.89 (scale range: 0-6). Participants rated their health using the single global self-rated health response recommended by the World Health Organization (score range: 1-5) [36]. --- Socio-demographic variables We collected the following socio-demographic information: age (years), annual personal income (Canadian dollars), sexual orientation (queer, lesbian, bisexual, gay, other, with options to specify), ethno-racial identity (self-identified) and highest level of education (less than primary, primary, some secondary, secondary, some post-secondary, post-secondary, graduate and postgraduate). --- Individual factors Resilient coping was measured using the Brief Resilient Coping Scale [37], Cronbach's α = 0.69 (scale range: 4-20). Self-esteem was measured using the Single-Item Self-Esteem Scale, which has participants respond to the statement 'I have high self-esteem' on a five-point Likert scale (score range: 1-5) [38]. Substance use was assessed using an eight-point Likert scale single-item measure regarding frequency of drug and alcohol use in the past 3 months (score range: 1-8). --- Social factors Safer sex practices were measured using the 'Safer Sexual Behaviors Among Lesbian Women Scale' [39], Cronbach's α = 0.70 (scale range: 9-36). The social support measure was based on the Multi-dimensional Scale of Perceived Social Support [40] (Cronbach's α = 0.91) (scale range: 12-60), which includes sub-scales to assess support from family (Cronbach's α = 0.93) (sub-scale range: 4-20), friends (Cronbach's α = 0.92) (sub-scale range: 4-20) and a significant other (Cronbach's α = 0.95) (sub-scale range: 4-20).
--- Structural factors The sexual stigma measure was based on the Homophobia Scale [5] (Cronbach's α = 0.78) (scale range: 12-48), which includes sub-scales to examine both perceived (Cronbach's α = 0.70) (sub-scale range: 4-16) and enacted (Cronbach's α = 0.72) (sub-scale range: 8-32) stigma. Participants responded to questions asking if they had ever received (i) an HIV test and (ii) an STI test (not including HIV). Participants also responded to questions asking if they had ever experienced the following barriers to accessing health care: (i) cost of travel, (ii) cost of medications and (iii) belief that their HCP was not comfortable with their sexual orientation. --- Data analysis We conducted descriptive analyses to calculate frequencies, means and standard deviations for each variable. Data analyses were conducted using IBM SPSS 20. Cronbach's alpha was calculated to assess the reliability of each scale. We conducted logistic regression analyses based on the social ecological framework to examine sexual violence and its association with health outcomes and individual, social and structural factors. Multivariate logistic regression analyses were conducted to determine correlates of having experienced sexual assault in one's lifetime. We first conducted unadjusted logistic regression analyses, followed by analyses that controlled for socio-demographic variables (age, education, income, ethnicity, sexual orientation). We also present relative risks (RRs) for significant variables to illustrate the probability of the outcome for those who have a history of LSA in comparison with those with no LSA. --- Results --- Study population There were 439 women who participated in the survey; 415 completed the item on LSA and were included in the analyses. Socio-demographic and health characteristics of participants (n = 415) are described in Table I.
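The Data analysis section reports odds ratios from logistic regression and, for significant variables, relative risks. For a single dichotomous factor with no covariates, both quantities can be read directly off a 2x2 table; a sketch with hypothetical counts (not the study's data):

```python
# Hypothetical 2x2 table: rows = LSA history (yes/no),
# columns = outcome (e.g., ever diagnosed with an STI: yes/no).
a, b = 50, 123   # LSA:    outcome yes / outcome no
c, d = 35, 207   # no LSA: outcome yes / outcome no

# Odds ratio: odds of the outcome among those with LSA vs. those without.
# With a single binary predictor and no covariates, this equals the OR
# from a logistic regression of outcome on LSA.
odds_ratio = (a / b) / (c / d)

# Relative risk: probability of the outcome among those with LSA vs. without.
risk_lsa = a / (a + b)
risk_no_lsa = c / (c + d)
relative_risk = risk_lsa / risk_no_lsa

print(f"OR = {odds_ratio:.2f}, RR = {relative_risk:.2f}")
```

Because the outcome in this toy table is not rare, the OR exceeds the RR; presenting RRs alongside ORs, as the authors do, avoids overstating the probability difference between groups.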
The mean participant age (n = 396) was 31.44 years (SD: 8.13), and the median annual income was $29 000.00 (range: 18-70). Most participants identified as queer (45.5%), followed by lesbian (29.2%), bisexual (16.1%), gay (4.6%) and other (4.1%). Almost half (41.7%) of participants reported having experienced sexual assault. One-fifth (20.5%) of participants reported ever being diagnosed with an STI. --- Correlates of having experienced sexual violence Logistic regression results are presented in Table II. --- Discussion This study's examination of correlates of experiences of LSA among LBQ women revealed deleterious health outcomes associated with LSA, including exacerbated risk for depression, STI and lower self-rated health. Our findings that individual (self-esteem), social (social support) and structural (barriers to care, access to STI testing, sexual stigma) factors were associated with LSA support the utility of the social ecological framework for understanding LBQ women's experiences of sexual violence. A social ecological conceptual framework that incorporates these multi-level domains associated with LSA among LBQ women is illustrated in Fig. 1. We categorized multi-level (structural, social, individual) factors and health outcomes; the prevalence of LSA in our sample is consistent with rates reported among sexual minority women in the United States [14]. Our findings that LSA was associated with higher reported mean frequencies of depressive symptoms and prevalence of STI, and lower mean frequencies of self-rated health and self-esteem, corroborate previous research. The negative effects of sexual assault are well documented and include PTSD in up to 50% of sexual assault survivors, and concurrent depression [6,13,21,41]. Our finding that LSA was correlated with lower self-rated health aligns with Roberts et al.'s [28] US-based cohort study with sexual minority youth, which reported associations between functional pain and CSA [28].
Indeed, our study found associations between LSA and higher STI rates, supported by previous research with samples of predominantly heterosexual [42,43] and sexual minority [29] women. We found that lower self-esteem was associated with LSA; self-esteem has also been associated with CSA [44]. Similar to another study with lesbians, we found no significant association between substance use and sexual violence [10]. Despite the disproportionate rates of sexual violence experienced by sexual minority women in comparison with heterosexual women, we found no other studies that explicitly measured associations between LSA and perceived or enacted sexual stigma. Our finding that LSA was correlated with higher reported mean frequencies of overall, perceived and enacted sexual stigma suggests that sexual violence may be associated with experiences, perceptions and subsequent expectations of homophobia and discrimination. The belief that one's HCP was uncomfortable with their sexual orientation is another example of a structural barrier experienced by LBQ women with a history of LSA. This finding is corroborated by research that highlights heterosexism in women's sexual health care [45,46], fear of discrimination from disclosing sexual orientation to HCP [32,33] and discomfort utilizing services not tailored for their sexual orientation [7,29,31]. Another barrier, medication costs, emerged for participants who experienced LSA, even when controlling for income and education. This suggests that factors such as sexual stigma and fear of discrimination may enhance perception of other barriers to healthcare access. Despite these barriers, those with a history of LSA did in fact access STI testing more frequently than those without a history of LSA. We are not aware of research that has explored this phenomenon. Other research, however, suggests that coping with stigma and discrimination has resulted in utilization of services by LBQ women.
For example, some authors have suggested that, as a result of having had to cope with sexual stigma, LBQ women tend to access therapy at a higher rate than heterosexual women [7,10,13]. LBQ women who have experienced LSA may, therefore, be more accustomed to accessing healthcare services, and this may also be viewed as a strategy of resilience. Those with a history of LSA reported lower mean frequencies of overall social support and social support from family. This could be associated with sexual stigma from family members due to their sexual minority identity [47,48]. Lower levels of family social support could also be associated with a history of CSA, especially if the perpetrator was a family member or close to the family. Family support is often compromised by divided loyalties or outright disbelief when the accused perpetrator is a family relation or friend [49]. Fig. 1. Social ecological approach to understanding correlates of LSA experienced by sexual minority women in Toronto, Canada (n = 415): structural factors (access to STI testing, barriers to care, sexual stigma); social factors (social support, safer sex practices); individual factors (resilient coping, substance use, self-esteem); and health outcomes (depression, STI, self-rated health). Participants with a history of LSA were more likely to identify as queer than lesbian. There is very little understanding of the interplay between LSA and sexual orientation. Previous studies with young women reported higher rates of CSA [29] and LSA [30] among women who identified as 'mostly heterosexual' or bisexual [9,38] in comparison with those identifying as heterosexual. Austin et al.'s [29] thoughtful discussion regarding possible reasons for sexual orientation differences in rates of sexual assault includes (i) response bias, (ii) sexual identity formation and (iii) different risk factors.
First, a woman identifying as queer (a fluid sexual orientation that moves beyond the dichotomies of lesbian/heterosexual [50]) may be more willing to report sexual violence than other women, as she may feel less stigma about having had forced sex with men. Second, depending on when sexual assault occurred, it could influence sexual identity formation [29]; for example, adopting a queer identity, rather than a lesbian one, could be more congruent with a history of LSA. Third, persons identifying as lesbian may have a more positive group identity and social support than those with other sexual minority identities such as 'mostly heterosexual' [28], and this strong group identity and support may reduce vulnerability to abuse by parents, adults and youth [51,52]. The interplay between sexual orientation and LSA warrants further attention. The study design has several limitations. First, the non-probability sample limits the generalizability of findings. The sample was recruited by diverse PRAs but oversampled white LBQ women; our sample included approximately one-third visible minorities, while almost one-half (47%) of persons in the City of Toronto are visible minorities [53]. Our sample also had higher education levels (almost 65% holding a bachelor's degree or higher) than the general population of Toronto, where 33% hold a bachelor's degree [54]. The online survey method may have contributed to oversampling LBQ women with access to the internet and computer/written literacy; Meyer and Wilson [55] discussed a digital divide in the United States where persons with internet access were more likely to be white. The online survey method and sampling strategy may therefore have introduced selection bias. Second, because of the cross-sectional survey design we could not assess causation. A longitudinal design could be more conducive to understanding the relationships between sexual violence, mental and sexual health outcomes, and sexual stigma.
As we did not measure PTSD, it is possible that higher rates of depression were associated with PTSD stemming from sexual assault. In addition, we did not measure internalized homophobia; this could have enhanced our understanding of sexual stigma correlates of LSA. Third, we only had one sexual violence occurrence question, limiting understanding of the age at and frequency with which sexual violence occurred, the gender of the perpetrator and the number of perpetrators. We did not explore whether sexual violence occurred within a relationship or differentiate between the types of sexual violence. Fourth, our measure of resilient coping may not have adequately captured the complexity of resilience (e.g. coping with trauma, adapting to one's socio-cultural environment), the ability to cope with multiple risks (e.g. stigma, sexual assault) or access to multiple resources [56,57]. Fifth, we used a single-item question regarding substance use that did not differentiate between alcohol and other substances, precluding an in-depth understanding of this phenomenon among participants. Given these limitations, further research could engage more diverse samples of LBQ women (perhaps using both online and offline methods), explore additional resilience and substance use measures and include more detailed questions regarding the types and perpetrators of sexual violence. Despite these limitations our study has several strengths. First, this study contributes to theoretical development by exploring health outcomes and individual, social and structural factors associated with LSA among LBQ women. Our findings support the utilization of a social ecological approach to better contextualize sexual violence and its impacts among LBQ women. Second, to our knowledge this is the first study to demonstrate associations between enacted and perceived sexual stigma with LSA among LBQ women.
Third, this study highlighted the importance of understanding barriers to accessing health care and the need for training HCP to demonstrate trauma-informed, LGBTQ-affirmative practice. Enhanced understanding of correlates of LSA among LBQ women can inform the development of multi-level interventions to promote health and reduce stigma and violence. Our findings suggest that LBQ women who have experienced LSA have unique health needs, as they may be particularly vulnerable to sexual stigma, depression and STI, and report lower self-esteem and self-rated health. These myriad health challenges require a syndemics approach that targets the interaction between these risk factors to allay the health impacts of LSA [9]. On a micro-level, interventions could focus on counseling to build strategies to cope with experiences of sexual violence as well as address self-esteem, internalized sexual stigma, depression and STI prevention [9,29]. Meso-level interventions could foster peer support and solidarity, and address family issues. To illustrate, support groups for survivors of LSA could address the particular needs of LBQ women who may have less social support from family due to sexual stigma. On a macro-level, interventions could focus on enhanced competence among providers across a range of systems (educational, mental and sexual health, social services) to better support sexual minorities and provide referrals to LGBTQ community resources [9,30]. Clinicians should practice from a trauma-informed approach that screens all patients for a history of sexual assault [14,30]. For example, as LBQ women who have experienced LSA may be more likely to seek STI testing, sexual health clinics could screen patients for a history of sexual violence and provide resources for support and counseling.
Practice competence must involve an understanding of the fluidity and multiplicity of sexual identities, such as queer, that are not captured in standard categorizations of lesbian, bisexual, gay or heterosexual identities [28,29]. Programming should also promote empowerment and engage youth and others in advocacy [9]. Community-based approaches and interventions to challenge sexual stigma and sexual violence are urgently needed [14]. Putting into practice strategies that concomitantly build coping, address depression and STI risk, challenge sexual stigma within community norms and healthcare practice, and reduce violence can promote health and wellbeing among sexual minority women. --- Conflict of interest statement None declared.
Stigma, discrimination and violence contribute to health disparities among sexual minorities. Lesbian, bisexual and queer (LBQ) women experience sexual violence at similar or higher rates than heterosexual women. Most research with LBQ women, however, has focused on measuring the prevalence of sexual violence rather than its association with health outcomes and individual, social and structural factors. We conducted a cross-sectional online survey with LBQ women in Toronto, Canada. Multivariate logistic regression analyses were conducted to assess correlates of lifetime sexual assault (LSA). Almost half (42%) of participants (n = 415) reported experiences of LSA. Participants identifying as queer were more likely to have experienced LSA than those identifying as lesbian. When controlling for socio-demographic characteristics, experiencing LSA was associated with higher rates of depression, sexually transmitted infections (STIs), receiving an STI test, belief that healthcare providers were not comfortable with their LBQ sexual orientation, and sexual stigma (overall, perceived and enacted). A history of sexual violence was associated with lower self-rated health, overall social support, family social support and self-esteem. This research highlights the salience of a social ecological framework to inform interventions for health promotion among LBQ women and to challenge sexual stigma and sexual violence.
Race/ethnicity-related stressors (stressors that are a function of the cultural background and context of the individual and that are unique to being a member of a racial/ethnic minority group) can make racial/ethnic minority young adults susceptible to tobacco and marijuana use (Kam, Cleveland, & Hecht, 2010; Williams, Neighbors, & Jackson, 2003). For example, perceived racial/ethnic discrimination is a type of racial/ethnic stress that has been linked to increased smoking (Williams, Neighbors, & Jackson, 2003) and higher odds of lifetime marijuana use (Borrell et al., 2007). The National Conference on Tobacco and Health Disparities highlighted a need for researchers to examine the social and cultural context of tobacco use among racial/ethnic groups (Fagan, King, Lawrence, Petrucci, Robinson, Banks, Marable, & Grana, 2004). Past research has also stressed the problematic perspective of viewing tobacco or marijuana use as an isolated problem, rather than as part of a larger, more complicated picture that includes social and cultural components (Duff, 2003; Lunnay, Ward, & Borlagdan, 2011; Spooner, Hall, & Lynskey, 2001). Additionally, health promotion researchers note that culturally specific interventions are important in addressing smoking-related health disparities. Culturally specific interventions refer to the degree to which ethnicity, attitudinal and behavioral norms, shared beliefs, history, and environment are integrated into the intervention (Resnicow, Baranowski, Ahluwalia, & Braithwaite, 1999). For example, Pathways to Freedom is a smoking cessation guide developed for African Americans that incorporates known smoking patterns of African Americans, religious quotes, and pictures of African Americans, and emphasizes family and community (Robinson, Orleans, James, & Sutton, 1992).
Definitions of culture vary, but for the context of this paper, we focus on race/ethnicity and the shared characteristics within these groups, which comprise religion, language, and nationality. The historical experiences of different racial/ethnic groups create unique physiological and social characteristics that can include lifestyle and value systems (Hays & Erford, 2014; Napier et al., 2014). Past research examining cultural variables has primarily focused on racial/ethnic minority individuals in relation to the dominant culture, or mainstream U.S. culture (i.e., discrimination, racism, acculturative stress); however, an individual can also experience stress emanating from tensions within their own racial/ethnic group. This phenomenon, known as intragroup marginalization, refers to the perceived interpersonal distancing by members of one's racial/ethnic group when the individual diverges from racial/ethnic norms (Castillo, Conoley, Brossart, & Quiros, 2007). Deviating from racial/ethnic norms can create a backlash whereby group members reject or distance themselves from the individual. The interpersonal distancing occurring from intragroup marginalization can be viewed as a social sanction placed on the individual and can take the form of teasing and criticism. Intragroup marginalization is based on social identity theory (Tajfel & Turner, 1986), which suggests that group members marginalize in-group members who do not conform to group standards in order to maintain the uniqueness and stability of the group (Abrams, Marques, Bown, & Henson, 2000). Group members displaying behaviors or attitudes that conflict with group norms can be perceived as threatening the distinctiveness of the group and can then be marginalized in order to preserve the group's distinctiveness. Intragroup marginalization may be experienced by any racial/ethnic group.
Additionally, family, friends, and other racial/ethnic members in the community can all impose group norms and engage in the process of intragroup marginalization. Limited research suggests intragroup marginalization may lead to higher levels of acculturative stress, or stress associated with adapting to a new culture, and increased alcohol use among young adults (Castillo, Cano, Chen, Blucker, & Olds, 2008; Castillo, Zahn, & Cano, 2012; Llamas & Ramos-Sanchez, 2013; Llamas & Morgan Consoli, 2012). Past research, while not directly investigating intragroup marginalization, has made potential links between familial and peer stress and tobacco and marijuana use (e.g., Wills, Knight, Pagano, & Sargent, 2015; Zapata Roblyer, Grzywacz, Cervantes, & Merten, 2016; Vitaro, Wanner, Brendgen, Gosselin, & Gendreau, 2004). Foster and Spencer (2013) suggest that marijuana and other drug use may underlie a deeper need for connection in the absence of close familial connections for marginalized young adults, or young adults who have been rejected by their families. These young adults may be seeking opportunities to connect and create a sense of belonging, and marijuana use can play a common and significant social role in building supportive and caring relationships (Foster & Spencer, 2013). Researchers further contend that investigation is needed to better understand how culture impacts these young adults' drug use (Foster & Spencer, 2013). Currently, intragroup marginalization is measured using the Intragroup Marginalization Inventory (Castillo et al., 2007), which is comprised of three separate scales measuring perceived intragroup marginalization from the heritage culture family (12 items), friends (17 items), and other members of the individual's ethnic group (13 items). The inventory comprises 42 items rated on a 7-point Likert scale (never/does not apply [1] to extremely often [7]).
The scale items were developed so that the scale could be tailored to any ethnic group (e.g., 'Chinese friends tell me that I am not really Chinese because I don't act Chinese'). While the scale is comprehensive, the length of the survey can make it difficult for researchers to distribute the entire inventory, with many opting to use only one scale in their research (e.g., Castillo et al., 2008; Castillo, Zahn, & Cano, 2012; Llamas & Morgan Consoli, 2012; Llamas & Ramos-Sanchez, 2013). In practice this has limited studies of intragroup marginalization to focus either on family members or friends, rather than examining both. Due to the length of the survey, the feasibility of using the measure in large-scale studies or with large sample sizes has been limited. Most studies using the inventory have had limited sample sizes focused on one racial/ethnic group (under 400 participants; e.g., Castillo et al., 2007; Llamas & Morgan Consoli, 2012; Llamas & Ramos-Sanchez, 2013). Greater sample sizes allow for segmentation of the data across demographic characteristics (i.e., race/ethnicity, gender, etc.), reduce the margin of error, and provide the statistical power to conduct more advanced analyses. In addition, some items may have less applicability for certain groups, such as items related to linguistic expectations (e.g., 'Family members criticize me because I don't speak my ethnic group's language.'). Lastly, the inventory was developed and validated with a college population and has not been validated with noncollege populations (Castillo et al., 2007). Tobacco and marijuana use are problematic for all young adults, and intragroup marginalization may be an important factor in understanding tobacco and marijuana disparities in this population as a whole. Yet, without an efficient means to assess intragroup marginalization, this important construct will continue to remain absent from health disparities research.
--- Current Study Limited research addresses whether shared cultural values or feelings of marginalization may help explain high rates of tobacco and marijuana use among young adults (Chen & Jacobson, 2012; Foster & Spencer, 2013). The purpose of this study is to provide a psychometrically sound abbreviated measure of intragroup marginalization. Such a measure would have great utility when survey length is of concern and the survey needs to be distributed across diverse racial/ethnic groups. This study tests and validates an abbreviated measure of the Intragroup Marginalization Inventory, which we refer to as the IMI-6. The IMI-6 consists of six items that measure perceived intragroup marginalization from the heritage culture family and friends. The items of the IMI-6 are hypothesized to have content validity, as they were taken directly from the existing scale, which has already been found to have content validity, and were selected in consultation with the survey developer and by the primary author, whose research focuses on racial/ethnic minority issues and intragroup marginalization specifically. We hypothesize that the IMI-6 also has construct validity, which we establish in this study through exploratory factor analyses. In addition to testing the feasibility of using this abbreviated measure, a primary aim of this study was to apply the IMI-6 and examine relationships between intragroup marginalization and tobacco and marijuana use. We hypothesize that participants reporting more experiences of intragroup marginalization would be more likely to use cigarettes, e-cigarettes, cigars, blunts, hookahs, and marijuana. --- Method Item selection The original Intragroup Marginalization Inventory consists of three scales: Family, Friend, and Ethnic Group.
The scales have a common factor structure, and while there are slight differences in items and factor names, they fall into five general factors: Homeostatic Pressure (pressure to not change), Linguistic Expectations (expectations that one speak the native language), Accusations of Assimilation (accusations of adopting the values and beliefs of White American culture), Accusations of Differentiation (accusations of looking or acting different), and Discrepant Values (values are too different from the group). The IMI-6 consists of six items that measure perceived intragroup marginalization from the heritage culture family and friends. The original scale developer provided consultation during item selection, ultimately reviewing and approving the final six items. Items were selected based on the researchers' and developer's experience with the survey as well as on which items had the greatest applicability to a diverse pool of respondents and were broad enough to remain appropriate for different racial/ethnic groups. Items from the Accusations of Assimilation and Linguistic Expectations factors were not included as they contained items that were tailored to specific racial/ethnic groups (e.g., an item from Accusations of Assimilation was relevant only to Latina/os: "Friends from my ethnic group tell me that I am brown on the outside, but white on the inside"). Items from Homeostatic Pressure were similar to items from the Accusations of Differentiation factor; however, items from Homeostatic Pressure focused solely on the individual's behavior, while the Accusations of Differentiation factor included items assessing both behavior and appearance. The selected items were taken from the Discrepant Values factor and the Accusations of Differentiation factor of the full inventory (see Table 1). Two items were taken from the Discrepant Values factor assessing whether family and friends have the same hopes and dreams as the respondent.
Four items were taken from the Accusations of Differentiation factor assessing whether family and friends accuse the respondent of not really being a member of their ethnic group because s/he does not look like or act like members of the group. Responses were rated on a 7-point Likert scale, ranging from 'never/does not apply' (1) to 'extremely often' (7). Items 3 and 6 were reverse coded, so that higher numbers represent greater experiences of intragroup marginalization. Items were piloted with 45 young adults (ages 18-26) from the San Francisco Bay Area. Participants were recruited from local bars on a Thursday, Friday and Saturday evening to be interviewed that same weekend and received a $75 incentive if they participated in a one-hour focus group, completed the pilot questionnaire, and engaged in an interview with project staff to share feedback about the questionnaire. Individuals reviewed the items for clarity and representation of their experiences. No items were altered, and participant feedback suggested that the selected items accurately captured participant experiences. --- Participants and procedure Sample-This study used data we collected in 2014 as part of the San Francisco Bay Area Young Adult Health Survey, a probabilistic multi-mode household survey of 18-26-year-old young adults, stratified by race/ethnicity (Holmes, Popova, & Ling, 2016). The study was conducted in Alameda and San Francisco Counties in California. We identified potential respondent households using address lists from Marketing Systems Group (MSG; sample 1) in which there was an approximately 30-40% chance that an eligible young adult resided at a selected address (n = 15,000 addresses). We used 2009-2013 American Community Survey and 2010 decennial census data in a multistage sampling design to identify Census Block Groups and then Census Blocks in which at least 15% of residents were Latino or non-Hispanic Black adults in the eligible age range.
Ultimately, we randomly selected 61 blocks, then households within these blocks (n = 1,636 housing units), then young adults within eligible households (sample 2). We oversampled these blocks because young nonwhite urban adults are among the most difficult populations to survey (Tourangeau, Edwards, Johnson, Wolter, & Bates, 2014), and we wished to ensure appropriate population representation. We surveyed in three stages and utilized four modes of contact (mail, web, telephone, face-to-face). In the first stage we conducted a series of three mailings with sample 1 households; respondents returned paper questionnaires or completed surveys online using Qualtrics. In the second stage we telephoned those who did not respond to mail, and lastly we performed face-to-face interviews with a random selection of the remaining nonresponders (n ≈ 1,250) from sample 1 as well as all of the households identified in sample 2. Potential sample 2 respondents did not participate in the mail or telephone phases of the survey; each of these households was visited in person. The final sample consisted of 1,363 young adult participants, for a response rate of approximately 30%, with race, sex and age distributions closely reflecting those of the young adult population overall in the two counties surveyed. Ethnicity/race was measured using items from the Census Bureau's American Community Survey instrument, with participants first asked to identify whether they were of Hispanic, Latino, or Spanish origin and then to select their race from 14 categories. Race/ethnicity was then collapsed into mutually exclusive categories including Hispanic, non-Hispanic White, non-Hispanic Black, non-Hispanic Asian/Pacific Islander and Mixed Race. Those who selected more than one race/ethnic category (e.g., Black and Latino; Japanese and White, etc.) were categorized as Mixed Race. We constructed individual sample and post-stratification adjustment weights during data reduction (Holmes, Popova, & Ling, 2016).
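The post-stratification adjustment mentioned above can be sketched as a rescaling of each stratum's sample share to its known population share. The strata and shares below are invented placeholders, not the survey's actual weighting cells (which are documented in Holmes, Popova, & Ling, 2016):

```python
# Sketch: post-stratification weights = population share / sample share.
# Strata and shares below are invented; they are not the survey's actual cells.

def poststratification_weights(sample_counts, population_props):
    """Return a weight per stratum that rescales the sample to the population."""
    n_total = sum(sample_counts.values())
    return {s: population_props[s] / (n / n_total) for s, n in sample_counts.items()}

sample = {"Latino": 400, "Black": 200, "API": 300, "Mixed": 100}
weights = poststratification_weights(
    sample, {"Latino": 0.25, "Black": 0.10, "API": 0.45, "Mixed": 0.20}
)
# Applying the weights leaves the total sample size unchanged while moving
# each stratum to its population share (oversampled strata get weights < 1).
```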
--- Measures Outcomes-We evaluated associations between intragroup marginalization and current use of cigarettes, cigars, blunts (hollowed-out cigars filled with marijuana), hookah, e-cigarettes and marijuana. Each outcome measure was dichotomized and set equal to '1' if a respondent reported using the product in question at least once in the past 30 days. --- Main Explanatory Variables Covariates: Age in years since birth was measured continuously (18-26), sex was measured dichotomously with male set equal to '1' and female to '0', and maternal education was set equal to '1' if the respondent's mother had completed at least a bachelor's degree and '0' otherwise. Race/ethnicity was measured as an indicator variable with mutually exclusive categories including Hispanic, non-Hispanic Black, non-Hispanic Asian/Pacific Islander and Mixed Race (those who identified as two or more races). We restricted our analysis to young adults in these categories, excluding non-Hispanic Whites, as the Intragroup Marginalization Inventory has only been used and validated among nonwhite populations previously and endorsement of intragroup marginalization was not expected among this population (Castillo et al., 2007). The resulting number of observations was 1,058, or 78% of the total sample. --- Statistical Analysis To examine the items in the abbreviated measure we conducted an exploratory factor analysis (EFA). Due to the exploratory nature of our analysis we chose to conduct an EFA rather than a confirmatory factor analysis (CFA). CFA is useful to extract latent factors from a set of items based on an a priori theory; this requires a strong empirical or conceptual foundation and a pre-specification of the number of factors and the pattern of factor loadings. As we are using these items in a relatively innovative fashion, we wanted to determine, without specifying a structure, how the items were related. We conducted an EFA using an oblique geomin rotation (Fabrigar et al.
1999) in Mplus. EFA methods typically follow 'rules of thumb', with factor loading cutoff criteria ranging from .30 to .55, to establish a solid factor loading coefficient (Swisher, Beckstead, & Bebeau, 2004); we used a cutoff value of .55 in this study. The number of factors retained was based on eigenvalues > 1. Internal consistency was examined by computing Cronbach's α for the entire measure and each subscale. Second, we fit multinomial logistic regression models using SAS SURVEYLOGISTIC (SAS, 2008) to account for the complex survey design. This was repeated with six dichotomous outcomes (cigarette, e-cigarette, cigar, blunt/wrap, hookah and marijuana use) in two steps: 1) unadjusted analysis (factors were the sole predictors), and 2) controlling for race/ethnicity, age, sex and mother's highest education. --- Results --- Sample information Weighted percentages (or means) and standard error of the percent (or standard error of the mean) are presented in Table 2. Approximately one-third of the sample retained for analysis was Latino, 40% was non-Hispanic Asian/Pacific Islander, 15% was non-Hispanic Black and the remaining 10% reported being of two or more races. Close to half of all participants endorsed feeling marginalized by friends because they did not look (43%) or act (49%) like members of their racial/ethnic group. Approximately a quarter of participants endorsed feeling marginalized by family members because they did not look (23%) or act (27%) like members of their racial/ethnic group. Most participants reported having similar hopes and dreams as their friends (95%) and family (84%). --- Exploratory Factor Analysis and Internal Consistency The EFA indicated two factors (eigenvalue factor 1 = 2.970, eigenvalue factor 2 = 1.591, and eigenvalue factor 3 = 0.688). As shown in Table 3, every item loaded above 0.60 on at least one factor.
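The reliability and factor-retention steps described above (reverse coding, Cronbach's α, eigenvalues > 1) can be sketched with numpy on simulated Likert responses. This illustrates only the eigenvalue rule, not Mplus's geomin-rotated EFA, and the data-generating values are invented to mimic the IMI-6's four-plus-two item structure rather than the actual survey data:

```python
# Sketch: Cronbach's alpha and the eigenvalue > 1 (Kaiser) retention rule on
# simulated Likert data. Values are invented; only the IMI-6's four-plus-two
# item structure is mirrored here, not the real responses.
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) array."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

def kaiser_n_factors(items):
    """Count eigenvalues of the item correlation matrix that exceed 1."""
    corr = np.corrcoef(items, rowvar=False)
    return int((np.linalg.eigvalsh(corr) > 1).sum())

rng = np.random.default_rng(0)
n = 300
f1, f2 = rng.normal(size=n), rng.normal(size=n)  # two simulated latent factors
raw = np.column_stack([f1, f1, -f2, f1, f1, -f2]) + rng.normal(scale=0.6, size=(n, 6))
items = np.clip(np.round(4 + 1.2 * raw), 1, 7)   # map onto the 1-7 Likert range
items[:, [2, 5]] = 8 - items[:, [2, 5]]          # reverse-code items 3 and 6

alpha = cronbach_alpha(items)
n_factors = kaiser_n_factors(items)              # two factors should be retained
```

With two orthogonal simulated factors, the Kaiser rule retains two factors, paralleling the two-factor solution the EFA found for the IMI-6.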
Factor 1 might be described as looking or acting like your ethnic group and was composed of the items 'Friends and peers in my ethnic group tell me I am not really a member of my ethnic group because I don't look like my ethnic group,' 'Friends and peers in my ethnic group tell me I am not really a member of my ethnic group because I don't act like my ethnic group,' 'Family members tell me I am not really a member of my ethnic group because I don't look like my ethnic group,' and 'Family members tell me I am not really a member of my ethnic group because I don't act like my ethnic group.' Factor 2 appears to represent hopes and dreams and was composed of the two items 'Friends and peers in my ethnic group have the same hopes and dreams as me,' and 'My family has the same hopes and dreams as me.' The Cronbach's α for the entire IMI-6 was 0.66. Cronbach's αs were computed for each subscale and found to be 0.81 for Factor 1 and 0.71 for Factor 2. Mean IMI factor scores by race/ethnicity are presented in Table 4. A regression analysis was conducted to determine mean differences in IMI factor scores by race/ethnicity (Table 5). A significant difference in mean scores was found by race/ethnicity for Factor 1, F(8, 1050) = 20.02, p < .001, R² = 0.04. Latinos had greater mean scores for Factor 1 than non-Hispanic Blacks [t(8) = 4.43, p < .01] and non-Hispanic Asian/Pacific Islanders [t(8) = 3.81, p < .01]. Mixed Race individuals had greater mean scores for Factor 1 than non-Hispanic Blacks [t(8) = -5.00, p < .01] and non-Hispanic Asian/Pacific Islanders [t(8) = -2.99, p < .05]. No significant difference in mean scores was found by race/ethnicity for Factor 2, F(8, 1050) = 1.81, p = 0.22, R² = 0.005. --- IMI-Discriminant Validity Non-Hispanic Whites were not expected to report intragroup marginalization and were not included in the factor and regression analyses, as the full inventory has only been used and validated among nonwhite populations.
To demonstrate the discriminant validity of the measure, two t-tests were conducted comparing non-Hispanic Whites to the rest of the sample on the two factors. Non-Hispanic Whites had significantly lower scores on both Factor 1 (1.26 vs. 1.92, p < .0001) and Factor 2 (3.26 vs. 3.71, p < .0001). --- IMI-6 in Unadjusted Logistic Regressions Results varied by outcome such that no significant relationship was found between the two factors and cigarette use, e-cigarette use, or blunt use. However, Factor 1 was related to hookah, marijuana, and cigar use. Higher scores on Factor 1 were related to higher odds of hookah use (OR = 1.26, 95% CI = 1.07, 1.48) and marijuana use (OR = 1.37, 95% CI = 1.05, 1.79), but lower odds of cigar use (OR = 0.81, 95% CI = 0.70, 0.93). Factor 2 was related to lower odds of hookah use (OR = 0.85, 95% CI = 0.72, 0.99). --- IMI-6 in Multinomial Logistic Regressions When controlling for race/ethnicity, age, sex, and mother's education, the results were consistent with the unadjusted models, except that Factors 1 and 2 were no longer associated with hookah use. No significant relationships were found for cigarettes, e-cigarettes, or blunts. However, the associations with marijuana and cigar use were robust: Factor 1 was associated with increased odds of marijuana use (OR = 1.34, 95% CI = 1.02, 1.76) and lower odds of cigar use (OR = 0.79, 95% CI = 0.71, 0.87). --- Discussion Results support the use of an abbreviated measure of intragroup marginalization. The IMI-6 was found to be psychometrically sound and representative of the full construct of intragroup marginalization as theorized by Castillo and colleagues (2007). Two factors emerged from the abbreviated scale.
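The odds ratios above come from survey-weighted logistic models fit with SAS SURVEYLOGISTIC; as a much simpler, unweighted illustration of how an odds ratio relates to its Wald 95% confidence interval, a 2×2 version can be sketched as follows (the counts are hypothetical, not from the study):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
         a = exposed users,    b = exposed non-users,
         c = unexposed users,  d = unexposed non-users."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts, for illustration only
or_, lo, hi = odds_ratio_ci(10, 20, 5, 40)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # OR = 4.0, CI roughly (1.2, 13.3)
```

A CI that excludes 1 (as here) corresponds to a statistically significant association, which is how intervals such as OR = 1.26, 95% CI = 1.07, 1.48 are read above.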
The first factor encompassed items related to belonging and membership, capturing whether individuals felt marginalized due to deviations in their physical appearance or behaviors (i.e., hobbies, interests). The second factor encompassed whether the individual shared similar hopes and dreams with their families and friends. These factors reflected similarly identified factors from the validation study of the full inventory scales (Castillo et al., 2007), suggesting good agreement between the original measure and the abbreviated version. Examining racial/ethnic differences in mean scores across factors demonstrated significant differences in Factor 1. Latinos and Mixed Race young adults experienced greater intragroup marginalization related to not looking or acting like members of their racial/ethnic group compared to non-Hispanic Blacks and Asian Americans/Pacific Islanders. The full Intragroup Marginalization Inventory (Castillo et al., 2007) was developed with a diverse sample (Asian American, Black/African American, Latino, Native American, and Biracial), and past research has explored intragroup marginalization with African Americans (e.g., Thompson et al., 2010), Asian Americans (e.g., Castillo et al., 2012), and Latinos (e.g., Castillo et al., 2008); however, specific racial/ethnic differences have not been examined. Latinos may be particularly susceptible to intragroup marginalization given the heterogeneity among Latinos in terms of national origin, physical appearance, political ideology, immigration status, and class status (Fry, 2002; Johnson, Farrell, & Guinn, 1997). In particular, Latinos can encompass different racial groups (i.e., Afro-Latino, Asian Latino, etc.), which can contribute to differences in appearance, one of the concepts captured in Factor 1.
Physical appearance can limit the extent to which people are accepted as belonging to a certain racial/ethnic group, which is especially relevant for multiracial individuals, whose physical appearance may not align with any specific ethnic/racial group (AhnAllen, Suyemoto, & Carter, 2006). Additionally, multiracial individuals describe feeling marginalized from peers rooted in having a different appearance, culture, and/or beliefs than their peers (Jackson, 2010), which may explain the higher rates of intragroup marginalization observed in this study. Research examining young adult tobacco and marijuana use often relies on college samples, thereby neglecting individuals in this age group who may be at greater risk of substance use (e.g., Moran, Wechsler, & Rigotti, 2004; Morrell, Cohen, Bacchi, & West, 2005; National Cancer Institute, 2008; Rigotti, Lee, & Wechsler, 2000). The Intragroup Marginalization Inventory, which may have particular utility with young adults who are negotiating the stresses of transitioning to adulthood, was also developed and tested with a college-only sample (Ferenczi & Marshall, 2014). This study validates an abbreviated version of the IMI, the IMI-6, which was developed to capture tensions experienced within racial/ethnic groups. We tested the IMI-6 in a large representative household sample of racially/ethnically diverse young adults in the San Francisco Bay Area in order to better understand the impact of cultural stressors on tobacco and marijuana use among young adults in general. When controlling for demographic characteristics, Factor 1 (membership) was associated with greater marijuana use. Participants who felt that they did not look or act like members of their racial/ethnic group demonstrated increased odds of marijuana use. Young adults who feel marginalized by family members or friends may seek a way to belong and connect with other young adults, and marijuana use may be a way to find belonging within a group.
This parallels research suggesting that the decision to engage in marijuana use comes from an internal need for emotional connection and friendship (Pilkington, 2007) and serves as an opportunity to connect and create a sense of belonging (Foster & Spencer, 2013). Other research has identified marijuana as a more acceptable substance, viewed as superior and safer than other substances (Foster & Spencer, 2013). Marijuana may be the substance of choice to build connection with others and combat feelings of intragroup marginalization. If marijuana use is perceived as a means for social connection, it may help to explain the findings between Factor 1 (membership) and cigar use. When controlling for demographic characteristics, participants who felt as though they did not look or act like members of their racial/ethnic group had decreased odds of cigar use. Cigars were the least frequently used product within the sample retained for analysis. National averages parallel this trend, with current cigar use (10%) having lower prevalence than cigarettes (31%) and marijuana (19%) among young adults (SAMHSA, 2013). If marginalized young adults seek to connect with others via substance use, cigar use may not be the best mechanism by which to connect with others, and therefore they may be less likely to use cigars. The combination of low rates of use and potential lack of opportunity to build social connection may help explain the decreased odds of cigar use. This finding is unexpected, and further research is needed to better understand the relationship between intragroup marginalization and cigar use. Similarly, cigarette, e-cigarette, blunt, and hookah use had lower rates compared to marijuana use. While unexpected, cigarette, e-cigarette, and blunt use were not associated with experiences of intragroup marginalization. This may be due in part to the lower rates of use.
It is worth noting that blunt use was examined independently, although it is often associated with marijuana use, and in this sample most blunt users also reported concurrent marijuana use (104 of 109 blunt users). Additionally, the use of these substances may be less tied to social use, and therefore their use may not be linked to developing ways of belonging. Past research has differentiated between 'social smoking' and smoking alone (Moran, Wechsler, & Rigotti, 2004). Studies have suggested that young adults not in college may be less likely to be social smokers (Johnston, O'Malley, & Bachman, 2002) and that social smoking may not be prevalent across racial/ethnic groups (Moran, Wechsler, & Rigotti, 2004). This study did not differentiate between social smoking and smoking alone, and this may be another important factor in better understanding the role of intragroup marginalization in tobacco use. Intragroup marginalization was associated with higher hookah use; however, when controlling for race/ethnicity this association was no longer significant due to racial/ethnic differences. While hookah use has been noted as a means for socializing (Braun, Glassman, Wohlwend, Whewell, & Reindl, 2012) and hookah is often smoked in a group setting (Ward, Eissenberg, Gray, Srinivas, Wilson, & Maziak, 2007), this may be population specific. Hookah use is common in Middle Eastern countries and has strong cultural underpinnings (Jamil, Elsouhag, Hiller, Arnetz, & Arnetz, 2010). Middle Eastern young adults experiencing intragroup marginalization may use hookah as a means to connect and fit in within their cultural group. Furthermore, African Americans have been found to have lower rates of use compared to other ethnic/racial groups (Barnett, Smith, He, Soule, Curbow, Tomar, & McCarty, 2013; Primack, Fertman, Rice, Adachi-Mejia, & Fine, 2010). Additional research may be needed to further investigate the differential impacts of intragroup marginalization on hookah use by ethnic/racial group.
Factor 1 (membership) captures the challenges young adults face when they feel they do not fit in with members of their ethnic/racial group. Young adults experiencing this may desire to find ways to gain membership and connection with others. While Factor 1 focuses on difficulties in belonging and membership, Factor 2 centered on shared values and dreams. Feeling marginalized due to a lack of similar hopes and dreams was not associated with tobacco or marijuana use. This finding supports the theory that young adults use these substances as a means of building belonging and connection (Foster & Spencer, 2013). While having dissimilar hopes and dreams may be stressful, it may not necessarily indicate that one lacks connection to others. Given these findings, the scale could potentially be further abbreviated by dropping Factor 2, particularly when examining tobacco and marijuana use. Future research may be needed to further investigate the impacts of Factor 2 on other health outcomes. Despite the strengths of this research, there are important limitations to note. This study focused on young adults in the San Francisco Bay Area, and findings may not be generalizable to all young adults. However, by using population-based sampling, we were able to obtain a representative sample; past research has noted the difficulty of reaching urban young adults (Holmes et al., 2016). This study also utilized a cross-sectional design, preventing any potential inference concerning causality. Tobacco and marijuana use were measured using self-report data, and use was not biochemically verified. While past research has demonstrated the reliability and validity of self-reported smoking in anonymous surveys with young adults (Ramo, Hall, & Prochaska, 2011), this validation has not extended to non-cigarette tobacco products; this may be a potential area for future research.
This study examined intragroup marginalization among Mixed Race young adults, a population often overlooked in intragroup marginalization studies. Mixed Race participants were not required to identify which group served as the primary source of intragroup marginalization. However, it is possible that different cultural norms around tobacco and marijuana use could influence whether intragroup marginalization impacted behavior. Oyserman and colleagues (2007) have demonstrated the identity-based motivation of health behaviors, with racial/ethnic minorities more likely to identify unhealthy behaviors with their group. Additional research may be needed with Mixed Race individuals to better understand how different groups may impact the relationships between intragroup marginalization and tobacco use. A final limitation is that we did not directly assess reasons or motivations for use. Future qualitative research is needed to explicitly examine motivations for use as a result of experiences of intragroup marginalization. This study provides the first quantitative examination of intragroup marginalization with tobacco and marijuana use. Results respond to recent calls to better understand motivations for young adult marijuana use (Holmes et al., 2016), with findings demonstrating an association between intragroup marginalization and increased marijuana use. These findings are especially relevant given the changing climate regarding the legalization of marijuana, with California recently voting to legalize marijuana (NORML, 2016). Results reaffirm existing arguments that drug policy must attend to the social and cultural contexts of use (Duff, Moore, Johnston, & Goren, 2007; Foster & Spencer, 2013). Additionally, findings respond to existing calls in the literature to better understand how culture impacts use (Foster & Spencer, 2013).
Past intervention research has highlighted the importance of attending to peer smoking behavior and norms, providing further support for the need to attend to social dynamics when addressing young adult tobacco and marijuana use (Kalkhoran, Lisha, Neilands, Jordan, & Ling, 2016). Additional research is needed to further investigate the relationship between intragroup marginalization and marijuana use, which can help in the tailoring and development of targeted health education programs. --- Table 5. Mean differences of IMI factor scores by race/ethnicity
Tobacco and marijuana use among U.S. young adults is a top public health concern, and racial/ethnic minorities may be at particular risk. Past research examining cultural variables has focused on the individual in relation to the mainstream U.S. culture; however, an individual can also experience within-group stress, or intragroup marginalization. We used the 2014 San Francisco Bay Area Young Adult Health Survey to validate an abbreviated measure of intragroup marginalization and identify associations between intragroup marginalization and tobacco and marijuana use among ethnic minority young adults (N = 1,058). Exploratory Factor Analysis was conducted to identify factors within the abbreviated scale, and logistic regressions were conducted to examine relationships between intragroup marginalization and tobacco and marijuana use. Two factors emerged from the abbreviated scale. The first factor encompassed items related to belonging and membership, capturing whether individuals experienced marginalization due to not fitting in because of physical appearance or behavior. The second factor encompassed whether individuals shared similar hopes and dreams with their friends and family members. Factor 1 (membership) was associated with increased odds of marijuana use (OR = 1.34, p < .05) and lower odds of using cigars (OR = 0.79, p < .05), controlling for sociodemographic factors. Results suggest that young adults may use marijuana as a means to build connection and belonging to cope with feeling marginalized. Health education programs focused on ethnic minority young adults are needed to help them effectively cope with intragroup marginalization without resorting to marijuana use. Keywords: ethnic minority; young adults; intragroup marginalization; tobacco; marijuana --- Tobacco and marijuana use among U.S. young adults is a top public health concern (Chen & Jacobson, 2012); young adults use both substances at higher rates than any other age group
Introduction
To cope with the ongoing threat of infectious disease, it is common for governments to implement public health guidelines aimed at preventing or limiting community spread of a pathogen, many of which focus on health protective behaviors (e.g., social distancing, wearing masks). Although adherence to such guidelines is expected to have both societal and individual benefits, evidence suggests that sizable subsets of the population fail to adhere to recommended health protective behaviors during pandemics, often with deleterious consequences (Breitnauer, 2020; Taylor & Asmundson, 2021). Thus, it is imperative to identify factors that relate to lower adherence to recommended health protective behaviors in the context of pandemics. Given that substance use is motivated by desires to increase social affiliation (Votaw & Witkiewitz, 2021), individuals may be more willing to violate social distancing recommendations to meet these needs. Substance use may also reduce risk perceptions of disease (Maisto et al., 2002), thereby reducing motivation to adhere to social distancing recommendations. Finally, many substances (e.g., alcohol) have a disinhibiting effect that may interfere with decision making and increase the likelihood of noncompliance with social distancing and other health protective behaviors (Zvolensky et al., 2020). Notably, one factor that may account for reduced adherence to social distancing recommendations among individuals using substances during this pandemic is low self-efficacy for adhering to these recommendations. Defined as beliefs in one's own ability to engage in a particular behavior, self-efficacy is theorized to play a key role in the initiation of and engagement in subsequent behaviors (Bandura, 1977) and has been identified as a primary factor influencing engagement in protective health behaviors within prominent models of health behavior (Janz & Becker, 1984; Rogers, 1975).
Thus, consistent with these theories, perceptions of one's ability to adhere to social distancing recommendations would be expected to influence both actual and intended engagement in these behaviors. With regard to the relation of substance use to self-efficacy for adhering to social distancing recommendations, studies have consistently shown that greater substance use frequency is associated with lower self-efficacy in general and for specific health protective behaviors (Kadden & Litt, 2011; Oei et al., 2007). Substance use would also be expected to decrease self-efficacy for adhering to social distancing recommendations in particular. Specifically, the need for face-to-face interactions to obtain certain substances, as well as heightened urges to use substances in social contexts (e.g., due to social affiliation motives), may decrease expectations that one is capable of adherence to social distancing. Likewise, repeated experiences with violating social distancing recommendations due to the disinhibiting effects of substances would also be expected to reduce self-efficacy for social distancing. Thus, this study examined the explanatory role of social distancing self-efficacy in the relation of substance use frequency to adherence to social distancing recommendations and social distancing intentions during the early stages of the COVID-19 pandemic. To this end, we examined the prospective relations of substance use frequency at the initial assessment (which coincided with the onset of most stay-at-home orders in the U.S.) to both adherence to social distancing recommendations one month later and intentions to adhere to these recommendations in the following two weeks, as well as the role of social distancing self-efficacy in these relations. We hypothesized that baseline substance use frequency would be negatively associated with social distancing self-efficacy, adherence to social distancing recommendations, and social distancing intentions one month later.
In addition, we hypothesized that social distancing self-efficacy would account for significant variance in the relations of baseline substance use frequency to both social distancing behaviors and intentions one month later. --- Method --- Participants Participants included a U.S. nationwide community sample of 377 adults who completed a prospective online study of health and coping in response to COVID-19 through an internet-based platform (Amazon's Mechanical Turk; MTurk). Participants completed an initial assessment from March 27, 2020 through April 5, 2020 (corresponding to the onset of stay-at-home orders in most states), and a follow-up assessment approximately one month later between April 27, 2020 and May 21, 2020 (when strict stay-at-home orders began to ease and were replaced with social distancing orders and recommendations). The study was posted to MTurk via CloudResearch. For the present study, inclusion criteria consisted of: (1) U.S. resident, (2) ≥ 95% approval rating as an MTurk worker, (3) completion of ≥ 5,000 previous MTurk tasks, and (4) valid responses on questionnaires (assessed via multiple attention check items). Participants (52.3% female; 47.8% male) ranged in age from 20 to 74 years (M = 41.29, SD = 12.01) and represented 44 states in the U.S. Most participants identified as White (84.9%), followed by Black/African American (9.3%), Asian/Asian-American (4.3%), and Latinx (1.9%). At the time of the initial assessment, 10.9% of participants had graduated from high school or obtained a GED, 38.2% had completed some college or technical school, 41.4% had graduated from college, and 9.1% had advanced graduate/professional degrees. With regard to annual household income, 31.6% of participants reported an income of < $35,000, 31.6% reported an income of $35,000 to $64,999, and 36.9% reported an income of > $65,000. --- Measures Substance use frequency.
The Drug Use Questionnaire (Hien & First, 1991) was used to assess baseline substance use frequency at the initial assessment. Participants indicated the frequency with which they used 12 substances (i.e., marijuana, alcohol, heroin, PCP, ecstasy, cocaine/crack, stimulants, sedatives, hallucinogens, inhalants, [misused] prescription drugs, and crystal meth) during the past month on a 5-point Likert-type scale (0 = Never; 4 = 4 or more times per week). The DUQ demonstrates good construct and convergent validity (Lejuez et al., 2007). Items were summed to create a total score of baseline substance use frequency (α = 0.76). Social distancing self-efficacy. Social distancing self-efficacy at one-month follow-up was assessed via a 3-item measure created for this study (derived from Brafford & Beck, 1991). Participants were asked to rate three items assessing their perceived ability to follow U.S. social distancing recommendations on a 5-point Likert-type scale (1 = Not able at all; 5 = Completely able). Items were summed to create a total score of social distancing self-efficacy (α = 0.83). Adherence to social distancing recommendations. Adherence to social distancing recommendations at one-month follow-up was assessed using a 5-item self-report measure created for this study and derived from the theory of planned behavior (Ajzen, 1991). Participants were asked to report on engagement in recommended social distancing behaviors (e.g., avoiding large gatherings, staying 6 feet away from others) over the past two weeks on a 5-point Likert-type scale (1 = Never; 5 = Always). Items were summed to create an overall index of adherence to social distancing recommendations at follow-up (α = 0.88). Intentions to adhere to social distancing recommendations in the future.
Intentions to adhere to social distancing recommendations in the two weeks after the one-month follow-up were assessed via a 5-item measure created for this study and derived from the theory of planned behavior (Ajzen, 1991). Participants were asked to report their intentions to engage in the aforementioned recommended social distancing behaviors over the next two weeks on a 5-point Likert-type scale (1 = Intend to never do the behavior; 5 = Intend to always do the behavior). Items were summed to create a total score representing social distancing intentions (α = 0.87). Clinical covariates. The Depression Anxiety Stress Scales-21 (DASS-21; Lovibond & Lovibond, 1995) was used to assess symptoms of depression and anxiety at the initial assessment (αs ≥ 0.89 in this sample). Participants rate items on a 4-point Likert-type scale. The DASS-21 has adequate reliability and convergent and discriminant validity (Lovibond & Lovibond, 1995). --- Procedures All procedures received approval from the university's Institutional Review Board. To ensure the study was not being completed by a bot, participants responded to a Completely Automated Public Turing test to tell Computers and Humans Apart prior to providing informed consent. Initial data were collected in blocks of nine participants at a time, and all data, including attention check items and geolocations, were examined by researchers before compensation was provided. Participants who failed one or more attention check items were removed from the study (n = 53 of 553 completers). Those whose data were considered valid (based on attention check items and geolocations; N = 500) were compensated $3.00. One month following completion of the initial assessment, participants were contacted via CloudResearch to complete the follow-up assessment. Of the 500 participants who completed the initial assessment, 77% (n = 386) completed the follow-up.
Participants who failed two or more attention check items were removed from the study (n = 3); the rest were compensated $3.00. In addition, two participants were excluded for invalid data and four were excluded for extensive missing data on the measures of interest, resulting in a final sample size of 377. --- Results --- Preliminary analyses Descriptive statistics for and correlations among all variables of interest are presented in Table 1. The most frequently reported substances at the initial assessment were alcohol (53.8%), followed by marijuana (18%), prescription sedatives (8.2%), and prescription opioids (7.7%), with 44% of participants reporting regular use of alcohol and 13.3% reporting regular use of marijuana. To identify covariates for primary analyses, we examined associations of relevant demographic and clinical characteristics to the outcome variables (Table 1). Given significant associations of age, sex, and depression and anxiety symptoms to adherence to social distancing recommendations at follow-up, these variables were included as covariates in this model. Consistent with hypotheses, baseline substance use frequency was significantly negatively associated with social distancing self-efficacy and adherence to social distancing recommendations at the one-month follow-up; however, it was not significantly associated with intentions to adhere to social distancing recommendations at follow-up. Additionally, social distancing self-efficacy was significantly positively associated with both adherence to social distancing recommendations and intentions to adhere to social distancing recommendations. --- Primary analyses Next, we examined the indirect relations of baseline substance use frequency to both adherence to social distancing recommendations and social distancing intentions at one-month follow-up through social distancing self-efficacy using the PROCESS (version 3.0) macro for SPSS (Model 4; Hayes, 2018).
Indirect relations were evaluated using bias-corrected 95% confidence intervals based on 5,000 bootstrap samples. Providing partial support for study hypotheses, results revealed a significant direct relation between baseline substance use frequency and adherence to social distancing recommendations one month later (although not to social distancing intentions; see Table 2). Consistent with hypotheses, results revealed significant indirect relations of greater baseline substance use frequency to both lower adherence to social distancing recommendations and lower social distancing intentions at the one-month follow-up through lower social distancing self-efficacy (see Table 2). --- Discussion To extend extant research on the factors associated with nonadherence to recommended health protective behaviors during pandemics, this study aimed to examine the prospective relations of substance use frequency to both adherence to social distancing recommendations and future social distancing intentions one month later during the early stages of the COVID-19 pandemic in the U.S., as well as the explanatory role of social distancing self-efficacy in these relations. Consistent with study hypotheses, results revealed a significant direct relation of baseline substance use frequency to lower adherence to social distancing recommendations one month later. This finding provides support for the premise that substance use may increase noncompliance with social distancing recommendations during the COVID-19 pandemic and is consistent with past research suggesting that frequent substance use is associated with poor adherence to protective behaviors in other contexts (Lasser et al., 2011; Liu et al., 2006). Notably, however, and contrary to predictions, substance use frequency did not have a significant direct relation to intentions to engage in social distancing behaviors in the weeks following the one-month follow-up.
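The PROCESS Model 4 analysis above estimates the indirect (a × b) effect of substance use on social distancing through self-efficacy, with bootstrapped confidence intervals. A rough sketch of the same idea, using a simple percentile bootstrap (not the bias-corrected variant used in the study) on simulated data with the hypothesized sign pattern, might look like the following; all variable names, sample values, and effect sizes here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 377  # matches the study's analytic sample size

# Simulated data with the hypothesized pattern (not the study's data):
# more frequent use -> lower self-efficacy -> lower adherence
use = rng.normal(size=n)
efficacy = -0.5 * use + rng.normal(size=n)                   # a path < 0
adherence = 0.6 * efficacy + 0.1 * use + rng.normal(size=n)  # b path > 0

def ab_path(x, m, y):
    """Indirect effect a*b from two OLS fits (both with intercepts)."""
    Xa = np.column_stack([np.ones_like(x), x])
    a = np.linalg.lstsq(Xa, m, rcond=None)[0][1]      # x -> m slope
    Xb = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(Xb, y, rcond=None)[0][2]      # m -> y slope, given x
    return a * b

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)  # resample cases with replacement
    boot.append(ab_path(use[idx], efficacy[idx], adherence[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect CI: [{lo:.2f}, {hi:.2f}]")  # a CI excluding 0 -> significant
```

A bootstrap interval that excludes zero, as in the study's Table 2 results, is what supports the inference of a significant indirect relation.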
Thus, findings suggest that substance use may interfere with adherence to social distancing recommendations despite intentions to engage in such behaviors. This discrepant pattern of findings may capture the difficulties complying with social distancing recommendations posed by substance use, which may prompt engagement in risky behaviors that go against one's self-interest for the purpose of obtaining or using substances. With regard to the theorized role of social distancing self-efficacy in the relations between substance use frequency and both adherence to social distancing and social distancing intentions, results provided support for study hypotheses, revealing significant indirect relations of greater substance use frequency to lower levels of both social distancing behaviors and intentions one month later through lower social distancing self-efficacy. These findings are consistent with recent research highlighting the role of self-efficacy in both social distancing behaviors and intentions during the COVID-19 pandemic (Charles et al., 2020; Hamilton et al., 2020), and extend this research to a substance use context. Several limitations of this study warrant consideration. First, the generalizability of our findings to more severe substance use or the use of illicit substances like heroin or cocaine remains unclear. Another limitation is the exclusive reliance on self-report questionnaire data, which may be influenced by social desirability biases or recall difficulties. Future research should incorporate other assessment methods (e.g., ecological momentary assessment, timeline follow-back procedures) to further clarify the nature of the relation of substance use to social distancing during this pandemic. Further, although our use of a prospective design facilitates examination of the associations of baseline substance use frequency to both adherence to social distancing recommendations and social distancing intentions one month later, we were not able to examine the interrelations of substance use, social distancing self-efficacy, and social distancing behaviors and intentions over time. Likewise, we cannot speak to the temporal relations among these factors and whether social distancing self-efficacy predicts social distancing behaviors or intentions. Research incorporating the repeated assessment of these factors over more extended time periods is needed to clarify the precise interrelations among these factors over time, including their likely reciprocal influences. Future research should also examine adherence to other health protective behaviors, such as mask-wearing and vaccinations. Beyond the risks associated with substance use in general, substance use in the context of a pandemic may be particularly risky insofar as it interferes with adherence to recommended health protective behaviors. Results of this study identify substance use as one factor that may negatively influence adherence to social distancing during the COVID-19 pandemic via lower social distancing self-efficacy. As the COVID-19 pandemic remains an ongoing public health crisis and evidence suggests the increased likelihood of future pandemics of this kind (Bernstein et al., 2022), identifying promising targets for interventions aimed at increasing engagement in health protective behaviors in the context of pandemics is critical. Results of this study highlight the potential utility of interventions targeting substance use and social distancing self-efficacy. --- Data Availability Data for this study are available upon reasonable request to Dr. Matthew T. Tull or Dr. Kim L. Gratz. --- Authors' contributions All authors contributed to the study conception and design. Material preparation and data collection was performed by all authors. Data analysis was performed by Kayla Scamaldo, Kim Gratz, and Matthew Tull. The first draft of the manuscript was written by Kayla Scamaldo and Kim Gratz.
All authors commented on previous versions of the manuscript. All authors read and approved the final manuscript. --- Declarations --- Conflicts of interest/Competing interests The authors have no competing interests to declare that are relevant to the content of this article. --- Ethics approval All procedures performed in this study were in accordance with the ethical standards of the institutional research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. The study was approved by the University of Toledo Institutional Review Board (300607-UT). Consent to participate Informed consent was obtained from all individual participants included in this study. --- Consent for publication Not applicable. Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
The COVID-19 pandemic provides an ideal context for exploring this question. Specifically, the unprecedented worldwide spread and impact of COVID-19 prompted the implementation of extraordinary social distancing interventions and highlighted the public health importance of widespread adherence to these guidelines. Yet individuals vary considerably in their adherence to social distancing recommendations (Coroiu et al., 2020), making the identification of factors that may increase risk for nonadherence to these recommendations of utmost importance. One factor that warrants attention in this regard is substance use. Consistent with evidence that substance use increases during periods of disease outbreaks (e.g., Lee et al., 2018), increases in substance use were observed during the early stages of the COVID-19 pandemic (Grossman et al., 2020; Taylor et al., 2021). In addition to the health risks associated with substance use in general, obtaining and using drugs in the context of a pandemic may confer unique risks for contracting and transmitting the virus by interfering with social distancing. For example, some substances can only be obtained through face-to-face interactions, necessitating social contact. Moreover, to the extent
Introduction In April 2011, media reported that mothers had a mysterious disease at Asan Medical Center in Seoul, South Korea [1]. Six women were hospitalized in the respiratory intensive care unit before and after childbirth. The symptoms were respiratory failure and pulmonary fibrosis [2]. The patients were not from one area, but from all over Korea. The Korea Centers for Disease Control and Prevention (CDC) commissioned an epidemiological investigation to determine the cause of these unique symptoms. The results showed that humidifier disinfectant, a biocide added to the water in humidifiers, was causing these diseases [1,2]. According to the CDC, humidifier disinfectants are absorbed into the body through the nose, mouth and skin during respiration, where they have a deleterious effect on the lungs [2]. Approximately 9.98 million humidifier disinfectant products were sold from 1994 to 2011; these were distributed to unspecified people [3]. The chemicals in the humidifier disinfectants were polyhexamethylene guanidine phosphate (PHMG), oligo(2-(2-ethoxy)ethoxyethyl) guanidine chloride (PGH), chloromethylisothiazolinone (CMIT) and methylisothiazolinone (MIT), among others. Among them, products containing CMIT/MIT, the most toxic of these chemicals, had been sold since 1994 [4]. With diseases caused by toxic chemicals, it can take a long time to establish whether the chemicals have caused actual health damage, as many variables, such as the incubation period, are involved [5]. It is difficult to accurately count the number of survivors of humidifier disinfectant damage. As of September 2021, the government reported that the number of persons who claimed damage was 7540, including 1713 deaths [6]. The humidifier disinfectants accumulate in the body, damage lung cells and ultimately lead to widespread health impairments. Pulmonary fibrosis, asthma and dyspnea are the main symptoms [7].
In addition, acute bronchitis, pneumonia, rhinitis, interstitial pneumonia, nonspecific otitis media, chronic sinusitis, chronic obstructive pulmonary disease, acute sinusitis and acute bronchiolitis have been reported [8]. A study in mice reported that lung damage caused by repeated exposure to PHMG-P, including structural changes, pulmonary dysfunction and pathology, does not recover even after a long recovery period. This suggests that the damage caused by humidifier disinfectants may also have long-term effects on survivors. Therefore, plans to help survivors recover need to be considered in a long-term context [9]. In addition to physical health, the damage caused by humidifier disinfectant adversely affects mental health. As for the mental health problems that occurred after exposure, depression and helplessness were reported by 57.5% of individuals, guilt and self-blame by 55.1%, anxiety and tension by 54.3%, suicidal thoughts by 27.6% and suicide attempts by 11%. This suicide attempt rate is 4.5 times higher than that of the general population [8]. The deterioration of physical health impacts the survivor, whereas mental health problems impact both the survivor and his/her significant others [10]. Moreover, studies have shown that socio-demographic, social and environmental factors, such as social support, sociality and community cohesion, should be considered among the psychological factors of victims of social disasters [11][12][13]; this indicates that the psychological pain of survivors should be understood from a socio-demographic perspective. The humidifier disinfectant incident can be classified as a social disaster and is closely related to the radiation exposure in Fukushima, Japan, in that it harmed many random victims not only physically but also psychologically [14]. Studies argue that the mental and physical health problems of survivors are much more serious than those of the general population and require special attention [6,10].
It is necessary to explore more clearly which variables are specifically related to survivors' psychological difficulties. This study aimed to examine the psychological health of humidifier disinfectant survivors and the general population. Next, we examined the socio-demographic variables influencing survivors' psychological symptoms. Specifically, this study attempts to explore survivors' psychological symptoms by gender, economic status, educational level and social variables such as the number of friends. This study can serve as preliminary research to identify important variables when designing intervention strategies to improve survivors' mental health issues. --- Method --- Participants This study was approved by the National Institute of Environmental Research (NIER) in South Korea. The data of humidifier disinfectant survivors were collected through an online survey using the Adult Self Report (ASR). A total of 228 survivors who were suffering from humidifier disinfectant damage participated in this survey. The mean age of participants was 42.23 years, with a standard deviation of 10.90; 83 (36.4%) were male and 145 (63.6%) were female. To compare the psychological symptoms between the humidifier disinfectant survivor group and the general group, the norm data of the ASR were utilized. HUNO Inc. (the ASR provider in South Korea) supplied ASR norm data for 1003 individuals. From these respondents in the general population, a random sampling method was used to select 228 participants. The mean age of participants in the general group was 37.86 years, with a standard deviation of 9.71; 120 (52.6%) were male and 108 (47.4%) were female. --- Dependent Variables Psychological Symptoms To measure the psychological symptoms, the ASR, the adult version of the Achenbach System of Empirically Based Assessment (ASEBA) [15], was used in this study. Along with the MMPI-2, the ASR is one of the most widely used personality assessments for measuring psychological symptoms in South Korea.
The ASR was validated in Korean and the convergent validity, concurrent validity and discriminant validity of the Korean version were confirmed [16]. The psychological symptoms scale consists of eight syndrome subscales and is rated on a 3-point Likert-type scale (0: "not true", 1: "sometimes", 2: "often true"). The eight syndrome subscales are as follows: anxious/depressed (18 items); withdrawn (9 items); somatic complaints (12 items); thought problems (10 items); attention problems (15 items); aggressive behavior (15 items); rule-breaking behavior (14 items); and intrusive (6 items). The combination of anxious/depressed, withdrawn and somatic complaints is termed internalizing problems, and the combination of aggressive behavior, rule-breaking behavior and intrusive is termed externalizing problems. Anxious/depressed subscale measures the feelings of being emotionally depressed, overly worried and anxious, and the sample items are "I worry about my future" and "I cry a lot". The withdrawn subscale evaluates withdrawal, passive attitude and showing no interest in surrounding people, "I don't get along with other people" and "My social relations with the opposite sex are poor" are the sample items. The sample items of the somatic complaints subscale, which assesses various physical symptoms despite no clear medical cause, are "I feel dizzy or lightheaded" and "I feel tired without good reason". The thought problems subscale estimates unrealistic and bizarre thoughts and behaviors, such as excessive repetition of certain actions and thoughts and seeing phenomena or hearing sounds that do not exist. The sample items are "I can't get my mind off certain thoughts" and "I hear sounds or voices that other people think aren't there". The attention problems subscale measures inattentive or hyperactive behavior and difficulty in making plans; the sample items are "I have trouble concentrating or paying attention for long" and "I daydream a lot". 
The aggressive behavior subscale evaluates verbally or physically destructive behavior and hostile attitudes, with "I argue a lot" and "I blame others for my problems" being examples of the items in the subscale. The rule-breaking behavior subscale assesses impulsive engagement in problematic behaviors that do not follow rules or violate social norms at work or in society, and "I damage or destroy my things" and "I break rules at work or elsewhere" are the sample items. Sample items of the intrusive subscale that estimate behavior that bothers or disturbs others are "I brag" and "I try to get a lot of attention". The internal consistency (Cronbach's α) of the subscales ranged from 0.70 to 0.92 in this study. --- Independent Variables Four variables were utilized as independent variables: gender, educational level, family economic status and number of friends. Gender was coded into two categories (male and female). Educational level was measured by one item asking about educational level, and was coded into three categories (graduate school graduate or higher than graduate school graduate, university graduate, high school graduate or less than high school graduate). Family economic status was assessed by one item, "compared to the economic level of all households in Korea, which of the following would you say you belong to?" This single question was coded into three categories (lower, middle and upper economic status). Number of friends was measured by one item, "how many friends do you have besides family?" This single question was coded into two categories (few: 0-3 friends; and many: 4 or more friends). --- Covariate Variable Because age is a continuous variable, we included age as a covariate. According to a previous study, age is related to psychological symptoms; younger adults (ages 18-35) scored significantly higher than older adults (ages 36-59) on anxious/depressed, somatic complaints, attention problems, aggressive behavior and intrusive [17].
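The internal-consistency coefficients reported above follow the standard Cronbach's alpha formula, which compares the sum of the item variances to the variance of the total score. A minimal sketch, using a toy score matrix rather than the study's data:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Toy example: 5 respondents answering a 3-item subscale on a 0-2 scale
toy = np.array([[0, 0, 1],
                [1, 1, 1],
                [2, 2, 2],
                [1, 2, 1],
                [0, 1, 0]])
print(f"alpha = {cronbach_alpha(toy):.2f}")
```

When all items are perfectly parallel, the formula returns 1.0; values in the 0.70-0.92 range reported for these subscales indicate acceptable to excellent consistency.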
--- Data Analysis One-way Multivariate Analysis of Covariance (MANCOVA) was conducted to compare dependent variables between the general and survivor groups. Moreover, Cohen's d was calculated to determine the effect size for differences between the general and survivor groups. Effect sizes are considered small if d = 0.2, medium if d = 0.5 and large if d = 0.8 [18]. Then, a series of two-way MANCOVA was conducted to determine the main and interaction effects of independent variables on dependent variables. The independent variables were gender, educational level, family economic status and number of friends. The dependent variables were anxious/depressed, withdrawn, somatic complaints, thought problems, attention problems, aggressive behavior, rule-breaking behavior and intrusive. The covariate variable was age. As the eight dependent variables were conceptually related to each other (average r = 0.57), the MANCOVA, which controlled correlations among dependent variables, was suitable for the analysis. --- Results A one-way MANCOVA with group as the independent variable, age as the covariate and psychological symptoms as dependent variables was performed to compare the psychological symptoms of the general and survivor groups. Moreover, Cohen's d was calculated to determine the effect size for differences between the two groups. The covariate of age (Wilks' lambda = 0.875; F(8,446) = 7.967; p < 0.001; η² = 0.125) was statistically significant. Moreover, a significant main effect for the group (Wilks' lambda = 0.661; F(8,446) = 28.574; p < 0.001; η² = 0.339) was found.
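The Cohen's d effect size used throughout these analyses is conventionally computed from the two group means and a pooled standard deviation. A minimal sketch with toy numbers (not the study's scores):

```python
import numpy as np

def cohens_d(group_a, group_b):
    """Cohen's d using the pooled standard deviation of two independent groups."""
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    n_a, n_b = len(a), len(b)
    pooled_sd = np.sqrt(((n_a - 1) * a.var(ddof=1) + (n_b - 1) * b.var(ddof=1))
                        / (n_a + n_b - 2))
    return (a.mean() - b.mean()) / pooled_sd

# Toy example: survivor-group vs. general-group scores on one subscale
survivors = np.array([12, 15, 14, 18, 16])
general = np.array([9, 11, 10, 13, 12])
print(f"d = {cohens_d(survivors, general):.2f}")
```

Against the benchmarks cited above [18], a d of 0.8 or larger marks the mean difference as large relative to the within-group spread.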
Main effects were found for seven psychological symptoms: anxious/depressed, F(1,453) = 152.301, p < 0.001; withdrawn, F(1,453) = 143.436, p < 0.001; somatic complaints, F(1,453) = 120.920, p < 0.001; thought problems, F(1,453) = 80.489, p < 0.001; attention problems, F(1,453) = 56.567, p < 0.001; aggressive behavior, F(1,453) = 86.133, p < 0.001; and rule-breaking behavior, F(1,453) = 9.383, p < 0.01. The results revealed that the survivor group displayed higher anxious/depressed symptoms, withdrawn symptoms, somatic complaints, thought problems, attention problems, aggressive behavior and rule-breaking behavior than the general group (see Table 1). Group differences in anxious/depressed, withdrawn, somatic complaints, thought problems and aggressive behavior indicated a large effect size (Cohen's d > 0.80). Next, a series of two-way MANCOVA were conducted separately, with age as the covariate. First, the two-way interaction effect of group × gender was tested. The covariate of age (Wilks' lambda = 0.874; F(8,444) = 8.005; p < 0.001; η² = 0.126) was statistically significant. Moreover, a significant main effect was found for group (Wilks' lambda = 0.681; F(8,444) = 26.002; p < 0.001; η² = 0.319) and gender (Wilks' lambda = 0.938; F(8,444) = 3.641; p < 0.001; η² = 0.062). Specifically, main effects for gender were observed for somatic complaints, F(1,451) = 5.699, p < 0.05; and rule-breaking behavior, F(1,451) = 4.451, p < 0.05. Female participants displayed more somatic complaints than male participants. On the other hand, male participants showed more rule-breaking behavior than female participants. However, the two-way interaction effect of group × gender was statistically insignificant (Wilks' lambda = 0.977; F(8,444) = 1.309; p = 0.237; η² = 0.023). Second, the two-way interaction effect of group × educational level was tested.
The covariate of age (Wilks' lambda = 0.882, F(8,441) = 7.399; p < 0.001; η² = 0.118) was statistically significant. A significant main effect for group (Wilks' lambda = 0.760; F(8,441) = 17.427; p < 0.001; η² = 0.240) was found. However, the main effect for educational level (Wilks' lambda = 0.964; F(16,882) = 1.033; p = 0.418; η² = 0.018) and the two-way interaction effect of group × educational level (Wilks' lambda = 0.944; F(16,882) = 1.615; p = 0.059; η² = 0.028) were statistically insignificant. Third, the two-way interaction effect of group × family economic status was tested. The covariate of age (Wilks' lambda = 0.876; F(8,442) = 7.837; p < 0.001; η² = 0.124) was statistically significant. A significant main effect of group (Wilks' lambda = 0.659; F(8,442) = 28.546; p < 0.001; η² = 0.341) and family economic status (Wilks' lambda = 0.883, F(16,884) = 3.559; p < 0.001; η² = 0.061) was observed. Moreover, a significant two-way interaction effect of group × family economic status was found (Wilks' lambda = 0.922; F(16,884) = 2.304; p < 0.01; η² = 0.040). Specifically, interaction effects were observed for five psychological symptoms: anxious/depressed, F(2,449) = 4.615, p < 0.05; withdrawn, F(2,449) = 10.407, p < 0.001; thought problems, F(2,449) = 7.869, p < 0.001; attention problems, F(2,449) = 7.740, p < 0.001; and rule-breaking behavior, F(2,449) = 5.003, p < 0.01. Although the general group's psychological symptoms decreased as the family economic status increased, the survivor group's psychological symptoms showed a different tendency. In other words, the upper family economic status group had the highest mean value for psychological symptoms, followed by the lower and the middle groups (see Table 2 and Figure 1). Fourth, the two-way interaction effect of group × number of friends was tested.
The covariate of age (Wilks' lambda = 0.868; F(8,444) = 8.428; p < 0.001; η² = 0.132) was statistically significant. A significant main effect for group (Wilks' lambda = 0.678; F(8,444) = 26.403; p < 0.001; η² = 0.322) and number of friends (Wilks' lambda = 0.860; F(8,444) = 9.049; p < 0.001; η² = 0.140) was observed. In addition, a significant two-way interaction effect of group × number of friends was found (Wilks' lambda = 0.961; F(8,444) = 2.243; p < 0.05; η² = 0.039). The effects of two-way interaction were found on six psychological symptoms: anxious/depressed, F(1,451) = 7.069, p < 0.01; withdrawn, F(1,451) = 14.408, p < 0.001; somatic complaints, F(1,451) = 4.785, p < 0.05; thought problems, F(1,451) = 5.326, p < 0.05; attention problems, F(1,451) = 6.389, p < 0.05; and aggressive behavior, F(1,451) = 9.527, p < 0.01. Compared to the general group, which showed a slight decrease in psychological symptoms with an increase in the number of friends, the survivor group showed a prominent decrease in psychological symptoms as the number of friends increased (see Table 3 and Figure 2). --- Discussion The first purpose of this study was to compare the psychological symptoms of humidifier disinfectant survivors and general groups. To examine these differences, a one-way MANCOVA was performed. The survivor group showed higher scores on seven psychological symptoms than the general group: anxious/depressed, withdrawn, somatic complaints, thought problems, attention problems, aggressive behavior and rule-breaking behavior. Consistent with previous studies [19] that suggested that social disasters have a serious impact on the mental health of survivors, the mean differences in scores between humidifier disinfectant disaster survivors and general groups in this study indicated that survivors experienced severe psychological difficulties.
It was also revealed that the greater the damage and the closer the relationship with the survivor, the more severe the psychological trauma of the survivors. As for the social aspect of survivors, the findings of a meta-analysis study [20] showed that multilevel social support from the micro-system (i.e., family and friends), meso-system (i.e., neighborhood, community) and macro-system (i.e., society and culture) needs to be implemented to help survivors' recovery. Therefore, this study explored the effects of socio-demographic variables, such as gender, educational level, family economic status and number of friends, on psychological symptoms. A series of two-way MANCOVAs was conducted to determine the main and interaction effects of four demographic variables by group (survivors vs. general group) on psychological symptoms. Among the four demographic variables, the results indicated that educational level had no main effect on participants' psychological symptoms. Studies have found mixed results regarding educational levels and psychological symptoms. Some studies reported that low educational level was related to increased behavioral problems [21], but other studies found that educational level was not related to behavioral problems [22]. The findings of this study are consistent with the findings of the latter studies; that is, the educational level of both humidifier disinfectant survivors and the general group did not influence psychological symptoms. We also examined the relationship between psychological symptoms, family economic status and number of friends as factors of social resources. Family economic status and number of friends had significant main and interaction effects on psychological symptoms. Interestingly, the interaction effects of family economic status and the two groups (i.e., survivors and general groups) were also found for five psychological symptoms: anxious/depressed, withdrawn, thought problems, attention problems and rule-breaking behavior.
The psychological symptoms of the survivor group that was divided according to family economic status level showed a V-shaped pattern, while those of the general group showed a decreasing pattern as the family economic status level increased. Consistent with many prior studies showing that lower levels of family economic status are associated with higher levels of psychological symptoms [23,24], our findings also show that individuals with lower family economic status have more severe psychological symptoms. Next, survivors with high economic status had more severe psychological symptoms than those in the other groups (i.e., middle and lower family economic status). The results of the higher family economic status group imply that social comparison may contribute to psychological symptoms and could be explained by the theory of the 'big fish little pond' effect (BFLPE). Marsh and Parker emphasized the importance of the frame of reference with the BFLPE model. According to the model, individuals compare their own self-concept with their peers', and individuals have a higher self-concept when they are in a less capable group than in a more capable group, even though they perform equally [25]. Although the BFLPE model was originally intended to explain academic achievement, it can be expanded and used to explain the psychological status experienced subjectively by individuals who compare their relative satisfaction with those around them. Because survivors with a high economic status have more severe psychological symptoms than the other groups, it is important that psychologists and other such professionals focus on individuals' subjective psychological damage rather than objectively measured damage. The results provide implications for considering the subjective frame of reference of survivors when designing differential psychological interventions.
Next, there were significant main and interaction effects of the number of friends and the two groups (i.e., survivors and general groups) on six psychological symptoms: anxious/depressed, withdrawn, somatic complaints, thought problems, attention problems and aggressive behavior. Our results showed that, in both groups, participants reporting four or more friends had significantly healthier levels of psychological symptoms than those with three or fewer friends. The effects of the number of friends on psychological symptoms were more prominent in survivors than in the general group. That is, friends' support was more helpful for the psychological health of survivors than for the general group. Consistent with previous studies [26], social relationships contributed to the psychological symptoms of survivors. According to a meta-analysis study examining factors influencing post-traumatic stress response after a disaster, social relationships were found to be beneficial for the post-traumatic stress response in social disasters, but not in natural disasters. Studies [27,28] have reported that disaster survivors can improve their quality of life and return to their daily lives if they receive social support. Based on these research results, it can be seen that formal and informal social support should be provided for the recovery of disaster survivors. This study had some limitations. First, the sample of humidifier disinfectant survivors was relatively small (n = 228). Although the sample was a representative group of humidifier disinfectant survivors, the results should be cautiously interpreted; therefore, future studies should use larger samples. Second, due to the small number of survivors, analysis of more fine-grained dimensions of the variables, such as dividing the number of friends into three groups, was limited. Future studies are needed to analyze these variables in more differentiated categories.
Third, the current findings are limited to data from one self-reported psychological assessment. Previous studies have shown that a survivor's quality of life is affected by factors such as demographic characteristics, physical health, psychological characteristics and social support in an integrated way. To expand the knowledge of survivors' psychological status, it is necessary to analyze the relationship between psychological health and additional data, such as physical status, diagnosed disease, amount of damage compensation and degree of damage to family members. Fourth, the results are obtained from cross-sectional data, and it is necessary to analyze the psychological health of survivors longitudinally in the future. Despite these limitations, the results of this study highlight factors relevant to survivors' recovery. The findings of this study are expected to provide information on psychological symptoms and aid in the provision of counseling for better outcomes in survivors of humidifier disinfectant disasters. Additionally, to recover and improve survivors' quality of life, it is believed that a comprehensive support system is needed in consideration of psychological health, as well as economic and environmental aspects. --- Data Availability Statement: The data presented in this study are available on request from the corresponding author. --- Conflicts of Interest: The authors declare that there is no conflict of interest.
This study aimed to compare the psychological symptoms of humidifier disinfectant survivors to the general population and explore socio-demographic factors influencing survivors' psychological symptoms. A one-way Multivariate Analysis of Covariance (MANCOVA) and a series of two-way MANCOVA were conducted with a sample of 228 humidifier disinfectant survivors and 228 controls. The results demonstrated that the survivor group displayed higher anxious/depressed symptoms, withdrawn symptoms, somatic complaints, thought problems, attention problems, aggressive behavior and rule-breaking behavior than the general group. Moreover, among the socio-demographic factors, the two-way interaction effects of group × family economic status and group × number of friends were found to be statistically significant. The limitations and implications of this study are discussed.
Introduction The majority of people with dementia live at home with support from their family members. If a partner is present, he or she is usually the person who fulfills the role of primary informal caregiver [1]. Family caregiving plays an increasingly vital role in care for people with dementia in European countries like the Netherlands, since policies encourage people to call on their own social network in the first place, supported by home and community-based services, in order to delay institutionalization [2,3]. Providing family care can be a serious burden for caregivers and can negatively affect their psychological and physical health, especially among informal caregivers of persons with dementia [1,4,5]. For instance, caregivers of people with dementia show higher rates of depression, anxiety, sleeping disorders, and physical morbidity, including cardiovascular disease and lower immunity than noncaregivers, for example [6][7][8][9]. Furthermore, studies indicate that health-care use is higher among family caregivers of persons with dementia compared to noncaregivers [6,10]. The health of family caregivers is one of the most important predictors of institutionalization of the person with dementia [11]. A majority of persons with dementia and their family caregivers prefer care at home [12] and institutionalized care increases health-care expenses [13]. It is therefore important to offer timely support to avoid deterioration of health in family caregivers and to enable them to maintain the care for their partner, relative, or friend with dementia as long as possible. Currently, there is limited insight into the occurrence of health problems and changes in health-care utilization in different stages of the care trajectory, while information about this is essential in order to offer timely support to family caregivers. 
Furthermore, studies of health problems in family caregivers have mainly focused on psychological health outcomes as opposed to physical health outcomes and have used relatively small and selective study samples without a comparison group. There is a lack of evidence from large, representative population-based studies regarding the most prevalent psychosocial and physical health problems of partners caring for a person with dementia that include a matched comparison group [14]. The aim of the current study is to provide insight into the prevalence of a wide range of psychosocial and physical health problems in cohabiting partners of persons with dementia that occur during the dementia care trajectory. In addition, this study aims to provide information on the frequency of contacts with the general practitioner (GP) during the dementia care trajectory. The research questions for this study are: 1. Which health problems are most prevalent among partners of people with dementia in the year prior to the dementia diagnosis and in the 3 years after the dementia diagnosis, and to what extent do these differ from health problems in comparable partners of persons without dementia? Does the prevalence of these health problems change over time? 2. How often do partners of people with dementia contact their GP in the year prior to the diagnosis of dementia and in the 3 years after the diagnosis, does this frequency differ from that in comparable partners, and does it change over time? Based on previous systematic reviews (e.g., [6,10]), we expected: • Psychological health problems, including, for example, depression, anxiety, and sleeping disorders, to be more prevalent in partners of people with dementia than in the comparison partners (H1); • Cardiovascular problems and immunity problems to be more prevalent in partners of people with dementia than in the comparison partners (H2); • The GP contact rate to be higher than for the comparison partners (H3).
--- Materials and Methods Data from national administrative databases were linked with electronic health record (EHR) data from GPs. The data covered the year before and the 3 years following the dementia diagnosis. This time frame was chosen since Dutch data on dementia care trajectories have revealed that institutionalization in a long-term care facility often takes place approximately 3.5 years after the diagnosis is recorded in general practice [15]. --- Data Sources --- EHR Data from GPs Routinely recorded EHR data of GPs participating in the Nivel Primary Care Database (Nivel PCD) were used to retrieve data on psychosocial and physical health problems (https://www.nivel.nl/en/nivel-primary-care-database). The Nivel PCD collects pseudonymized EHR data on approximately 1.7 million individuals (10% of the Dutch population), which are routinely recorded by a nationally representative network of GP practices (451 for the current study), spread throughout the Netherlands [16]. This includes data on diagnoses, prescriptions, number of consultations, and referrals of all the patients who are registered with the participating GP practices. Diagnoses made by a specialist from a hospital or a memory clinic are also recorded by GPs. International Classification of Primary Care (ICPC-1) coding is used to code contact diagnoses [17] and grouped into disease episodes [18]. GPs receive support in coding and feedback on the quality of recording [19,20]. In the Netherlands, the GP acts as the "gatekeeper" to specialist care and is therefore usually the first health-care provider people contact in the case of health problems. Virtually all Dutch residents are registered with a general practice. --- Administrative Data Data on sociodemographic characteristics, the date of death, and the date of institutionalization were derived from administrative data sources made available for research by Statistics Netherlands (Centraal Bureau voor de Statistiek, CBS). 
Statistics Netherlands is the governmental institution that is responsible for the processing of statistical population data in the Netherlands. Sociodemographic characteristics and date of death originated from the Municipal Personal Records Database, covering all persons residing in the Netherlands. The date of permanent institutionalization was derived from administrative data for the Dutch national long-term care insurance scheme covering all institutionalizations (nursing, residential, or psychiatric homes) of all Dutch adults. --- Study Population --- Partners of Persons with Dementia Partners of persons born in 1965 or before with a recorded dementia diagnosis (ICPC code: P70) between 2008 and 2015 were identified in the EHR data. Partners were included based on the following criteria: living at the same household address, living together with ≤5 persons at the same address, and having an age difference with the person with dementia of ≤20 years. Living together with more than 5 persons at the same address could imply that the person lives in a residential care home, and these cases were therefore excluded (n = 56). If the age difference is >20 years, it is more likely that the person in question is not the partner of the person with dementia; these cases were therefore excluded (n = 18). Households with more than 1 person with dementia were also excluded. Heide, I. van der, Heins, M., Verheij, R., Hout, H.J.P. van, Francke, A., Joling, K. Prevalence of health problems and health-care use in partners of people with dementia: longitudinal analysis with routinely recorded health and administrative data.
Gerontology: 2021 __________________________________________________________________________________________________________________________________ This is a Nivel certified Post Print, more info at nivel.nl --- Comparison Group For every person with a recorded dementia diagnosis, an independent researcher identified, if available, a maximum of 4 comparison persons without a recorded dementia diagnosis from the same general practice, in the same age category (5-year intervals), of the same sex, and living with a partner. A maximum of 4 comparison persons was identified because a large comparison group increases the reliability of the findings. The partners of these comparison persons were included as comparison partners in the current study. Neither the comparison persons nor their partners were diagnosed with dementia during the study period. Both were usually registered with the same general practice. --- Outcomes Psychosocial and Physical Health Problems The prevalence of psychosocial and physical health problems was operationalized as a morbidity or symptom as recorded in the partner's or comparison partner's EHR during a specific year. GPs can use a total of 685 different ICPC codes to record diagnoses, which are clustered into 17 ICPC chapters reflecting different systems of the human body. In this study, we used 16 ICPC chapters (excluding the chapter about pregnancy) as health indicators. If significant differences (p < 0.01) were found in the prevalence of specific ICPC chapters between the partners and comparison partners, further analyses were conducted to examine whether there were differences between the samples at the ICPC level within that specific chapter. --- Frequency of GP Contacts The frequency of GP contacts in each year was obtained from the EHRs. Contacts included medical consultations at the GP's practice, home visits, and telephone consultations.
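Each ICPC code begins with a single chapter letter (the dementia code "P70", for example, falls under chapter P, Psychological), so the chapter-level health indicators described above can be derived directly from the recorded codes. A minimal sketch follows; the chapter names shown are a subset of the 17 chapters, and the data layout is illustrative, not the Nivel PCD schema:

```python
# Subset of the 17 ICPC chapters, keyed by the leading code letter.
ICPC_CHAPTERS = {
    "P": "Psychological",
    "L": "Musculoskeletal",
    "K": "Cardiovascular",
    "R": "Respiratory",
    "Z": "Social",
}

def chapter_of(icpc_code):
    """Return the chapter name for an ICPC code such as 'P70'."""
    return ICPC_CHAPTERS.get(icpc_code[0].upper(), "other")

def chapter_prevalence(records, chapter_letter):
    """Share of persons with at least one code from a chapter in a year.

    records: dict mapping a person id to the list of ICPC codes recorded
    for that person in the year (hypothetical layout).
    """
    hits = sum(any(code.upper().startswith(chapter_letter) for code in codes)
               for codes in records.values())
    return hits / len(records)
```

For example, `chapter_of("P70")` returns `"Psychological"`, and prevalence per chapter and year can be computed by applying `chapter_prevalence` to each yearly set of records.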
--- Sociodemographic Characteristics The following sociodemographic characteristics of the persons with dementia and their partners and of the comparison persons and comparison partners were described: age, gender, and migrant status. The migrant status was categorized as a Western background (Dutch or Western migration background) or as a non-Western migration background (Surinamese, Antillean, Aruban, Moroccan, Turkish, or other non-Western migration background). --- Frailty A frailty index was created for the persons with dementia and the comparison persons in order to obtain an impression of their health condition. The frailty index was created by screening the GPs' EHRs for 35 predefined relevant "health deficits", including ICPC codes of diseases and symptoms and one deficit "polypharmacy" [21]. The proportion of deficits present in an individual resulted in the Frailty Index score (range 0-1). In accordance with prior studies, people were classified into nonfrail (3 or fewer deficits; Frailty Index ≤0.08), pre-frail (4 to 8 deficits; 0.08 < index ≤ 0.25), and frail (9 or more deficits; index > 0.25) [22][23][24]. --- Date of Death and Date of Institutionalization The date of death and date of institutionalization of the persons with dementia and their partners and of the comparison persons and comparison partners were determined to describe the proportion of persons who moved to a long-term care facility or died during the study period.
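The frailty classification above can be sketched as a simple function. The deficit counts follow the text; the function itself is an illustration, not the study's implementation:

```python
def frailty(n_deficits, n_items=35):
    """Frailty Index (range 0-1) plus category from the number of deficits.

    Categories follow the deficit counts quoted in the text: nonfrail
    (3 or fewer deficits), pre-frail (4 to 8), frail (9 or more).
    """
    score = n_deficits / n_items
    if n_deficits <= 3:
        category = "nonfrail"
    elif n_deficits <= 8:
        category = "pre-frail"
    else:
        category = "frail"
    return round(score, 3), category
```

For example, `frailty(7)` gives a score of 0.2 and the category "pre-frail".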
--- Data Linkage The GP data were pseudonymized at the source (i.e., the GP practice) and linked to the administrative data at Statistics Netherlands after being securely transferred by a trusted third party [16]. Pseudonyms were based on the citizen service number or on a combination of date of birth, gender, and postal code. The pseudonymized data were made accessible to the researchers through a secured remote access facility provided by Statistics Netherlands under strict privacy conditions. --- Statistical Analysis Descriptive statistics were calculated to describe the sample characteristics. χ² tests and independent t tests were used to determine differences between sample characteristics. The prevalence of the 16 ICPC chapters and the frequency of GP contacts were calculated and described per year for both partners and comparison partners. To examine whether the prevalence of health problems and the number of GP contacts differed significantly between the partners and comparison partners and to examine whether the prevalence of health problems increased or decreased over time, generalized estimating equation (GEE) models were fitted. GEE models take into account the correlation of different measures within subjects. For each of the 16 ICPC chapters, a GEE model for binary response variables was fitted, with the measurement year (continuous, ranging from 0 to 3), partner group (partner vs. comparison partner), and the interaction term measurement year*partner group as predictors.
If a significant difference (p < 0.01, to adjust for multiple testing) was found between partners and comparison partners in the prevalence of ICPC chapters, GEE models were fitted for all specific health problems (specific ICPC codes) that fell within those overarching ICPC chapters. Only significant differences (p < 0.01) in specific health problems that occurred in at least 5% of the partners were considered relevant, and only these differences are therefore reported. In addition, a GEE model for count response variables was fitted to estimate changes over time in the number of GP contacts. This model also had the measurement year, partner group, and measurement year*partner group as predictors. All analyses were based on study subjects who were registered at a GP practice for at least one entire follow-up year. All analyses were conducted in SPSS version 15. --- [Figure 1] [Table 1] --- Results --- Sample Characteristics Figure 1 shows the inclusion of partners and comparison partners per year. A total of 1,711 partners and 6,201 comparison partners were included in the analyses. The mean number of follow-up years was 2.3 years in both groups (see Table 1). The partners as well as the persons with dementia were slightly but significantly older than the comparison partners and the comparison persons without dementia (75.4 vs. 74.3 years and 78.1 vs. 76.8 years, respectively; see Table 1). The partners had a mean age of 75 and the comparison partners a mean age of 74. In both groups, almost all of the partners were of Western origin (97 and 98%, respectively). A significantly higher proportion of the partners of the persons with dementia cared for a frail person than the comparison partners, and a significantly higher proportion of the persons with dementia moved to a long-term care facility (21 vs. 2%) or died (16 vs. 10%) during the study period than the comparison persons.
Musculoskeletal problems were most prevalent across the years in both the partner (41-46%) and the comparison partner group (38-40%). Differences between partners and comparison partners were found for the following health problems: • Social problems were more prevalent in partners than in comparison partners (OR = 4.98 [95% CI = 4.27-5.80]; p < 0.01). The prevalence of social problems increased over time in both the partner and comparison partner group (OR = 1.20 [95% CI = 1.12-1.28]; p < 0.01). • Within the "social problems" chapter, we found "problems with the illness of the partner" to be more prevalent in partners than in comparison partners (OR = …). Furthermore, in both partners and comparison partners, a significant increase over the years was found in the prevalence of general and unspecified health problems (OR = 1.06 [95% CI = 1.02-1.10]; p < 0.01) and in the prevalence of urological problems (OR = 1.06 [95% CI = 1.02-1.10]; p < 0.01). --- GP Contacts in Partners and Comparison Partners in the Year before and the 3 Years after Diagnosis It was found that partners had more GP contacts than comparison partners across the years (B = 0.12 [95% CI = 0.06-0.18]; p < 0.01). Partners had 9-10 contacts per year throughout the study period, whereas comparison partners had 7-8 contacts per year; see Figures 4 and 5. In addition, the number of GP contacts increased over time in the partner group but not in the comparison partner group (B = 0.05 [95% CI = 0.01-0.08]; p < 0.01).
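Assuming the contact-count model uses a log link (typical for count GEEs), the reported coefficients translate into rate ratios by exponentiation:

```python
import math

# B = 0.12 for the partner group: partners have roughly exp(0.12) ≈ 1.13
# times as many GP contacts per year as comparison partners.
rate_ratio = math.exp(0.12)

# B = 0.05 per follow-up year: contacts in the partner group grow by
# roughly 5% per year (exp(0.05) ≈ 1.05).
trend_ratio = math.exp(0.05)
```

This matches the descriptive figures quoted above (9-10 contacts per year for partners against 7-8 for comparison partners).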
--- Discussion --- Reflection on Main Findings This study provided insight into the most prevalent psychosocial and physical health problems among persons taking care of their partner with dementia during the year before the dementia diagnosis was recorded in the GP's electronic medical records and the 3 years after the diagnosis. These health problems were compared to the health problems of a matched comparison group. We found musculoskeletal problems to be the most prevalent type of health problem across all years in both partners and comparison partners, which is in line with international research that suggests that musculoskeletal problems are one of the most prevalent health problems in older people [25]. Musculoskeletal problems were more prevalent overall in partners than in comparison partners, which could be related to the provision of family care, but might also be related to differences in, for instance, the socioeconomic position of the partners and comparison partners. We expected that psychological health problems, including depression, anxiety, and sleeping disorders, would be more prevalent in partners than in comparison partners after the diagnosis of dementia (H1). This hypothesis was partly confirmed, as an increase in the prevalence of psychological problems, and specifically sleeping problems, over time was found in the partner group and not in the comparison partner group.
--- [Figure 3][Figure 4][Figure 5] Besides sleeping problems, no other specific psychological health problems, such as depression, were significantly more prevalent in partners than in comparison partners during the year before and the 3 years after the diagnosis. In contrast, a comparable Dutch study showed that spouses of persons with dementia were 4 times more likely to be diagnosed with depression than spouses of persons without dementia [7]. In this study, a cohort of spouses was followed for 6 years, but not specifically immediately after the diagnosis. It could be that depression in spouses only manifests several years after the diagnosis, and our follow-up time was too short to detect this. Another explanation for not finding a higher prevalence of psychological problems might be that GPs are more likely to label depressive feelings or other psychological problems in partners of persons with dementia as "problems with the illness of the partner," which we found to be more prevalent in partners than in comparison partners. We also expected that cardiovascular problems and problems related to immunity would be more prevalent in partners than in comparison partners after the diagnosis of dementia (H2), as described in the study of Brodaty and colleagues [6]. No significant differences were found with respect to cardiovascular problems between partners and comparison partners, but we did find an increase of respiratory problems in partners over the years that was not found in the comparison partners. The increase in respiratory problems may be related to immunity problems [6] although this could not be investigated in the current study. Third, we expected that partners of persons with dementia would visit their GP more often after the diagnosis than comparison partners (H3). 
In accordance with previous research [6,10], it was found that during the entire study period partners visited their GP more often than comparison partners, with a peak in the third year after the diagnosis. Since the formal diagnosis of dementia is often given after the disease has been manifest for a while, it is likely that partners already struggle in dealing with dementia before this formal diagnosis and therefore may visit their GP more often. This possible explanation is supported by the finding that during the year before the diagnosis, partners visited their GP more frequently for problems with the illness of the person they were living with than comparison partners. The increase in the number of GP contacts over the years in partners, which was not seen in the comparison partners, seems to confirm that caring for a partner with dementia becomes increasingly demanding over the course of time and may affect the health of the informal caregiver. This assumption is also supported by the finding that, compared to earlier years, the third year after the diagnosis shows more health problems that are more prevalent in partners than in comparison partners. In addition, we found that social problems, reflecting problems with the disease and loss or death of the person with dementia, were 3 to 5 times more prevalent in partners than in comparison partners in the year before the dementia diagnosis and in the 3 years after the diagnosis. The prevalence of social problems was lowest before the diagnosis and showed a peak during the first year after the diagnosis, which gradually decreased in the following years. This pattern could be due to the fact that after the diagnosis, partners face many uncertainties and contact their GP in need of support. As the disease progresses over the years, it could be that informal caregivers are somewhat more prepared for the future or are receiving support by then. Nevertheless, given the high prevalence of problems with the disease or loss of the person with dementia in partners compared to comparison partners over the years, partners seem to be in need of advice or support in relation to the condition of their partner. Earlier survey research already highlighted this need among family caregivers at all stages of dementia [26]. --- Strengths and Limitations An important strength of this study is that a large group of partners of persons with dementia was followed during several years of the care trajectory and a wide range of psychosocial and health problems were examined using routine registration data. In addition, we were able to include a large group of comparison partners with a long follow-up period as well. Because of the gatekeeping health-care system in the Netherlands, in combination with the comprehensive use of EHRs with guidelines for proper EHR keeping [19] and the possibility of record linkage with pseudonymized data, it was possible to use existing data to identify and follow up partners of people with dementia without increasing the administrative burden for health professionals. A limitation of this study is that dementia is likely to be under-recorded in Dutch primary care. There seems to be a reluctance to record dementia in EHRs if it is not yet officially confirmed by a medical specialist.
This means that it is possible that some of the partners and comparison partners, as well as the comparison persons, might have had dementia but were not yet diagnosed as such. A second limitation is that in theory a few cohabiting children might have been included in the partner group. However, since cohabiting partners were selected based on the criterion that the age difference with the person with dementia should be ≤20 years and teenage pregnancies are rare in the Netherlands, this number would be negligibly low and would therefore not have affected the outcomes of our study. --- Conclusion The findings of the current study imply that having a cohabiting partner with dementia has consequences for the caregiver's physical and psychosocial health. This is reflected by a higher prevalence of musculoskeletal problems, respiratory problems, psychological problems, and especially social problems, as well as an increase in GP contacts, over the course of multiple years prior to and following the diagnosis of dementia. In practice, this means that the increase in the number of people with dementia will be accompanied by an increased appeal to GPs by partners of people with dementia. Given the finding that partners often visit the GP for problems with the disease of the person with dementia, timely referral to, for instance, a dementia case manager for support is important. Support for partners seems needed throughout the disease trajectory, starting in the first year after or even before the diagnosis of their relative. This could contribute to the prevention of overburdening in partners, of which the specific health problems and the increase in GP contacts as found in this study might be relevant indicators. --- Statement of Ethics This study has been approved by the Ethics Committee of the VU University Medical Center and is in accordance with the governance code of Nivel PCD, under number NZR-00315.063.
Patients were informed by their GP about the use of their pseudonymized health data and could object. Data were processed in accordance with national and EU regulations and guidelines. The use of EHRs for research purposes is allowed under certain conditions. When these conditions are fulfilled, neither obtaining informed consent from patients nor approval by a Medical Ethics Committee is obligatory for this type of observational study containing no directly identifiable data (art. 24 GDPR Implementation Act jo art. 9.2 sub j GDPR). --- Conflict of Interest Statement The authors have no conflicts of interest to declare. --- Author Contributions K. Joling, H. van Hout, A. Francke, R. Verheij, and I. van der Heide planned the study. I. van der Heide performed the statistical analyses and wrote the manuscript. K. Joling, H. van Hout, A. Francke, and M. Heins supervised the data analysis. K. Joling, H. van Hout, A. Francke, R. Verheij, and M. Heins contributed to revising the manuscript.
Introduction The USA ranks first among high-income countries for the number of gun deaths and gun injuries per capita, with 39,740 people killed by guns in 2018 (Centers for Disease Control and Prevention, Web-based Injury Statistics Query and Reporting System (WISQARS) 2020). The costs of gun violence for survivors are profound and include a higher likelihood of suffering from PTSD (Montgomerie et al. 2015;Ranney et al. 2019), perpetrating violence (Rowhani-Rahbar et al. 2016), carrying guns (Beardslee et al. 2018), and experiencing subsequent reinjury or death than those who experience other forms of injury (Rowhani-Rahbar et al. 2015;Fahimi et al. 2016). However, gun violence is likely to have an important, underappreciated impact on community members who hear about or witness gun violence. The impact of gun violence on mothers is particularly understudied. This gap in the literature is important because mothers' mental health and wellbeing have important spillover effects for their children and partners (Bagner et al. 2010;Cummings and Davies 1994;Elgar et al. 2004;Goodman et al. 2011;Yeh et al. 2016). Examining the determinants of mothers' mental health therefore offers important insights into how to improve mothers' wellbeing, as well as the wellbeing of their families. In this study, we examined whether witnessing gun violence in one's community has associations with mothers' symptoms of depression, probabilities of meeting depression criteria, and reports of parental aggravation. In doing so, we point to important externalities associated with the US's gun violence epidemic. --- Background --- Community Violence and Wellbeing Prior research illustrates that individuals who are exposed to local violence have greater risks of experiencing mental health concerns including anxiety, PTSD, and depression, the latter of which is the focus of this study (Clark et al. 2008;Fowler et al. 2009;Rossin-Slater et al. 2019;Theall et al.
2017;Wilson-Genderson and Pruchno 2013;Wilkinson et al. 2008). These associations hold for those who are victimized by violence and for those who are indirectly exposed to violence, by witnessing or hearing about it in their communities (Gergo et al. 2020;Fowler et al. 2009;Rossin-Slater et al. 2019). Local violence can also exacerbate risk factors for depression by preventing individuals from going outside and socializing, thereby corresponding to physical inactivity (Kneeshaw-Price et al. 2015;Yu and Lippert 2016), social isolation (Barnes et al. 2006;Cohen-Mansfield et al. 2016), and lower social cohesion among neighbors (Kingsbury et al. 2020;Newbury et al. 2018). Low social cohesion has even been found to exacerbate the relationship between violence exposure and adverse mental health outcomes (Kingsbury et al. 2020;Newbury et al. 2018). Additionally, neighborhoods with a greater prevalence of gun violence tend to be more socioeconomically disadvantaged, racially segregated, and have lower access to healthcare resources (Kane 2011;Knopov et al. 2019;Williams and Collins 2001;Wong et al. 2020). As such, local violence can have direct and indirect impacts on community members' mental health and may exacerbate socioeconomic and racial disparities in health and wellbeing. While prior research has made important strides in highlighting the impacts of local violence on individuals' mental health, this research has largely not examined the relationship between gun violence in one's community and individuals' risks of depression. Instead, existing studies have largely focused on crime rates or cumulative measures of violence exposure that combine exposure to gun violence with other forms of violence such as stabbings, muggings, and physical fights (Gergo et al. 2020;Clark et al. 2008;Fowler et al. 2009;Huang et al. 2018;Wilson-Genderson and Pruchno 2013).
This is an important gap in the literature because gun violence is far more likely to lead to the death or injury of victims and bystanders than other types of violence such as stabbings or physical fights and, as such, may be especially traumatizing for witnesses (Wells and Horney 2002). Indeed, residents of violent neighborhoods report fears that they or their loved ones will be the victims of gun violence (Opara 2020). Victimization with a gun is also associated with significantly greater mental health distress than victimization with other weapons (Kagawa et al. 2018, 2020;Langton and Truman 2014). Furthermore, gun use is associated with and enables other forms of violence such as gang violence (Stretesky and Pogrebin 2007) and suicide (Shenassa et al. 2003). Gun violence may therefore have an especially comprehensive association with local occurrences of crime, injury, and death and thereby have important, enduring associations with depression and wellbeing among community members. Moreover, much of the prior literature has examined the impact of local violence on children, adolescents, and victims and perpetrators. Few studies have examined the impact of local violence on mothers. However, maternal depression is important for mothers' wellbeing, for the wellbeing of her family members, and for familial dynamics. For example, maternal depression is associated with aggravation in parenting (also referred to as parenting stress), harsher parenting practices, negative attachment between parents and children, and impaired family functioning, including lower quality relationships between parents and worse familial problem-solving (Elgar et al. 2004;Erickson et al. 2019;Wolford et al. 2019;Yeh et al. 2016). Given these relationships, it is perhaps unsurprising that prior research has found that maternal depression is associated with children's behavioral problems, risks of depression, and attachment styles in the short and long term (Bagner et al.
2010;Cummings and Davies 1994;Elgar et al. 2004;Goodman et al. 2011;Pratt et al. 2019). Examining the determinants of maternal depression is therefore important for understanding the wellbeing of mothers and their families. The few studies that have examined these associations for adult women's outcomes have largely been smaller scale, focusing on specific areas or subsets of mothers, such as those recovering from substance use disorders (Clark et al. 2008;DeSantis et al. 2016;Evans et al. 2011). An important exception to this is a study by Huang et al. (2018) that examined the association between health outcomes and a dichotomous measure indicating whether mothers witnessed or were victimized by any type of violence, including shootings, attacks with other forms of weapons, and being hit. The authors found that exposure to violence 2 years prior was associated with health problems, substance abuse, and depression. While that study made very important contributions to the literature on the effects of violence exposure on individuals' health, it did not separately examine exposure to community violence from victimization and did not focus on the association of gun violence with mothers' outcomes. It also explored the associations over a relatively limited 2-year time frame. It is therefore unclear to what extent community gun violence specifically is associated with depression among mothers. --- This Study In this study, we help fill these gaps in the literature by focusing on exposure to gun violence specifically and mothers' symptoms and diagnoses of depression for a large sample of mothers across 20 US cities. We used longitudinal Fragile Families and Child Wellbeing Study (FFCWS) data and examined whether witnessing gun violence in one's local community was associated with mothers' symptoms of depression, using three different depression outcome measures.
We also examined whether witnessing gun violence was directly and indirectly associated with parental aggravation. Our findings highlight the externalities associated with gun violence and contribute to the literature on the social factors that shape parenting practices and children's outcomes. Further, because gun violence disproportionately impacts under-resourced communities and communities of color (Overstreet 2000; Tracy et al. 2019), our findings are important for understanding socioeconomic and racial disparities in wellbeing. --- Data and Methods The FFCWS is a longitudinal survey that followed 4898 children born in 1998 and their parents at the child's birth and at ages 1, 3, 5, 9, and 15. The FFCWS randomly selected 20 US cities with populations of 200,000 or more and selected hospitals within those cities. 1 The FFCWS oversampled unmarried, low-income parents and is therefore not nationally representative. However, because individuals with lower socioeconomic statuses are more likely to be exposed to gun violence (Overstreet 2000; Tracy et al. 2019), the FFCWS captures a sample that is disproportionately affected by gun violence and so is of special interest to this study. We used data from waves 3-6, when children were 3-15 years old, because these were the years for which we had information on mothers' exposure to violence in their communities. We studied approximately 4587 mothers for whom we had valid responses to the depression measures in at least two survey waves (the sample sizes varied across models depending on the outcome and method used, as described below). We did not include fathers because fathers were asked a more limited set of questions than mothers in most years. Missing values on the covariates were imputed using chained equations, imputing the dataset 10 times with STATA 16's "chained" command.
The only pattern we observed in our missing data was that individuals who were missing data on neighborhood poverty rates were more likely to be missing information on county-level crime rates. The FFCWS data are largely publicly available, though we also used restricted access data on the characteristics of respondents' Census tracts and counties of residence to help account for characteristics of the residential environment. This research project was approved by the institutional review board of the Human Subject Division at the University of Washington. --- Depression The FFCWS measured depression using the Composite International Diagnostic Interview-Short Form (CIDI-SF) (Kessler et al. 1998). The CIDI-SF has been used in numerous epidemiological and research studies, and the questions that comprise it are consistent with those included in the Diagnostic and Statistical Manual of Mental Disorders-Fourth Edition (Bendheim-Thoman Center for Research on Child Wellbeing 2020). The CIDI-SF included 15 questions related to whether respondents had feelings of depression or anhedonia for a period of at least 2 weeks during the prior 12 months. These 15 questions included all those used for diagnosing major depression with the CIDI. They did not include questions from the full CIDI questionnaire that are not necessary for diagnosing depression, such as respondents' level of contact with healthcare providers and the recency of their symptoms. Respondents were first asked if they had felt depressed and/or unable to enjoy things for a period of at least 2 weeks since the prior interview and if those feelings lasted most of the day, every day during that 2-week period. Those who agreed that they had feelings of depression or anhedonia most of the day, every day for a 2-week period were then asked 15 subsequent questions about their depressive symptoms.
These questions concerned whether respondents experienced a loss of interest, tiredness, weight changes, sleep problems, trouble concentrating, feelings of worthlessness, and thoughts about death, as well as the frequency of those symptoms (these questions were not asked of those who did not have a 2-week period of depressive feelings that lasted most of the day). Respondents who reported at least 3 symptoms of depression most of the day during that period met the conservative threshold for diagnosing depression. FFCWS created a dichotomous variable indicating whether respondents met the conservative depression criteria, with those who met the criteria coded 1 and those who did not coded 0. Respondents who reported at least 3 symptoms of depression half of the day met the liberal depression criteria and were likewise scored "1" on a dichotomous indicator variable. More information on these scales is provided in FFCWS's public data guide (Bendheim-Thoman Center for Research on Child Wellbeing 2020). In this study, we examined depression in three ways. We examined the probabilities that respondents met the (1) liberal and (2) conservative thresholds for depression, and (3) we constructed a continuous measure of depression symptoms. The third outcome was developed by standardizing the component measures to have a mean of 0 and standard deviation of 1 and summing the results from the 15 CIDI-SF questions into a standardized scale using STATA 16's alpha command. It was important to standardize each measure because the CIDI-SF questions had varying scales: some were dichotomous (yes/no), while others concerned the frequency of symptoms and were ordinal. Standardizing each measure addressed the different scales of these questions. Those who had no symptoms of depression were given a score of 0.
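The scale construction just described (standardize mixed-format items, sum them, and check reliability with Cronbach's alpha) can be sketched in Python. This is an illustrative sketch with synthetic data, not the authors' Stata code; the item variables, noise levels, and sample size are hypothetical:

```python
import numpy as np

def standardize(x):
    """Standardize a column to mean 0, standard deviation 1."""
    return (x - x.mean()) / x.std()

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) array."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
# Hypothetical data: a latent symptom severity plus item noise, mixing a
# dichotomous (yes/no) item and an ordinal frequency item, as in the CIDI-SF.
latent = rng.normal(size=500)
dichotomous = (latent + rng.normal(scale=0.5, size=500) > 0).astype(float)
ordinal = np.clip(np.round(latent + rng.normal(scale=0.5, size=500)) + 2, 0, 4)

# Standardizing puts items with different response scales on a common footing
# before summing them into a single continuous symptom scale.
items = np.column_stack([standardize(dichotomous), standardize(ordinal)])
scale = items.sum(axis=1)      # summed standardized symptom scale
alpha = cronbach_alpha(items)  # reliability of the summed scale
```

Because both items track the same latent severity, the reliability coefficient comes out well above zero; with the paper's 15 real items the reported alpha was far higher still.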
This index had an alpha score of 0.9808, indicating it is highly reliable. --- Exposure to Gun Violence Mothers' exposure to gun violence was measured with a dichotomous variable representing whether mothers reported that they saw someone else get shot 1 or more times in the past year in their community (1 = yes, 0 = no). Mothers were told to only respond about shootings they had seen in their local community or neighborhood and to not include shootings they witnessed in their home or on TV. Mothers were asked this question in waves 3-6. --- Covariates In our regressions, we included a lagged outcome variable representing mothers' symptoms or diagnoses of depression in the previous wave to account for preexisting mental illness. We also accounted for the mother's race/ethnicity (White, Black, Latinx, Other), as well as the mother's educational attainment and employment status, logged household income, and whether the household was in poverty.2 These latter measures helped account for the household's socioeconomic status, which, as noted above, is important for mothers' risk of depression and exposure to violence. We controlled for the number of children in the household, whether the mother was married or cohabiting, and whether she was cohabiting with the father of the focal child. Additionally, we included state fixed effects, the logged violent crime rate in the county, and the percentage of individuals in the respondents' neighborhood (Census tract) who were below poverty level. These contextual variables helped address prior findings that structural disadvantage in one's neighborhood is associated with individuals' risks of depression and exposure to violence (Dawson et al. 2019;Knopov et al. 2019;Wong et al. 2020). Finally, we included a control variable representing the length between survey years to account for the shorter temporal distance between earlier survey waves and the longer distance between later survey waves. 
--- Statistical Analyses To examine the relationship between exposure to gun violence and mothers' depression outcomes, we first used linear and logistic regression models and included our lagged outcome variable as a covariate. Our depression outcome scale was continuous. We used linear regression models for this outcome and included the dichotomous variable representing whether mothers witnessed a shooting as our predictor variable. For the two dichotomous depression outcomes (whether respondents met liberal depression criteria or conservative depression criteria), logistic regression models were used. As with the linear regression model, the dichotomous indicator for whether mothers witnessed a shooting was included as a predictor. We performed bivariable analyses and multivariable analyses with the full suite of covariates. Robust standard errors were calculated at the individual level. We then examined whether our results were robust when we used within-person fixed-effects (FE) models. By examining whether witnessing gun violence was associated with a change in depression within individuals, these models parceled out unobserved, time-invariant heterogeneity and were thus less susceptible to omitted variable bias. Only time-varying covariates were included in these models (gender, race, and education were excluded), and the lagged outcome variable was excluded because the FE models measured change in the outcome. The disadvantage of these models is that they remove all observations for which there is no variation in the outcome, leading to a loss of power and an inability to examine those who had never experienced depression (Hill et al. 2019). As such, it is valuable to examine the lagged outcome and FE models in tandem to provide a comprehensive insight into the relationship between exposure to gun violence and mothers' depression and to guard against the limitations of each model. All analyses were conducted in STATA 16.
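The within-person fixed-effects logic can be illustrated with a toy panel. The sketch below is hypothetical Python (the authors worked in Stata): a time-invariant maternal trait raises both exposure and depression, biasing the pooled estimate upward, but demeaning each mother's observations removes it:

```python
import numpy as np

rng = np.random.default_rng(1)
n_people, n_waves, beta = 200, 4, 0.5

# Hypothetical panel: each mother has an unobserved, time-invariant trait
# that raises both exposure and depression (a classic confounder).
trait = rng.normal(size=n_people)
person = np.repeat(np.arange(n_people), n_waves)
exposure = (rng.normal(size=n_people * n_waves) + trait[person] > 0.8).astype(float)
depression = (beta * exposure + 2.0 * trait[person]
              + rng.normal(scale=0.1, size=n_people * n_waves))

def within_demean(v, ids):
    """Subtract each person's own mean: the fixed-effects transformation."""
    means = np.bincount(ids, weights=v) / np.bincount(ids)
    return v - means[ids]

# Pooled (naive) estimate: contaminated by the between-person trait.
xr = exposure - exposure.mean()
beta_pooled = (xr @ (depression - depression.mean())) / (xr @ xr)

# Within-person FE estimate: OLS on demeaned data recovers beta.
y = within_demean(depression, person)
x = within_demean(exposure, person)
beta_fe = (x @ y) / (x @ x)
```

The demeaning step is why mothers with no within-person variation contribute nothing to the FE estimate, the loss-of-power trade-off noted above.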
--- Supplementary Analyses As noted above, maternal depression is likely to have spillover effects for family members' wellbeing. To more directly examine this possibility, we conducted a supplementary analysis using parental aggravation as an outcome. Parental aggravation was measured by the FFCWS with the following questions, "Being a parent is harder than I thought it would be," "I feel trapped by my responsibilities as a parent," "I find that taking care of my child(ren) is much more work than pleasure," and "I often feel tired, worn out, or exhausted from raising a family." We dichotomized the responses to these questions (1 = agree or strongly agree, 0 = disagree or strongly disagree) and summed and averaged them to create a scale measure of parental aggravation. This scale is well-supported and has been validated in the literature on family functioning (Bendheim-Thoman Center for Research on Child Wellbeing 2020). For this analysis, we used structural equation models (SEM) which allowed us to examine whether witnessing a shooting had a direct relationship with parental aggravation, as well as an indirect relationship through maternal depression. We expected to observe both direct and indirect relationships given that maternal depression is associated with harsher parenting practices (Wolford et al. 2019;Yeh et al. 2016). Solely examining the direct relationship between witnessing a shooting and parental aggravation could therefore underestimate gun violence exposure's impact on parenting. We included the full suite of covariates that had been used for our depression outcomes, as well as a measure for child gender, the continuous measure for maternal depression,3 and a scale measure for child behavior problems, as all may be associated with parenting aggravation.
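The direct/indirect decomposition that SEM provides can be illustrated with a minimal mediation sketch. This hypothetical Python example (synthetic data and path coefficients, not the paper's fitted model) estimates the exposure→mediator path a and the mediator→outcome path b, and computes the indirect effect as their product:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20_000

def ols(y, *xs):
    """OLS coefficients (intercept first) via least squares."""
    X = np.column_stack([np.ones(len(y)), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Hypothetical structural model: witnessing -> depression (a = 0.4),
# depression -> aggravation (b = 0.25), witnessing -> aggravation (c' = 0.3).
witnessed = (rng.random(n) < 0.3).astype(float)
depression = 0.4 * witnessed + rng.normal(size=n)
aggravation = 0.3 * witnessed + 0.25 * depression + rng.normal(size=n)

a = ols(depression, witnessed)[1]              # exposure -> mediator path
coefs = ols(aggravation, witnessed, depression)
direct, b = coefs[1], coefs[2]                 # c' and mediator -> outcome path
indirect = a * b                               # mediated (indirect) effect
total = direct + indirect
share_direct = direct / total                  # share of total that is direct
```

In the paper's terms, `share_direct` corresponds to the proportion of the total association with parental aggravation that runs directly from witnessing a shooting, with the remainder mediated by maternal depression.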
The behavior problem measure was constructed from the Child Behavior Checklist, a list of 34 survey questions on children's behavior problems that parents were asked in each survey wave of the FFCWS (Bendheim-Thoman Center for Research on Child Wellbeing 2020). We summed the 34 questions into a standardized scale with a mean of 0 and standard deviation of 1. Because SEM is not supported with multiply imputed data in Stata, we used the original, non-imputed dataset. --- Results --- Descriptive Statistics For the lagged outcomes models, we observe about 12,846 mother interviews in our analytic sample (observations varied modestly depending on the outcome in question), of which 744 (5.8%) reported witnessing a shooting in any survey year (Table 1). Because the FE models relied on within-person change, they dropped observations for which we did not observe changes in the depression outcomes. As such, fewer mothers are observed in our FE analytic samples, particularly for our conservative depression criteria outcome. Nevertheless, we observed 3547 mother interviews in our FE samples for the conservative depression criteria, our most restrictive criteria. Of those mothers, 259 (7.3%) had witnessed a shooting in the past year (Table 1). We first examined the descriptive characteristics for our lagged outcome and FE analytic samples. On average, those who witnessed shootings were more likely to be persons of color and socioeconomically disadvantaged. Specifically, the descriptive statistics for our lagged outcome sample indicated that mothers who witnessed shootings were more likely to be Black (74.7% vs. 48.6%), in poverty (61.8% vs. 36.6%), and have less than a high school degree (34.0% vs. 22.2%). Mothers who witnessed shootings were also less likely to have a college degree (4% vs. 15.6%) and to be married or cohabiting (38.6% vs. 51.8%). Further, those who witnessed a shooting had lower household incomes and lived in more disadvantaged neighborhoods and counties, on average.
Mothers in the FE analytic sample displayed similar aggregate patterns (Table 1). Our descriptive statistics indicated that mothers who witnessed shootings were more likely to meet conservative and liberal criteria for depression (21.1% vs. 11.2%; 27.8% vs. 16.3%) and had higher depression scores (0.44 vs. 0.08) than mothers who did not witness shootings. Mothers who witnessed shootings were also about 3-4 percentage points more likely to have met liberal or conservative depression criteria in prior waves than mothers who did not, though they also exhibited greater changes across waves in their depression scores and in meeting depression criteria (Table 1). Similar disparities were observed in the descriptive statistics for our FE samples, though we also observed that mothers in our FE samples were more likely than mothers in our lagged outcome samples to meet criteria for depression regardless of whether they had witnessed a shooting. This is unsurprising given that the FE models relied on mothers who exhibited changes in depression criteria. Nevertheless, mothers who witnessed shootings in our FE sample underwent larger increases in depression scores and in the proportion meeting depression criteria across waves than mothers who did not witness shootings (Table 1). As such, our descriptive statistics provide suggestive evidence that witnessing a shooting is associated with higher probabilities of meeting depression criteria. --- Regression Models In both our bivariable and multivariable lagged outcome models, witnessing a shooting was associated with significantly greater symptoms of depression and a significantly higher likelihood of meeting criteria for depression based on the conservative and liberal CIDI-SF definitions.
These results held in both the bivariable and multivariable models, though prior diagnoses of depression and individuals' socioeconomic and marital statuses explained modest portions of those relationships. In the fully specified multivariable models, witnessing a shooting is associated with an increase in depression scores of 21.4% of a standard deviation and is associated with a 58.3% and 57.8% increase in the odds of meeting the liberal and conservative criteria for depression, respectively (Table 2). The FE models largely confirmed the conclusions from the lagged outcome models. Specifically, those who witnessed a shooting experienced a significant increase in their depression scores and exhibited an approximately 32.5-39.2% increase in the odds of meeting both the conservative and liberal criteria for depression (Table 2). We did not observe any subgroup differences in these relationships by race, ethnicity, or socioeconomic status. --- Supplementary Analyses In supplementary analyses, we examined the direct and indirect relationships between witnessing a shooting, maternal depression, and parental aggravation, using SEM. We found that witnessing a shooting had a direct and significant association with parental aggravation as well as an indirect relationship through maternal depression (Fig. 1). Cumulatively, witnessing a shooting was associated with a 15% standard deviation increase in parental aggravation scores. Approximately 90% of that association was the result of the direct relationship between witnessing a shooting and parental aggravation, and an additional 10% resulted from the indirect relationship between witnessing a shooting, maternal depression, and parental aggravation. Thus, witnessing a shooting may impact parenting outcomes directly and indirectly by increasing mothers' risk of depression. We found substantively the same relationships using lagged outcome and FE models with our multiple imputation samples, though neither model clearly highlights the direct and indirect relationships between these measures. We therefore focused on the SEM models here. --- Discussion In this study, we found that 5.8-7.3% of low-income mothers in urban areas witnessed shootings in their local communities, a meaningful proportion. For these mothers, witnessing gun violence in their community was associated with significantly more symptoms of depression and with meeting conservative and liberal criteria for depression. In fact, witnessing a shooting was associated with a roughly 32-58% increase in the odds of having depression depending on the model and depression criteria used. These are highly meaningful increases, and these relationships held after accounting for numerous characteristics of the mother, her household, and residential context that might shape the relationship between witnessing a shooting and depression. We also find that witnessing a shooting has a direct association with parenting aggravation, as well as an indirect association through maternal depression, reinforcing that these relationships are likely to have spillover consequences for family members' wellbeing. These findings are important for scholars and policymakers.
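As a reading aid for magnitudes like the 32-58% figures above: a logistic-regression coefficient β maps to a percentage change in the odds via (e^β − 1) × 100. A minimal sketch (the coefficient 0.46 is hypothetical, chosen only to illustrate the scale of the reported estimates):

```python
import math

def pct_change_in_odds(beta):
    """Percent change in the odds implied by a logit coefficient."""
    return (math.exp(beta) - 1) * 100

# A hypothetical coefficient of 0.46 corresponds to roughly a 58%
# increase in the odds of meeting depression criteria.
increase = pct_change_in_odds(0.46)
```

This conversion is why odds-based percentages can exceed the corresponding change in predicted probability, especially when the baseline probability of depression is not small.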
Our focus on community gun violence is a contribution to the literature on violence and mental health, which has largely focused on victimization and/or broader measures of community violence and has not specifically focused on gun violence. Ours is among the few studies to observe these associations for mothers, rather than children, adolescents, victims, or perpetrators. The findings observed in this study therefore contribute to the literatures on mental health, gun violence, and parent outcomes. Moreover, while resources are frequently directed toward the victims of gun violence, our findings demonstrate the importance of providing resources for coping with trauma on a wider, community-level basis. This is especially important because women exposed to local violence are more likely than unexposed women to experience other forms of violent victimization (i.e., polyvictimization) (Willie et al. 2017) and less likely to have access to healthcare resources (King and Khanijahani 2020). Similarly, Black, Latinx, and low-income mothers are disproportionately exposed to gun violence and less likely to have access to healthcare resources and mental health facilities (Bridges 2011; Dimick et al. 2013; White et al. 2012). Latinx and, especially, Black mothers are also more likely to experience additional stressors such as incarceration or the incarceration of a loved one (Wildeman 2009), poverty, and discrimination (Oh et al. 2020). While Black and Latinx women are less likely than White women to report depression (Oh et al. 2020), our findings suggest that gun violence occurs in tandem with numerous stressors that are likely to take a toll on mothers' mental health. These relationships could, in turn, exacerbate racial and socioeconomic disparities in wellbeing. Our findings therefore indicate that mothers exposed to gun violence are likely to be an important, underserved group of individuals who are vulnerable to depression.
Further, we demonstrate that the costs of gun violence are underestimated unless the effect of gun violence on community members is accounted for. These effects include the direct effects of gun violence on witnesses and its indirect effects on the loved ones of witnesses who may be impacted if their kith or kin experience depression as a result of their exposure. We document that one spillover effect may be an increase in parental aggravation, which is associated with lower quality parent-child relationships and child behavior problems (Ward and Lee 2020). Because low-income children are more likely to be exposed to violence and live in single-parent families and families with greater parenting stress (Cooper et al. 2009), exposure to gun violence may exacerbate disparities in child wellbeing. As noted above, maternal depression is also associated with family functioning across a wide number of dimensions not explored here. Our findings are therefore important for illustrating that an important feature of residential contexts, local occurrences of gun violence, shapes the outcomes of mothers and their families. Additionally, our findings demonstrate the importance of improving individuals' access to healthcare resources and mental health facilities in areas that are exposed to gun violence. Developing support groups that are targeted to community members who live in areas affected by gun violence may be especially useful. Providing targeted supports to parents to help manage the stresses of parenting may also help ameliorate the association between local violence, mental health, and parenting outcomes. These supports could include mental health counseling, as well as access to low-cost, high-quality childcare and after-school resources that help ease the stress of parenting in a neighborhood that may be higher risk for children and family members (Craig and Churchill 2018). 
Finally, the presence of local community groups devoted to reducing violence is associated with decreases in local crime rates (Sharkey et al. 2017). Empowering communities through such local groups may not only help alleviate gun violence but could be beneficial for mental health as well. --- Limitations These findings are subject to limitations. First, we were only able to observe mothers across 4 waves of the FFCWS. During this time period, mothers were surveyed when their children were 3, 5, 9, and 15. As such, it would be valuable to have more waves of data at smaller, more regular intervals in order to better isolate the relationship between witnessing a shooting and mothers' symptoms of depression and to ensure that we capture the correct time ordering of witnessing gun violence and experiencing depression. Moreover, having fewer survey waves tends to lead to conservative estimates using FE models (Hill et al. 2019). Our ability to perform both lagged outcome and FE models helped account for the limitations of each method and the survey design. Nevertheless, these limitations are important to consider when interpreting the results, especially regarding potential omitted time-varying characteristics and experiences that occurred between survey waves. Indeed, it is possible that some of our respondents witnessed gun violence between survey waves. As a result, our comparison sample may include individuals who witnessed gun violence and who are suffering negative mental health outcomes as a result. If this is the case, our estimates may be conservative. This possibility is reinforced by prior research showing that children exposed to violence tend to be exposed multiple times, with these multiple exposures corresponding to progressively worse mental health outcomes (Copeland et al. 2007, 2010). As such, results of this observational study should be interpreted as associational rather than causal.
Additionally, in our lagged outcome models, we would ideally include fixed effects at a finer geographic level of aggregation than states to better account for neighborhood- and city-level variation. However, too few individuals in the FFCWS shared Census tracts to use fixed effects at this level, and city identifiers were not available in every year. Our ability to include measures of county and neighborhood characteristics helps ameliorate some of this concern. Finally, the FFCWS focuses on larger urban areas and low-income families. Our results cannot therefore be generalized to the broader population. --- Conclusion This study provides important insights into the relationship between community gun violence and mothers' risk of depression, demonstrating that witnessing gun violence in one's community is associated with significant and meaningful increases in mothers' symptoms of depression, the probability that they meet the criteria for diagnosable depression, and their reports of parenting aggravation. These associations demonstrate the importance of providing local mental health and community resources for those who are exposed to violence and who may experience long-lasting trauma as a result (Rowhani-Rahbar et al. 2019). --- Compliance with Ethical Standards --- Conflict of Interest The authors declare that they have no conflict of interest. Ethics Approval This study was deemed minimal risk and was approved by the University of Washington Institutional Review Board (IRB). --- Consent to Participate The study relied on precollected data and involved no contact with participants on the part of the research team. Informed consent was obtained from all individuals who participated in the study by the Fragile Families and Child Wellbeing Study at baseline and at all subsequent waves. Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Gun violence is a uniquely prevalent issue in the USA that disproportionately affects disadvantaged families already at risk of health disparities. Despite the traumatic nature of witnessing gun violence, we have little knowledge of whether exposure to local gun violence is associated with higher risks of depression among mothers, whose symptoms of depression are likely to have spillover effects for kin. We examined the association between exposure to gun violence in mothers' neighborhoods and their experiences of depression using longitudinal Fragile Families and Child Wellbeing Study data (n = 4587) in tandem with lagged outcome and fixed effect models. We find that mothers who witness at least one shooting in their neighborhoods or local communities exhibit more symptoms of depression and are 32-60% more likely to meet criteria for depression than mothers who do not witness a shooting. We also find that witnessing a shooting is associated with increases in parental aggravation, which is partially mediated by maternal depression. Given this and other previously documented spillover effects of mothers' mental health on children and family members, these findings have important implications for mothers' wellbeing and their kin. Further, we observe substantial racial and socioeconomic disparities in exposure to gun violence, suggesting that gun violence may heighten health disparities and drawing attention to the importance of providing mental health resources in communities that are most affected by gun violence.
Introduction --- Well-being is viewed as the state of people's life conditions (Sumner, 2006), and some researchers have examined community well-being by using individual attributes such as satisfaction, happiness, quality of life, individual efficacy/agency, and/or social support (Andereck et al., 2007; Jurowski and Brown, 2001; Kerstetter and Bricker, 2012). Well-being measurements have progressed to encompass broader dimensions such as social and environmental aspects and human rights (Sumner, 2006). It is now widely accepted that well-being is a multidimensional concept that encompasses all aspects of human life. Sustainability theories increasingly incorporate utilitarian concepts of well-being, demanding the development of destinations that provide more advantages to a higher number of people within the constraints of available resources (Kay Smith and Diekmann, 2017). The overall welfare of a community necessitates that together these various components work well and maintain a healthy balance (Christakopoulou et al., 2001). Community well-being is a combination of social, economic, environmental, cultural, and political conditions identified by individuals and their communities as crucial for them to flourish and fulfill their potential (Wiseman and Brasher, 2008). Cummins (1996b) found that satisfaction associated with the community well-being domain occurs when people are satisfied with education, neighborhood, services, facilities, social life, and social relations. Community satisfaction makes a significant and positive contribution to community members' perceptions of their quality of life (Norman et al., 1997). Many factors can directly or indirectly affect community well-being. Equally, one aspect of community well-being can impact another (Lee and Kim, 2015).
For instance, there is a well-established link between economic well-being and health (Bushell and Sheldon, 2009), living environment and psychological needs (Aziz et al., 2022), as well as community satisfaction and attachment to an area (Özkan et al., 2019; Theodori, 2001). Finally, community members with a high level of place attachment are more likely to engage in local volunteer work, collaborate with one another, and influence positive change in the community (Mulaphong, 2022). In the wider context, human well-being exists when individuals are able to cope with psychological, social, and physical challenges (Dodge et al., 2012). The definition of human well-being is complex and subjective, varying across perspectives (Clark, 2014). In the context of the local community, well-being will be achieved if people are satisfied with the dimensions of the environment, economy, life and social relations, services and facilities, education, neighborhood, and culture, in line with previous studies (Andrews and Withey, 1976; Cummins, 1996a, 1996b; Norman et al., 1997; O'Brien and Lange, 1986; Wiseman and Brasher, 2008). Authority intervention is considered a mediating variable in this study. The current administration and political power determine the well-being of the local population via socioeconomic indicators and infrastructure, while also taking into account their satisfaction with economic, social, and environmental dimensions. For example, if the local community is satisfied with these dimensions, then they would realize the importance of sufficient funds for conservation efforts, so that they could also enjoy the benefits of a healthy ecosystem such as fresh air, adequate income, good mental and physical health, and a healthy source of food.
The locals' scientific and indigenous knowledge of conservation also contributes to a strong sense of physical and spiritual connection to a place with rich natural and cultural attributes (Mokuku and Taylor, 2015). Meanwhile, locals with low ecocentrism and limited conservation knowledge are more likely to engage in economic activities that disregard sustainability (Kaufman, 2015). The local community in GMNP is frequently dissatisfied with the local government with regard to land ownership and logging activities that threaten their traditional way of life. This relates back to the notion that human rights are a necessity that safeguards dignity and equality, which is emphasized at the global level (Clark, 2014). Although there are government initiatives to improve their standard of living, to what extent do they want to accept such initiatives in improving their well-being? Thus, human development, which describes a process of enlarging people's freedoms and opportunities and improving their well-being, can also be challenged. In this light, empathy rises to the forefront as the primary focus for empowering communities to build resilience in the face of crises and to heal the things that drive a wedge between them (Berardi et al., 2020). Communication between the local community and the state government, which leads to understanding of and support for biodiversity conservation efforts at GMNP, needs to be further refined in terms of its effectiveness. Thus, the ground-level issues of the locals need to be addressed so that their well-being can be sustained and become the key to an effective biodiversity framework for Gunung Mulu National Park (GMNP). Therefore, in this study, we aimed to explore the community well-being dimensions in terms of environmental, economic, and social aspects, as well as authority intervention, based on the perspectives of the local community and professionals, with an emphasis on current issues in GMNP.
--- Conceptual framework The conceptual framework was developed based on the relevant literature on community well-being aspects described previously. Figure 1 shows the conceptual framework built in this study, which covers environmental, economic, and social aspects with authority intervention and the COVID-19 pandemic as mediators. The pandemic has affected the community socially by forcing people to isolate, and it has disrupted the economy of society. It has also caused major environmental changes, including an increase in domestic waste (Sharma et al., 2020). Thus, this study considers the COVID-19 pandemic as a mediator of well-being alongside authority intervention. --- Methods Research area. GMNP is a national park located in Marudi Division, Sarawak, Malaysia (Fig. 2), and is one of the UNESCO World Heritage Sites. UNESCO (2021) defines a WHS as a place on Earth of exceptional importance to humanity as a whole, to be preserved for current and future generations to enjoy and appreciate. To date, 1007 natural and cultural places have been inscribed on the list, such as the Taj Mahal (India), the Grand Canyon (USA), and the Pyramids (Egypt). The WHS designation for GMNP is particularly beneficial for in situ biodiversity conservation, where greater awareness of its status could raise the level of preservation of its valuable properties. With this status, areas under the WHS receive financial support and expert advice from the WHS Committee to ensure the sustainability of the sites. The status has also improved community well-being in GMNP by allowing people to work together to enhance their economic and cultural development, especially through heritage tourism. The GMNP area covers about 52,864 hectares of the mountainous part of northern Sarawak.
It is separate from other developing areas, lying between the headwaters of the Tutuh River and the Medalam River, a tributary of the Limbang River. Its location along the Brunei-Sabah-Sarawak-North Kalimantan Transboundary Landscape is one of the six priority landscapes in the protected areas of Borneo (WWF, 2017). GMNP and nearby villages such as Sungai Melinau Village, Batu Bungan Village, Long Iman Village, and Long Terawan Village are inhabited mostly by the Penan and Berawan communities, which are indigenous to the park. The local community of Sungai Melinau Village, from the Berawan community, is the most engaged in tourism services such as homestays and transportation (e.g., longboat and car) in Mulu. Other local community members mostly work as farmers or fishermen for their livelihood. The number of tourist arrivals to GMNP in 2019 was 21,022, higher than the 18,632 recorded in 2015 (Sarawak Forestry Corporation, 2020). The trend shows that the number of international tourists is almost double that of domestic tourists. However, since the pandemic hit in 2020 and 2021, the data has not been available for disclosure by the Sarawak Forestry Corporation because the tourism industry's progress has been too slow. Research technique. This study employed a mixed-methods approach, specifically a concurrent nested design, which was considered more appropriate given the time constraints and the comfort of the respondents during data collection. This design gives priority to one of the methods, which guides the study, while the other is embedded; the embedded method, i.e., the quantitative or qualitative method, addresses a question different from the dominant one or seeks information at a different level. Fig. 1 The conceptual framework used in the study. Community well-being depends on its members' satisfaction with environmental, economic, and social factors, as well as mediators such as authority intervention and the COVID-19 pandemic.
The quantitative measures were triangulated with key informants' narratives, which allowed for a greater understanding of the meaning of the quantitative findings (Cresswell, 1999; Teddlie and Tashakkori, 2009). Mixed methods that combine qualitative and quantitative data provide a convenient approach to everyday problem-solving (Tashakkori and Teddlie, 2010). Based on the Miri Resident District Office (2020), the total population of Mulu Subdistrict is 4696. Using the sample size formula by Kothari (2004) with an acceptable error of 10% at the 95% confidence level, we aimed to interview 99 respondents. With the movement restrictions due to the COVID-19 pandemic, obtaining a sample size with a 5-10% margin of error seemed impossible because of the low number of residents willing to participate and their fear of having contact with researchers. Kothari (2004) also emphasized that the selection of a research design and sample size must be realistic, taking into account budget and time constraints, while minimizing sampling error as much as possible. Thus, the Likert-scale questionnaire was disseminated to 99 local community members in April 2021 through convenience sampling. The sample size is appropriate for an acceptable error of 10% at the 95% confidence level. The local community members involved are those who live in the settlement areas around GMNP, including Kampung Batu Bungan, Kampung Long Iman, and Kampung Long Terawan. Respondents had to be local community members over 18 years old who had lived at the study site for more than 5 years. For the qualitative approach, personal interviews with twelve key informants were conducted using snowball sampling. They were identified as key informants because of their first-hand knowledge and active involvement in the community. Their narratives provide a qualitative dimension with meaning, significance, and rich understanding (Tashakkori et al., 2020).
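The stated sample size can be cross-checked with the finite-population sample-size formula in its Cochran form, one common version of the formula Kothari (2004) presents; the z-value and the assumed proportion p = 0.5 below are assumptions not stated in the text:

```python
import math

def sample_size(population, margin_of_error, z=1.96, p=0.5):
    """Finite-population sample size: n = N*z^2*p*q / (e^2*(N-1) + z^2*p*q).

    population: total population N; margin_of_error: acceptable error e;
    z: z-score for the confidence level (1.96 for 95%); p: assumed
    proportion (0.5 is the most conservative choice).
    """
    zpq = z**2 * p * (1 - p)
    n = population * zpq / (margin_of_error**2 * (population - 1) + zpq)
    return math.ceil(n)

# Mulu Subdistrict population of 4696 at a 10% margin of error, 95% confidence
n = sample_size(4696, 0.10)
print(n)  # -> 95
```

This form yields about 95 respondents, close to but not exactly the 99 reported; the exact variant of the formula used in the study is not specified.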
The number of key informants (n = 12) is acceptable for the qualitative approach, whose nature requires a small sample size. According to Hammarberg et al. (2016), a large sample size is not recommended because it may cause excessive-data issues that affect the depth of scope for rigorous analysis. The key informants' voices help to achieve data saturation, external validity, and/or information redundancy (Onwuegbuzie and Leech, 2007). These data collection techniques align with those stated in Lietz and Zayas (2010) for increasing the trustworthiness of qualitative research. This qualitative longitudinal research method was chosen for this study because it is suitable for pandemic- or disaster-related studies, in which unique and rapidly changing environments necessitate more comprehensive descriptions of the human condition (Terzis et al., 2022). Furthermore, we also employed the reflexivity method of writing memos or field notes throughout data collection in order to comprehend the significance of each interview and observation session, as suggested by Yong et al. (2019), to ensure the quality of our qualitative study. Table 1 shows the characteristics of the twelve key informants interviewed in different sessions. Data analysis. The descriptive data, which include sociodemographics and respondents' community well-being, were analyzed using the IBM Statistical Package for the Social Sciences (Version 24), while the key informants' narratives were themed deductively using Atlas.ti version 8 software. The themes highlighted the environmental, economic, and social aspects, which play a crucial role in community well-being in GMNP. --- Results and discussion Sociodemographic. Table 2 shows the demographic background of the respondents. Approximately 60.6% (n = 60) of respondents are male, while the remaining 39.4% (n = 39) are female. The majority of respondents (67.7%) are Orang Ulu, the Penan and Berawan ethnic groups who are indigenous to GMNP.
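The percentages in Table 2 follow directly from the raw counts; a minimal sketch (the counts are taken from the text, the helper name is illustrative):

```python
def pct(count, total):
    """Share of the sample as a percentage, rounded to one decimal place."""
    return round(100 * count / total, 1)

TOTAL = 99  # respondents surveyed
print(pct(60, TOTAL))  # male respondents   -> 60.6
print(pct(39, TOTAL))  # female respondents -> 39.4
```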
Most of them have received at least a secondary education and are employed in tourism services such as accommodation and transportation (e.g., longboat and car) in Mulu. Their income in the tourism sector was less than MYR2500 per month, which is considered the low-income group in Malaysia (Department of Statistics Malaysia, 2020). Locals' perspectives on well-being. Table 3 shows the mean analysis of community well-being based on the respondents' perspectives. In the context of this study, community well-being is assessed through their satisfaction with the dimensions of well-being in environmental, economic, and social terms. The average respondent is very satisfied with the environmental and social aspects of GMNP. However, respondents showed only a moderate level of satisfaction with services and facilities. Although both the environmental and social variables show good satisfaction on average, respondents are not satisfied with their current monthly income, which has been quite limited due to the COVID-19 pandemic. The well-being of the community in Table 3 is based on the respondents' perspective. According to Ibrahim et al. (2021), the locals' perception of elements of well-being that are affected by governance is important and needs to be taken into account for sustainability in general. Efforts to improve this capacity need to happen at the macro level, namely the existence and ability of an organization to provide sufficient investment to empower individuals toward a sustainable community (Zamhari and Perumal, 2016). Consequently, twelve key informants clarified the elements of well-being through the environmental, social, and economic dimensions. Environmental dimension. The mean analysis in Table 4 shows that respondents consider biodiversity issues in the area, as a whole, to be minor.
This includes water pollution, extinction of animals and plants, degradation of wetlands, solid waste, and wildlife threats. This contradicts the narratives given by key informants, including K1 in this study, who stated that these issues, particularly wildlife conservation and waste management, need attention in GMNP and the surrounding area. It also suggests that the local community is less aware of the biodiversity problems occurring in their area. The imagined environmental futures of communities illuminate significant issues within the existing relationships between themselves and their physical surroundings (Nash et al., 2019). Locals consider it a less significant matter, but it needs to be taken seriously. Water pollution. The tourism sector does not affect water pollution in GMNP. However, the attitude of a few parties who lack environmental awareness contributes to water pollution. For instance, private oil palm companies are reportedly conducting logging operations in the vicinity of GMNP, resulting in cloudiness of the river water in the region, particularly downstream (Cheng, 2019). This issue seems to be beyond the people's control because it involves local companies and authorities (Kendall, 2022). Based on observations, water pollution involving toxic waste does not occur, but the cloudiness downstream is due to sedimentation from logging activities near the settlements. You see how cloudy the river water is now? In the early 1970s, the Tutoh River was still clear and beautiful. We used to drink river water directly. (K10) The logging companies are no longer operating in the area, but the cloudy river water remains, probably due to domestic waste discarded into the river by a few local community members. Solid waste.
The issue of solid waste in GMNP and its surroundings arises from several factors requiring attention from facilities management. Poor solid waste management, including the lack of facilities to treat waste, leads locals to dispose of trash in open spaces such as rivers (Salam et al., 2022). There is no formal waste management system here. The locals have to manage their own waste. In the past, we used to propose a landfill, and the site was selected after a meeting with relevant stakeholders for many years. The population is increasing. The rubbish is increasing. I have experienced it myself in a longhouse where the people just throw the rubbish into the river that floats down. (K1) Population growth in Mulu has been accompanied by a lack of services normally handled by the council, such as garbage disposal. There is no proper sewage. Everything will be thrown onto the ground and eventually end up in the river. That's why the river is polluted. It's a major concern. We don't have any policies on how to deal with the area that is outside the park but nearby. For example, okay, you can't cut this. People just cut it, which means it can impact the microclimate and a lot of things. (K2) Local people have long been inclined to throw waste into the river. If this continues, it will degrade the water quality and directly harm aquatic life. Moreover, garbage suspended in the river will spoil the scenic view for tourists cruising the river to visit recreation areas in GMNP such as Long Iman and Camp 5. In the past, the locals usually threw leftover food into the river because it was organic. That situation is acceptable, but now the current generation is throwing away plastic that won't rot. (K8) --- Garbage is dumped randomly into the Melinau River. There is a lot of plastic in the trees. (K10) I burn garbage. If it's a can, I dig a hole to plant it in.
(K5, K6) There are a few irresponsible people who throw garbage into the river, perhaps when there are no people in the river at night. (K6) As locals who are custodians of this UNESCO WHS, they are supposed to have a better ecocentric attitude than visitors. However, according to K5, K6, and K9, the visitors are very disciplined: instead of randomly discarding trash, they pick it up from the side of the road and place it in the trash cans provided at their accommodation. This may be due to the attitudes brought by ecocentric visitors to GMNP. Such visitors can influence locals to have better attitudes toward nature conservation (Arnberger et al., 2019). The national park and the Marriott Hotel have also worked together to run a community service program emphasizing conservation-related environmental awareness education for the local population regularly, including river cleaning since 2006. To promote conservation activities, it is essential to have a well-developed, community-specific activity system including manpower, budget, community awareness, and consensus information (Hargreaves-Allen et al., 2017). Normally, we try to organize things in a day. We divide them into different sections of the river. Penans from Batu Bungan Village will clean between here (GMNP headquarters) and their settlement, while Sungai Melinau villagers (Berawans) will clean up to Kuala Melinau (Tutoh River) because of the spread of houses in between. They will take the bigger section because the community is bigger. Then, we send a boat to assist the community as well. Everybody will bring their garbage here, and we will count and monitor how much is collected each time. We have this communal work. At the same time, running the awareness mainly focuses on trying to convince people not to dump rubbish into the river, which is the best way to deal with it. It is quite hard when some people change their practices, but some ignore the advice. They still dump it.
Some villagers (from the Melinau River area) go by boat and dump it in the Tutoh River. It seems like the practice is still there, but there have been some positive changes where the rubbish is not as bad as it was previously, even though income and the number of people have increased. Now, this is not just a local community. But the clinic, school, airport, and district office staff are here as well. We hope that there is some progress because the garbage problem in the river will never be solved. (K1) According to K1 and K8, both parties were also given a budget by the government to manage the collection of residents' garbage at GMNP Headquarters and the Marriott Hotel and transport it by boat to the landfill facility in Marudi once a month. This is seen as a cheaper initiative than building a landfill in the national park itself. The pressure of an increasing population causes solid waste management to become a matter that needs due attention from relevant stakeholders. To empower the local community, the park management has also proposed that the government provide allocations for transporting rubbish using boats handled by the local community themselves, since most of them own a boat. This could also improve their well-being by generating income while fostering an ecocentric attitude arising from strong place attachment. Degradation of wetlands. The degradation of wetlands in Mulu is caused by anthropogenic activities, including recent deforestation both downstream and upstream. Maybe one of the biggest issues getting worse now is clearing along the riverbank of a stream because of the COVID-19 pandemic, where most villagers from Batu Bungan Village have started farming for their livelihood. But the problem is that many have been clearing trees right to the riverbank, which could increase erosion. While this problem happened in the past, it was not as severe as it is now.
But now, it is becoming very severe because nearly all the trees on the opposite bank are being cut down. (K1) According to K4, K9, K10, and K11, deforestation for palm oil plantations caused landslides along the riverbanks, particularly in the downstream region. Although it occurs outside GMNP, it still has the potential to disrupt the local ecosystem, especially the park's high-diversity area. Due to the potential for environmental damage, monoculture plantation activities should not be conducted within 80 kilometers of the park. According to Brockerhoff et al. (2017), monoculture plantations are likely to have lower levels of biodiversity than their surrounding native forests. In addition, loss of soil productivity and fertility, disruption of hydrological cycles, risks associated with plantation forestry practices (e.g., the introduction of exotic species), risks of promoting pests and diseases, increased risk of adverse effects from storms and fire, and negative effects on biodiversity are all potential outcomes. Extinction of animals and plants. Based on Table 2, respondents believe that the extinction of animals and plants in GMNP is nearly non-existent. This suggests that, on average, they are less aware of changes in population viability. They should be concerned about issues that have the potential to lead to extinction. I think the area of this forest is still large, so animals will not easily become extinct. Monkeys are also still in the palm oil plantation area for foraging. (K5) I think the issue of extinction is not there at all. (K6) However, according to K7, K10, and K9, animals including the pangolin and sun bear are increasingly difficult to see compared with before, and these animals are likely facing the threat of extinction due to irresponsible hunters in the GMNP area and its surroundings. We eat most of the animals in the park, to the point of posing a threat to them. We usually hunt mice and deer outside the national park.
Now it is almost difficult to see hornbills, pergam, punai, and kuang birds compared to the 1960s. Based on my personal opinion, Ulu people don't love animals. Other animals such as forest cats, foxes, bats (large hawk-eagle), pythons, and red cats are also hunted. As for the trees, large and old trees were once cut down to be exported abroad. You can see the view of the forest that is diminishing from the air space when taking the plane here. The only remaining wood is small and young. (K10) Limited awareness of extinction has also been documented in other rural communities surrounding protected areas (Ma et al., 2021). Nonetheless, every individual should be concerned about the extinction of these species. To protect the diversity of species in GMNP, it is necessary to address the factors that cause extinction. Wildlife threat. The primary threats to wildlife in the area are hunting, deforestation, and oil palm monoculture plantations. Locals have the right to hunt wild boar, deer, and mouse deer and to fish in certain sections of the river. For the nomadic Penans, they could hunt in any area of the park, but there are few nomadic Penans left and most of them have settled. But we consider the people at Batu Bungan Village as seminomadic, and they go into the forest sometimes. They were given a certain area for hunting and fishing within the park. Basically, it has become a problem because they just live opposite the park and river, so they can easily enter the park for hunting and fishing. They do hunt and fish outside the designated areas that are located next to their village. The problem is that hunting is not restricted to wild boars. Some people hunt endangered and even totally protected species. There are not many animals in terms of the abundance of mammals due to hunting. It is not just restricted to Penans, but some Berawans also hunt in the park. There is much less hunting on their side.
There is more hunting downriver from here but not inside the park, particularly for wild boar. But it does not mean they did not hunt here. (K1) In Mulu, locals living near the national park have the right to hunt wild animals that are not protected. But when they hunt, usually the people there will hunt anything they find. Although there is a law, the parties here are less able to carry out effective enforcement. (K2) Frankly, I have hunted, and it is most likely in the picture (given the pictorial questionnaire). But we do not hunt many animals at a time. It is just a few animals, sufficient for the number of family members. We usually hunt monkeys and squirrels. Birds are hard to catch because they fly. I use a blowpipe that contains rubber poison to hunt. (K9) Hunting, so far, is a lifestyle for the Penan and Berawan for survival. They do not keep animals for food because they have been trained to hunt and find fish as food since long ago. To forbid them from hunting is quite impossible. They will eat all the animals. Furthermore, they use blowpipes, which are considered silent killing tools that can catch more animals and are not so easily detected compared to guns. (K7) Illegal wildlife trade is also likely to occur, involving locals and outsiders. K7 explained that pangolin species are in demand from buyers. He also explained that some use tissue culture to breed certain species of orchids and sell them quietly. Similarly, in Phong Nha-Ke Bang National Park, Vietnam, a few residents are not concerned about selling wild animals to outsiders for their profit (Truong, 2022). Next, disruption of the ecosystem may occur due to the conversion of forest to monoculture plantation. According to a study by Ridwan et al. (2011), monoculture oil palm plantations in nearby areas reduce the foraging and roosting activities of tropical bat species, which rely heavily on forests for food and shelter. The park itself is well-protected.
Maybe one of the issues, hunting in the park, is not a huge major issue. Other than that, there are threats to the area outside the park because the boundary does not protect all the species within the park, since some species move outside the boundaries. One example is the wrinkled bats in Deer Cave; as you can see, they fly very far from the park and are exposed to threats outside the park that could affect their population, and one of the biggest problems is probably monoculture plantation outside the park. This is why UNESCO has recommended not clearing the forest or conducting monoculture tree plantation activities within 25 km of the boundary of the park. But actually, there have been some issues with it as well. There have been some areas outside the park designated for oil palm plantation and opposed by locals. (K1) Logging for oil palm plantations is happening outside the park and I don't agree. Imagine fitting a palm oil plantation right next to this WHS. If the forest is all cut down, the animals will run to the untouched virgin forest of Brunei, which is not far away, only 21 kilometers from here. Here we are messed up. (K7) Bat species in GMNP are keystone species because their extinction would affect the cave ecosystem. Guano from bats is an important energy source, with large, varied, and unique ecosystems existing around such deposits (Moulds et al., 2013). I think it affects the population outside of the park; the foraging would probably affect the future of the park, primarily because they forage in open areas and above the canopy. Now, they are probably dependent on insects above the canopy since insects are abundant. If the forests are cleared, there will be fewer insects, so they will lose food. If the area is cleared for monoculture plantations, they tend to introduce a lot of pesticides that will affect the bats.
Not just the bats themselves, because when they fly back to the caves, they affect all the dependents and everything in the cave, so their young will also be affected. (K1) If you do a lot of things around GMNP, then things in GMNP will not be able to survive. Just in case, the outstanding universal value in Mulu is the swiftlets and bats in the cave, but the food source is only 50 km around the cave. That brings you all the way to Brunei or other parts of Baram, but if there is no control over land use in the area, there may be fewer food sources for these animals. So, the number of OUVs could be impacted, which is very critical. (K2) Some Berawans, including K4 and K5, also stated that they always go to the GMNP forest area bordering the Labi Forest Reserve, Brunei, and claimed that they can see more wildlife there, including maroon leaf monkey and gray leaf monkey species. It seems that the area is well suited to support the survival of these species owing to the lack of human interference and the availability of food and shelter. The area is located in the Heart of Borneo priority landscape, the Brunei-Sabah-Sarawak-North Kalimantan Transboundary Landscape, which acts as an ecological corridor connecting wildlife, including endangered species such as Borneo orangutans, Borneo pygmy elephants, hornbills, and Muller's gibbon, among others, that thrive in the region (Keong and Onuma, 2021). However, not all animal species have migrated towards the Labi Forest Reserve, as claimed by residents. This is because GMNP is the habitat with the most suitable ecosystem for several species, including bats. Obviously, the park hosts a lot of wildlife, including quite a few protected species, especially those related to caves (which feed in caves, etc.). It depends on the species and what kind of habitats they have, such as limestone, because they can't go to Brunei. After all, there's no limestone on the Brunei border.
Even the karst itself provides the surface with cavities, which provide space for animals to hide, especially mammals. There's no support area in Brunei for them. Brunei, of course, has forest, but it does not have the karst and cavity caves that are found here. Mulu has karst to support them. Higher elevation plays a crucial role. Brunei does not have high mountains, so many species are also restricted to higher altitudes. We have a lot of endemic species here, especially frogs and reptiles, that just can't go to Brunei because the habitat is not suitable there. (K1) Observing the animal hunting that takes place in GMNP and its surrounding area, the locals are still subject to the National Parks and Nature Reserves Ordinance of 1998 and the Wildlife Protection Ordinance of 1998, even though they have certain rights as indigenous people, including being allowed to hunt certain animals. Traditionally, local people hunt animals in such areas for their livelihoods, but these local hunting restrictions cause them to change slightly and to depend more on governments and NGOs to provide financial aid (Heim and Pyhälä, 2020). Economic dimension. The COVID-19 pandemic has caused a significant change for locals who have been heavily dependent on the tourism sector for survival. K3 and K4 feel proud because they live in the main area of world-class tourism, GMNP, and many of the benefits they receive stem from the existence of tourism. It changes their quality of life via income-generation opportunities. We are proud because we have GMNP. There are job opportunities, and there are also tourists coming from Miri and Marudi by boat and staying here (Long Terawan Village) for a few days before going to GMNP. (K4) Since the outbreak of the pandemic caused economic paralysis, however, most respondents have become dissatisfied with their monthly income.
Before the pandemic, the majority of the local community earned a minimum of MYR2,000 (USD442.48) per month. As a result of the pandemic, however, incomes have decreased to less than MYR1,000 (USD221.24) per month. The situation has forced them to engage in gardening, fishing, animal husbandry, and other small-scale agricultural activities for their survival, despite their lack of agricultural expertise. Our income has been bad since the pandemic. I don't dare to open a homestay either. (K5) I experienced hardship because there were no tourists, no work, no money, and I just stayed at home while working on small gardens. (K3, K6, K8) It is not only the self-employed who are affected by this pandemic. In fact, according to K1, staff numbers were also reduced across various positions, including those working in the café, housekeeping, and some of the park guides. The COVID-19 pandemic has taught everybody a lesson about not taking everything for granted. People began to realize how much money they could make from tourism. Things shift. You can see most people are farming now, which is different from their previous lives, where most of them were engaged in tourism. For example, people in Batu Bungan Village who work as boat operators, handicraft sellers, park guides, or freelance park guides have lost their income. But now they are turning back to where they were years before the park opened. They go to the farm. Even here, there is no market for people to buy food unless people sell to each other what they grow. The shop nearby is also empty nowadays. Not much there. They had twice-daily flights from Miri, as previously stated, so some would bring frozen foods, chicken, and meat. For the past few months, we've just had one flight, which is not convenient. People just depend on fish and what they grow.
(K1) In addition, a similar situation occurred at the Marriott Hotel, where an estimated 80% of staff (100 individuals) were laid off due to financial constraints. The hotel still had to spend approximately MYR150,000 (USD33,673.83) on maintenance, particularly the electric generator, despite the low number of guests. Next, the local community faces the challenge of modifying their way of life to accommodate the pandemic. As if there is no other alternative, if you want to market handicrafts to outside areas, you need a fast postal service, and it is quite difficult to do so because the location of the post office is quite far. Even if you run an online business, you still need the internet, which is a broadband network, which is limited here. (K3) Now, young people are more stressed because many have lost their jobs. They only rely on freelance work. There used to be tourists who could use boats to earn an income, but now there are none. It's really difficult because of the pandemic that has been going on for more than a year now. We do a little business to cover our needs at home, such as selling drinks, cigarettes, and fried chicken. We thought of going to Miri many times, but having to follow procedures, including applying for a permit to cross the area, is quite harassing. (K6) Changing jobs is very difficult for me, who is used to tourism. Even if you want to start a business, you still need a lot of capital.
(K7) Our income is uncertain. We also look for umbut (the soft root of edible palm trees) and sell it to the villagers. (K9) According to K12, one freelance park guide who was laid off has been able to generate income through his own YouTube channel. The content of his uploaded videos revolves around his life during the pandemic, and K12 acknowledged that he has good video-editing skills and that his presentations about the geology of GMNP are very easy for audiences to understand. In conclusion, livelihood diversification in GMNP is low: almost everyone depends on the tourism sector. It is therefore anticipated that the locals' adaptability to the changes caused by this pandemic will be low. According to Makwindi and Ndlovu (2022), diversification is the most important strategy for surviving economic pressures caused by disasters such as this pandemic, which affects the income of the majority of people. All parties must understand the risk and vulnerability of relying on a single major source of income, as the COVID-19 pandemic has had a significant impact on livelihoods (Smith et al., 2021). Social dimension. Despite their differences in political ideology, the majority of respondents believe that neighborhood life and social relations are still positive. The concept of togetherness is still practiced among them through periodic cooperative activities to collect garbage around the village and river, through a program organized by GMNP and Marriott Hotel management, according to K3, K4, K5, and K8. Community involvement is a crucial indicator of the success of protected-area management (Wibowo et al., 2018). Visiting neighbors and sharing food is also still a common practice among them. We also shared a wild boar (hunted animal). If they don't share it, they will buy it from us at a low price.
(K4, K5, K9) Due to tourism, there are also locals, particularly from Long Terawan Village, who marry tourists or foreign workers. Such marriages, which represent their acceptance of foreign culture, are viewed favorably and reflect their adaptability in accepting positive changes such as education and employment. The local community is also very satisfied with the education their children have received since the 1990s, even though the secondary school, Long Panai Secondary School, is quite far from the GMNP area, taking almost 2 h by boat. The reputation of the school here is very good. Some former students got excellent results in the Malaysia Education Certificate (SPM), i.e., 9A, and managed to continue their studies at university. (K10) Cultural heritage. The respondents' level of satisfaction with culture is good (Table 2). In terms of traditional dances and musical instruments, residents still practice them intact, presenting them to dignitaries and tourists as tourism products. The making of handicrafts and other forest products by the Penan community shows that these skills are still in good condition. Although cultural heritage is not viewed as a major issue by the study's respondents, who are mostly young and middle-aged, some narratives in the study explain that there are still issues related to intangible cultural heritage, especially in Batu Bungan Village and Long Iman Village, namely the Penan's Oroo' language. Oroo' is a language commonly used by previous generations of Penan for communication in the forest, such as navigation using signs made from tree twigs and leaves. (K1) The arrangement of the twigs and leaves describes the combination of words in a sentence. The message can be translated if the individual understands each word being conveyed.
Oroo' is a form of sign writing used by earlier generations to leave messages for each other in the jungle (Jensen, 1970). Sticks, prepared with cuts, twigs, and leaves in certain positions and places, guide people and inform them about directions, time, dangers, resources, etc. (Rothstein, 2020). According to K3, the Oroo' language is only part of the customs and culture and can still be understood by many people, especially the older generation. However, the language is poorly understood by the average member of the younger generation. This is because they have received a formal education in school and can read and write well; thus, mastering the Oroo' language has become less important for them. Furthermore, the language is rarely used and is considered an ancient language. Similarly, a study by Plimmer et al. (2015) found that the language has disappeared and is no longer used by the younger generation of Penans in Long Lamai since they settled. According to UNESCO (2011), the loss of indigenous languages is also detrimental to biodiversity, as traditional knowledge of nature and the universe, spiritual beliefs, and cultural values expressed in indigenous languages provide time-tested mechanisms for the sustainable use of natural resources and the management of ecosystems. These elements have become more critical with the emergence of urgent new challenges posed by climate change. Figure 3 shows K9 explaining the basic Oroo' language, which indicates that the Penan community uses twigs and leaves to form specific signals that carry certain messages. This clear explanation shows that the middle-aged generation is proficient in the Oroo' language. It further supports the statement of K3 that the middle-aged generation can still understand the language very well compared to the younger generation (born in the 1990s or later).
This is in line with Zaman and Jengan (2014), whose respondents over the age of 60 had mastered the language because they experienced a nomadic life when they were young. K9 stated that he had learned the Oroo' language through experiences with his father while hunting in the forest and looking for sago since childhood. He was accustomed to wading through the forest for these purposes and to the work of cutting and carrying trees and removing obstacles along the way. We did not have formal classes to learn this Oroo' language. We became good at using that language indirectly. (K9) However, according to K9, some of the Penan younger generation who live semi-nomadically with their families in the jungle are still able to use the language, compared to the community in Batu Bungan Village. My 11-year-old son is already good at using a sumpit (blowgun). He is good at using poison for hunting purposes because he always follows me into the forest, and I taught him. (K9) In conclusion, this informal learning has crucial value in the Penan community. The Oroo' are themselves expressions of social interaction. Given that people usually travel together, the reading and interpretation of the signs is also a social practice, although often the meaning of Oroo' is rather explicit (Rothstein, 2020). Most young people do not understand the Oroo' language, especially those born in the 1990s or later. (K9) The elders realized that Oroo' will be lost if they do not find ways to preserve it and pass it on to the younger generations (Plimmer et al., 2015). K9 also noted that their previous lives were more difficult, and the younger generation lacks interest in going into the forest. Some were afraid to go into the woods, unlike the middle-aged, who were experienced and trained for it.
Furthermore, as K3 mentioned earlier, written words have become the primary medium of communication in society nowadays, making the Oroo' language less important. Times and lifestyles slowly changed, and people adapted to the new situation; formal education and socioeconomic factors indirectly drove this transformation. Due to the pandemic, the tourism sector stagnated for a while, and local people lost their source of income due to the lack of tourists. Furthermore, K3 also stated that a handful of villagers chose to stay in the forest to avoid COVID-19 infection. The Penan community in Long Iman Village hid and ran away from home when medical officers came to their village to perform COVID-19 polymerase chain reaction swab tests (Nais, 2021). The increasing number of COVID-19 infections every day is likely to cause many individuals to move to the forest. On the positive side, they can spend time together in the woods. This further strengthens family bonds, and indirectly, they engage in the traditional lifestyle practiced by earlier generations. The younger generation thus has the opportunity to learn and experience the Oroo' language from the middle-aged. Services and facilities. The locals recognize that the level of services and facilities here is average and still requires significant attention from stakeholders, particularly concerning water and electricity supply. The electricity facilities here are bad. On average, we still use our own generators. Some villagers use solar. I also have five water tank units, and one of them was given by the government. (K5) We do have problems with our water supply. I live on the side of the road (far from the river) and rely only on rain catchment water. If it's a dry season, we don't have water. There is no clean water supply (treated water) in this Mulu area.
Only the national park and the Marriott Hotel have treated water supplies because they have filters and chlorine. Those of us who live in single-family homes do not have that water. Those who live by the river also use engines to pump water into their houses. (K6) They are aware that the available river water is not necessarily safe to drink due to various factors. Clean water and electricity are basic needs for human well-being, and everyone has the right to access clean water, in line with Sustainable Development Goal 6 (Purba and Budiono, 2019). Authority intervention as a mediator. The well-being of the community also depends on the authority's role in determining their quality of life. In the context of tourism, the authorities have the power to mediate its intensification, form policy, and determine the parties that should benefit through the implementation of that policy (Zinda, 2017). Holistic management by the authorities will encourage the local population to support the implementation of the policies carried out in GMNP.

Fig. 3: The basic Oroo' language, as explained by a key informant. It indicates that the Penan community uses twigs and leaves to form specific signals that carry certain messages. Two twigs of the same length indicate the presence of a team/one/friend/family (non-enemies) while in the forest. Two twigs placed on top of folded leaves indicate a hunger signal to non-enemies (friends, family, acquaintances). The combination of a leaf pricked by a small twig, then inserted with a tree branch, means that the individual has obtained a hunted animal (wild boar or other food) in this straight direction.

This is in line with the findings of Park and Inanç (2017) that the positive behavior of locals towards conservation in protected areas depends on a management strategy that involves local communities more effectively.
Despite this, the locals' dissatisfaction with efforts the government is trying to implement needs to be taken into account and resolved through consultation. Several issues involve the authorities, especially the road proposal and community conflicts in the area, although the government helps a lot from an economic point of view. Road proposal. The Sarawak government has proposed new roads linking Miri-Marudi, Marudi-Mulu (Kuala Melinau), and Long Panai-Long Lama under the High Impact Infrastructure project. The project will increase accessibility from Mulu to other areas. Although the project would make it easier for locals to reach facilities such as hospitals, schools, and grocery stores in Miri at any time, many do not support it because of the challenges that would arise from the development. It's not that we don't want it, and it's not that we really want it. Indeed, since long ago, there has been a trail or unpaved road that connects this Melinau area. It starts from Long Iman-Long Lama-Long Bedian-Miri. I don't want a paved road directly from here (Melinau) to Miri. The land we have now is not big enough anymore. It has already been invaded, and such development will only make us more trapped. (K3) In my view, it is enough as it is now. There is no need for a road. (K9) This proposed road will not only put pressure on the locals of Batu Bungan and Long Iman Village, but it will also affect the economy of those from Long Terawan Village, because water transportation from Miri to GMNP will be paralyzed by the existence of the road. All this time, the residents of Long Terawan have earned a decent income from rural businesses welcomed by international tourists. The village is a transit point where tourists experience longhouses and mingle with the residents.
Thus, residents can also sell local products such as rice wine (an alcoholic drink), dried fish, and other agricultural products. The income of boat operators will also decrease as a result of the construction of the road. Therefore, considering these impacts, it is better to maintain air transportation as the main transport to GMNP; the locals are indeed given a subsidy for the cost of their air ticket from Mulu to Miri Airport, which is considered reasonable. In addition, the construction of the road is likely to result in even worse degradation of biodiversity. An increase in population will also occur due to the existence of roads that connect Mulu to other areas. Pressure on the use of natural resources in Mulu will also occur, leading to encroachment, hunting, pollution, and so on. (K2) What is the guarantee that the parties involved in the construction of the road will not take timber when carrying out the project? Make sure they do the construction work without damaging the existing environment, which may threaten us. (K10) Putting the protection of the park and biodiversity first, the road connection to Miri and the rest of the area is a terrible idea. It will mostly destroy the biodiversity because it will make it easy for outsiders (wildlife traders) to come in and hire locals to collect in the forest and transport it directly. The wildlife trade is a big problem for us. Even though the local community hunts, it is primarily for subsistence; they are not involved in the wildlife trade. Most local communities are also against the proposed road because they worry about competition from outsiders coming in. Now, they monopolize transportation around here. People favor the road to Mulu, but not directly to here (consider stopping at some stage), because a direct road changes how people come. One idea to solve this problem is to have the road end at the Tutoh River.
Then people would travel the rest of the journey to Mulu by boat. This has frequently been discussed among the Berawan community. Many residents support the road, but they are concerned about encroachment on the park. They are aware of the effects of tourism on their livelihood because they are involved in it. (K1) This demonstrates that locals are highly concerned about the potential threats to biodiversity in GMNP and the surrounding area posed by road construction. Despite opposition to the proposed road construction, it is the authorities who determine the most effective means of controlling the situation to ensure the well-being of the community. The authorities should note that locals can become a direct threat to a protected area when they refuse to cooperate or participate in conservation activities (Holmes, 2013). The government's assistance in empowering the local economy. Creating a balance between park protection and community development is a global challenge for national park policymakers and management authorities (Peng et al., 2022). In terms of the economy, the government also provides a lot of assistance to tourism operators, including homestay operators, with an emphasis on human capital. According to Croes et al. (2018), investment in human capital is the key to sustainable tourism development through good hospitality to tourists. In GMNP, some locals are given courses and facilities by local authorities to operate tourism-related businesses. Last year, some officers from the Sarawak Economic Development Corporation came here to inspect the state of our homestay. They check whether there is a lack of facilities such as washing machines, beds, and refrigerators. Previously, they gave us MYR5,000, but now they buy us appliances corresponding to that value. (K6) There are also homestay-related courses given by the government in Miri. (K5) Land ownership conflicts.
The issue of land ownership among the Penan and Berawan people around Mulu has arisen recently, causing the government to intervene to find a proper solution. It can be said that there is an encroaching party, like the Berawan people who came to our place and claimed that this land is their right. Their border is far away from here. While they have a village, they don't live there. That is the issue that I want the Chief Minister (CM) to solve, because they are still very much in power (dominant) here. In terms of work, it's not a problem. Recently, they requested that the Chief Minister build a longhouse. Once, the CM approved the construction of our longhouse here, and he was attacked by a few of them. This is a bad experience. (K3) As far as we are concerned, we are not worried. But due to the presence of immigrants from other villages and races, we are worried. I do not agree with the development. Right now, our land is an issue. We imagine our land will be taken away. Where else would we live? Now the Chief Minister is going to the Marriott Hotel to discuss this land issue. A few Berawans claim that this place where we live is their land, even though we have always lived here. (K9) Although ethnic conflicts have existed for centuries, they remain united in their opposition to forest destruction in their region, which shows that their place attachment is strong. According to Zhang et al. (2020) and Mohamad Syahrul Nizam et al. (2021), place attachment refers to an emotional bond, a memory produced through experience in an area, and it plays a role in fostering an ecocentric attitude. UNESCO WHS status helps here. If there were no such status, they, namely Radiant Lagoon Sdn Bhd, would already be producing palm oil in the area near here. We (Penan and Berawan) drove them away. People who have a stake in Radiant Lagoon Sdn Bhd do some business here.
They take opportunities (tricks) when running some businesses here, including attempts to exploit the area, which are seen as affecting the environment rather than directly harming the local population. (K10) Studies by Brankov et al. (2022), Eben (2006), Mannetti et al. (2019), Nastran (2015), and Ngonidzashe et al. (2017) describe the same situation, namely conflict between locals and protected-area management; although the issues and groups involved differ, they share a dependence on the natural attributes of national parks. --- Conclusion, caveats, and policy implications The well-being of the community, through their satisfaction with the environmental, economic, and social aspects of GMNP, has been unraveled, and the issues that have arisen have been explained in detail by the key informants in this study. In general, the local community's perception of the environmental aspects of GMNP is good. However, this does not reflect the actual situation: river water turbidity, wildlife threats, degradation of wetlands, and solid waste issues still occur, especially in areas at least 50 kilometers outside of GMNP. Next, the constraints of the COVID-19 pandemic show a decrease in their income. For survival, they turned to gardening, fishing, hunting, and small-scale agricultural activities for a continuous food supply. In terms of social aspects, the services and facilities need to be improved, especially the supply of clean water and electricity. Some of the local communities are very satisfied with the cultural aspects of their lives, which include traditional elements such as handicrafts, dance, and musical instruments. Based on the narratives, however, the Oroo' sign language of the Penan, which is part of their identity, is increasingly threatened by changing lifestyles, and the language needs documentation and promotion as a tourism product.
Authority intervention also greatly affects the well-being of local communities through policy implementation in environmental, economic, and social terms. Local support for the government can determine the holistic management of biodiversity conservation and sustainable tourism in natural areas like GMNP. Therefore, a bottom-up approach, i.e., the involvement of all parties in the decision-making process starting from the ground level, should be emphasized by stakeholders, and it is important for understanding the well-being of a complex community. Communication involving all stakeholders can determine the form of community empowerment and lead them to support a balanced form of management. Holistic management of a protected area, particularly a UNESCO World Heritage Site, will produce good periodic reporting, which reflects that the site is well managed by stakeholders and that there is no significant threat that would place the site on the List of World Heritage in Danger. The elements of biodiversity conservation and well-being are among the emphases of the periodic report, which is carried out every six years by appointed assessors. In particular, the process involves an assessment of the detrimental elements of a property and whether its condition is stable, and the UNESCO World Heritage Committee provides recommendations based on the report. In addition to the global branding of WHS status, any protected area also has the potential to be proposed for the IUCN Green List of Protected and Conserved Areas if it demonstrates good governance, sound design and planning, effective management, and successful conservation outcomes. The situation in GMNP shows that the local community has met certain well-being criteria based on definitions from the various literature that explain the relationship among the environmental, social, and economic domains.
The social-psychological aspect is key to achieving well-being. Finally, place attachment is interpreted as a catalyst for a resilient community in GMNP. This study has highlighted the challenges posed by the pandemic to a tourism community residing in a protected UNESCO World Heritage Site. The empowerment of local communities constitutes comprehensive conservation of biodiversity. The conceptual framework of this study can be applied to other studies on related topics, such as the well-being of local communities in protected areas in both developed and developing countries. This mixed-methods study provides greater insight than a quantitative study alone into community well-being; environmental, economic, and social issues; the role of authority intervention; and the COVID-19 pandemic as a mediator. This is because local communities living in protected areas generally have similar environments, i.e., they rely on biodiversity characteristics that may provide slightly different benefits and challenges depending on their perspectives. Future research would benefit from broadening the scope of the study towards psychosocial aspects (awareness, knowledge, attitude, and experience of biodiversity conservation) and sociodemographic factors that may affect respondents' own level of well-being. It is also recommended that future research explore community well-being in economic terms, particularly the value of biodiversity conservation in GMNP, which could potentially provide new context for the existing data relating to human well-being. Given how challenging it is to conduct sampling during this pandemic, the mixed-methods approach can reduce the likelihood of bias in research findings.
We acknowledge that this study has data limitations; however, it represents a pragmatic application of quantitative and qualitative methods that prioritizes meaningful knowledge. Balancing these two approaches is crucial to understanding the community's perspective through the eyes of key informants and the generalizations of laypeople. In addition, this study includes local communities residing in an isolated area who, on average, have a low level of literacy and limited internet access, which limits the feasibility of online surveys. Even though this mixed method is deemed appropriate for use during this pandemic, researchers must be aware that data reliability may be compromised. It is therefore recommended that researchers engage in data triangulation to ensure reliability, through reflection consistent with the study's objective(s), epistemological stance, and design, and by considering the extent to which threats and biases in the study can be adequately managed. --- Data availability All data generated or analyzed during this study are included in this published article. --- Author contributions All authors contributed to the study conception and design. Material preparation, data collection, and analysis were performed by MSNI. The first draft of the manuscript was written by MSNI, and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript. --- Competing interests The authors declare no competing interests. --- Ethical approval The study was approved by the Universiti Putra Malaysia Ethics Committee for Research involving Human Subjects (JKEUPM) (reference number: JKEUPM-2020-403). All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.
Before being approved by JKEUPM, this study obtained a research permit from the Sarawak Forestry Corporation (reference number: SFC.PLandRS/2020-006 and Park Permit No. WL23/2020) for a one-year period from 2020 to 2021, and it complies with the First Schedule of the National Parks and Nature Reserves Regulations 1999 (Regulation 5). To meet ethical principles, we collected primary data using a physical questionnaire from local communities living near Gunung Mulu National Park on a voluntary basis, applying criteria such as being over 18 years old and having lived in the study area for more than five years. Meanwhile, 12 key informants agreed to be involved in the recorded interview sessions. --- Informed consent Before participants agreed to participate in this study, they were given an information sheet that explained the nature of the research in terms of methodology, benefits of the study, possible side effects and complications, and confidentiality. Informed consent for participation and publication was obtained from all participants in this study. All participants gave their informed written consent. --- Additional information Correspondence and requests for materials should be addressed to Suziana Hassan. Reprints and permission information is available at http://www.nature.com/reprints Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
The local communities living around national parks or areas such as World Heritage Sites (WHS) are crucial stakeholders in such settings. Their well-being needs to be understood so that holistic management of the national park can sustain its WHS status through the support and empowerment of the community. Numerous studies have been conducted on the biodiversity and geology of Gunung Mulu National Park (GMNP), but the community psychology aspect that is the foundation of conservation efforts has not been addressed. Therefore, this study aims to examine the community well-being dimensions in terms of environment, economics, and social aspects, as well as authority intervention, based on the perspectives of the local community and professionals, with an emphasis on current issues in GMNP. Quantitative and qualitative approaches were used in this study through a questionnaire administered to 99 members of local communities and individual interviews conducted in GMNP and four nearby villages. Data were analyzed descriptively under four themes: environment, economics, social aspects, and authority intervention. The findings showed that locals were satisfied with their residing area in terms of environmental conditions. However, this does not reflect the actual situation: river water cloudiness, wildlife threats, degradation of wetlands, and solid waste issues are still occurring. Under the constraints of the COVID-19 pandemic, they were very dissatisfied with their monthly income, which was very low compared to before. In social terms, services and facilities, especially treated water and electricity, need improvement. The study also noted that authority intervention, especially related to the road proposal, financial and skills assistance, and community conflicts, could influence locals' support for the planning and policies implemented in national parks or WHS areas.
This study suggests that relevant stakeholders should emphasize bottom-up approaches by considering aspects of community well-being that stem from multiple dimensions in order to achieve holistic national park management.
INTRODUCTION Peru has enacted the most restrictive measures in its national public health history to control the current COVID-19 outbreak. [1][2][3][4] The first patient with COVID-19 in Peru was detected in Lima on March 5, 2020. 5 Five days later, classes in schools were suspended nationwide, and on March 12, all classes at universities were suspended nationally. 6 On March 15, a state of emergency, border closure, and lockdown was declared with an order of social isolation for 15 days, 7 which has been extended multiple times and has currently been announced to last until June 30, 2020. Those measures were very similar to the ones imposed in China, which have affected people's lives, jobs, health, and well-being, 8 increasing stress and anxiety 9,10 during the COVID-19 outbreak. Because of the social isolation in Peru, the COVID-19 crisis is expected to affect people's mental health, especially that of healthcare workers. After the first case was detected on March 5, 2020, the number of confirmed cases increased rapidly, overwhelming healthcare workers. Furthermore, limited access to personal protective equipment (PPE), burnout due to long work hours, not seeing their families for many days, the high risk of becoming infected, and the psychological harm of uncertainty have been reported to affect the physical and psychological status of healthcare workers in China 11 and Iran. 11,12 However, to the best of our knowledge, this has not been properly studied in Peru, as it has already been in China, [13][14][15] Singapore, 16 Iran, 12,17 Italy, 18,19 France, 20 the United Kingdom, 21 and Spain. 22,23 The COVID-19 crisis is causing an increase in burnout and anxiety, 24,25 which has resulted in an unprecedented psychological impact [26][27][28] and is affecting people's life satisfaction, one of the most critical indicators of mental health.
29,30 We aim to use early evidence in Peru to help mental health services screen people with psychological issues during the COVID-19 outbreak from the novel perspective of typhoon eye theory. 13,[31][32][33] It has been observed that people who reside far from the epicenter of an outbreak usually overestimate the likelihood of becoming infected, 34 which has been reported for COVID-19 13 and earthquakes. 32 This study identifies the vulnerable regions where individuals are more likely to suffer from well-being issues and helps guide medical professionals' attention toward the more mentally vulnerable groups based on the distance from the epicenter of the COVID-19 outbreak in Peru: Lima (the capital of Peru). The "ripple effect" refers to the phenomenon whereby mental health issues are more problematic for people closer to the epicenter, which was the case for mental health services during the SARS and Ebola outbreaks. [35][36][37] However, because people have perceived a tremendous amount of social media exposure and information during the COVID-19 outbreak, 38 our research group has already reported that in China individuals' well-being deteriorates over distance from the epicenter, 13 as depicted by the psychological typhoon eye theory. 31,32 In this study, we test whether the typhoon eye theory holds, and for whom it holds, in the COVID-19 outbreak in Peru. We performed our analysis on healthcare workers as they are a group vulnerable to COVID-19. We selected age as a variable because it has been reported that younger populations are usually more adaptive to a natural disaster or to the outbreak of a virus. 39,40 However, younger populations also tend to access information on COVID-19 more frequently via digital sources such as social media, 41 which causes them to be exposed to more negative content.
42 Family size is an indicator of the social support that one could receive during a crisis like the current one 43 because it serves as an important resource to buffer stress and anxiety. 44 We surveyed healthcare workers in 15 of the 24 provinces in Peru; these locations vary in their travel distance from the epicenter of Lima (0-1,292 km). We used anxiety, distress, and turnover intention scales to assess the mental health of healthcare workers in Peru after 1 month of lockdown and social distancing measures. Turnover intention is defined as the likelihood of an employee leaving their current job. 45 Overall, drawing from the psychological typhoon eye theory, 31,32 this study provides a snapshot of adult healthcare workers' mental health during the ongoing COVID-19 pandemic to enable more targeted mental health support in Peru. --- METHODS Study design. We conducted a cross-sectional survey from April 10, 2020 to May 2, 2020, after 1 month of lockdown and social distancing measures in Peru because of the COVID-19 outbreak. At the beginning of the survey (April 10), the number of confirmed cases in Peru was 5,897 with 169 deaths, 46 whereas at the end of the survey (May 2), the number of cases had increased to 42,534 and the number of deaths to 1,200. 47 We surveyed healthcare workers from 15 of the 24 provinces. 47 Participants. The online survey reached 400 healthcare workers in healthcare organizations such as hospitals, clinics, first emergency responders, medical wards, nursing homes, dental clinics, pharmacies, and other healthcare institutions. We received responses from 303 of them (response rate of 75%), who worked in 111 healthcare facilities, including 55 healthcare facilities in Lima, 33 healthcare facilities in Loreto, and 23 healthcare facilities from the other 22 cities (at least one from each city). Their distance to the epicenter (Lima) ranged from 0 km to 1,292 km. All survey participants provided their informed consent before enrollment.
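The epicenter-distance covariate described above can be computed from city coordinates. A minimal sketch using the standard haversine (great-circle) formula follows; note the study used travel distance, which can exceed straight-line distance, and the coordinates below are illustrative assumptions, not taken from the study:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometers."""
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Lima (the epicenter) and Iquitos, Loreto, as an example city (approximate coordinates)
LIMA = (-12.046, -77.043)
IQUITOS = (-3.749, -73.254)
d = haversine_km(*LIMA, *IQUITOS)  # on the order of 1,000 km
```

A participant's covariate would then be the distance from their work city to `LIMA`, with Lima-based workers at 0 km.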
The survey was approved by the Tsinghua University Ethics Committee (#20200322). The participants remained anonymous and had the option to finish the survey at any time, and their information was kept confidential. The participants were not involved in any of the planning, execution, and reporting stages of the study. Outcomes and covariates. Healthcare workers' anxiety, distress, and turnover intention were assessed using the seven-item Generalized Anxiety Disorder (GAD-7) scale, 48 the Kessler Psychological Distress scale (K6), 49 and the two-item turnover intention scale, 50 respectively. The total score of anxiety was considered as normal (0-4), mild (5-9), moderate (10-14), or severe (15-21), whereas psychological distress was considered as low (< 5), moderate (5-12), or serious (≥ 13). The cutoff value to consider the presence of anxiety was 10 51 and 13 for psychological distress. 52 The healthcare workers reported their age, gender, family status, education, occupation, type of healthcare organization (public or private), job level (entry, junior, intermediate, senior, and chief), exercise hours per day in the past week, and chronic health issues (yes or no). Education included the categories of high school, technical, bachelors, medical specialty, masters, and doctorate. Participants reported whether they had any chronic disease because comorbidities increase the chance of complications in a person with COVID-19 53 and because people with ongoing medical issues could be more anxious. Using their work locations, we calculated the distance of each participant's city to the epicenter of Lima. Statistical analysis. Data analysis was performed in STATA version 16.0 (StataCorp LLC, College Station, TX) with a significance level set at P < 0.05, and all tests were two-tailed. We used linear regression to predict anxiety, distress, and turnover intention using unweighted data.
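The GAD-7 and K6 banding rules above are simple threshold functions. A sketch of the classification logic follows (the function names are ours, not from the study; the thresholds are the ones quoted above):

```python
def classify_gad7(score):
    """Band a GAD-7 total (0-21): 0-4 normal, 5-9 mild, 10-14 moderate, 15-21 severe."""
    if score <= 4:
        return "normal"
    if score <= 9:
        return "mild"
    if score <= 14:
        return "moderate"
    return "severe"

def classify_k6(score):
    """Band a K6 total: < 5 low, 5-12 moderate, >= 13 serious."""
    if score < 5:
        return "low"
    if score <= 12:
        return "moderate"
    return "serious"

def presence_flags(gad7, k6):
    """Binary presence cutoffs used in the study: anxiety at GAD-7 >= 10, distress at K6 >= 13."""
    return gad7 >= 10, k6 >= 13
```

With these bands, the sample means reported below (GAD-7 of 15.4, K6 of 19.2) fall into "severe" and "serious", respectively.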
The average distance of the participants to the epicenter of Lima was 424 km, with a SD of 490 km. The participants scored an average of 15.4 (SD of 4.6) on the GAD-7 anxiety scale, and this average surpassed the cutoff for severe anxiety of 15. 51 On the K6 distress scale, the participants scored an average of 19.2 (SD of 4.5), higher than the cutoff for mental distress disorder of 13. 52 Predictors of anxiety, distress, and turnover intention. The regression results in Table 2 examined the predictors of anxiety, mental distress, and turnover intention of healthcare workers in Peru during the COVID-19 outbreak. Education level had a negative association with anxiety (β = -0.746, CI: -1.441 to -0.050, P = 0.036). The effects of gender, age, work level, type of contract, and type of institution on anxiety were not significant. In the case of mental distress, the predictors (gender, age, education level, work level, type of contract, and type of institution) were not significant. There was a negative association between age and turnover (β = -0.033, CI: -0.057 to -0.008, P = 0.009), with higher turnover among healthcare workers aged 18-24 years (β = 2.817, CI: 2.452 to 3.181) in comparison with those aged 35-44 years (β = 2.162, CI: 1.946 to 2.378). Healthcare workers in the private sector had a higher turnover intention than those in the public sector (β = -0.420, CI: -0.810 to -0.031, P = 0.035). The effects of gender, education level, and work level on turnover intention were not significant. --- RESULTS The distance to the epicenter as a predictor. First, margin analysis revealed that the relationship between the distance to the epicenter and anxiety was significantly negative, a ripple effect, taking all the other covariates as equal (β = -0.002, CI: -0.004 to -0.0002, P = 0.031). This relationship, however, might vary when other variables change.
The regression results in Table 2 indicated a significant interaction effect between the distance to the epicenter and the type of institution on anxiety (β = -0.005, CI: -0.010 to -0.000, P = 0.049). The interaction effect between the distance to the epicenter and job contract (full time versus part time) on anxiety was not significant (β = 0.002, CI: -0.001 to 0.005, P = 0.223). Yet, margin analysis showed that the relationship between the distance to the epicenter and anxiety was significant, with a ripple effect only among full-time healthcare workers (β = -0.003, CI: -0.005 to -0.001, P = 0.011) and not among temporary healthcare workers (β = -0.001, CI: -0.004 to 0.002, P = 0.583). Second, margin analysis revealed that the relationship between the distance to the epicenter and distress was significantly negative, a ripple effect, taking all the other covariates as equal (β = -0.003, CI: -0.005 to -0.004, P = 0.023). This relationship also varied when other variables changed. The regression results in Table 2 indicated a significant interaction effect between the distance to the epicenter and the type of institution on distress (β = -0.008, CI: -0.014 to -0.002, P = 0.010). Margin analysis showed that the relationship between the distance to the epicenter and distress was significant, with a typhoon eye effect only among healthcare workers in public institutions (β = 0.008, CI: 0.001 to 0.015, P = 0.021) and not among healthcare workers in private institutions (β = 0.000, CI: -0.002 to 0.002, P = 0.883). The interaction effect between the distance to the epicenter and job contract (full time versus part time) on distress was also significant (β = 0.004, CI: 0.000 to 0.008, P = 0.029). Moreover, margin analysis showed that the relationship between the distance to the epicenter and distress was significant, with a ripple effect only among full-time healthcare workers (β = -0.004, CI: -0.007 to -0.002, P = 0.002) and not among temporary healthcare workers (β = 0.001, CI: -0.004 to 0.004, P = 0.955). Third, margin analysis revealed that the relationship between the distance to the epicenter and turnover was not significant, taking all the other covariates as equal (β = 0.002, CI: 0.0003 to 0.001, P = 0.494). The regression results in Table 2 indicated a significant interaction effect between the distance to the epicenter and the type of institution on turnover (β = 0.002, CI: 0.001 to 0.004, P = 0.001). Margin analysis showed that the relationship between the distance to the epicenter and turnover was significantly negative among healthcare workers in both public institutions (β = -0.003, CI: -0.005 to -0.002, P = 0.000) and private institutions (β = -0.001, CI: -0.001 to -0.000, P = 0.000). [Table 1, reporting participant demographics by family size, education level, and occupation (the latter including medical technologists, dentists, psychologists, biologists, administrators, ambulance drivers, physician auditors, students, and providers of general services; NA = not applicable), is not reproduced here.] --- DISCUSSION Since 2012, Peru has implemented the community mental health model recommended by the WHO 54 as an approach to providing care in the community through specialized facilities called community mental health centers.
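The interaction-plus-margins logic used in these analyses can be sketched with synthetic data. The coefficients below are invented for illustration, not the study's estimates: fit an OLS model with a distance × institution-type interaction, then read off the distance slope separately for each institution type, as a margin analysis does.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
dist = rng.uniform(0, 1292, n)        # km to the epicenter (Lima)
public = rng.integers(0, 2, n)        # 1 = public institution, 0 = private

# Hypothetical true model: ripple effect in private institutions,
# with a positive interaction that flips the slope for public ones.
y = 15.0 - 0.003 * dist + 0.5 * public + 0.005 * dist * public + rng.normal(0, 1.0, n)

# OLS fit of: y ~ 1 + dist + public + dist:public
X = np.column_stack([np.ones(n), dist, public, dist * public])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# "Margins": the distance slope within each institution type
slope_private = beta[1]              # should recover about -0.003 (ripple)
slope_public = beta[1] + beta[3]     # should recover about +0.002 (typhoon eye)
```

The sign of `slope_public` versus `slope_private` is what distinguishes a typhoon eye effect from a ripple effect within each subgroup, mirroring how the study reports opposite-signed margins for public and private institutions.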
However, as reported in 2019, full-time employees of these centers reported that there are critical barriers that still need to be circumvented. 55 Some of these include a lack of consistent training, resources, structure, and policies that effectively support the use and importance of these centers in the evaluation and adequate treatment of mental health conditions. 55 The situation is further worsened because general practitioners in Peru consider themselves not very competent in diagnosing and treating mental disorders. This was reported in a self-perception survey that evaluated the competence of Peruvian general practitioners in diagnosing and treating major depression, anxiety disorder, alcohol dependence, and schizophrenia. 56 Of the 434 respondents, 70.5% believed they were competent in diagnosing depression, 73.3% for anxiety, 67.6% for alcohol dependence, and 62.0% for schizophrenia; when the four mental disorders were combined, only 41.6% of participants self-perceived competence in providing an adequate diagnosis. 56 These results highlighted the need to improve medical education so as to develop the skills necessary to confront mental health disorders. 56 A very limited number of studies have assessed mental health in the general public and healthcare workers in Peru, and, to the best of our knowledge, this study is the first to report on the mental health of healthcare workers in Peru during the COVID-19 outbreak. Our study shows that, overall, people who were geographically further from the epicenter in Peru during the outbreak experienced less anxiety and mental distress, corroborating the ripple effect and disconfirming the typhoon eye theory. 11,13,[31][32][33] However, this relationship can change depending on the type of institution (public versus private) and contract (full time versus part time).
The relationship between the distance to the epicenter and distress for healthcare workers in public institutions was positive, showing a typhoon eye effect (β = 0.008, CI: 0.001 to 0.015, P = 0.021). Distance to the epicenter is a crucial factor for psychiatrists to consider when screening mentally vulnerable groups, 13,30,57 but research needs to first establish whether the distance to the epicenter carries a ripple effect or a typhoon eye effect. Furthermore, our results indicate that healthcare workers with a lower education level were more anxious, and younger healthcare workers and those in the private sector were more susceptible to turnover. An important factor to consider is that at the beginning of the survey (April 10), the number of confirmed cases in Peru was 5,897 with 169 deaths, 46 whereas at the end of the survey (May 2), the number of cases had increased to 42,534 and the number of deaths to 1,200. 47 This significant increase in confirmed cases and the accompanying coverage in the national and international media could also have increased anxiety and distress in healthcare workers. In addition, the reported precariousness of the health system and the saturation of every hospital in Peru with COVID-19 patients 3 could also have increased turnover intention among healthcare workers. Similar to Iran, 17 China, 13 and the United States, 58 we did not identify a universal risk factor that could predict specific mental disorders in Peruvian healthcare workers. This is expected, as each country has its own medical system, clinical capacity, access to PPE, labor conditions, lockdown policies, and culture. Limitations. The context of this study has a clear epicenter of COVID-19, Lima, in Peru. However, this is not always the case, as observed in South Korea.
59 Our data were collected in Peru, a geographically large country, and it remains unclear whether the typhoon eye effect or the ripple effect will generalize to other countries, most of which are smaller. The epicenter of Lima is in the midwest of Peru, whereas the epicenter of Wuhan is in the middle of China, and the epicenter of New York State in the United States is in the northeast. Thus, we suspect that either the typhoon eye effect or the ripple effect might play out differently in terms of pace and patterns. --- CONCLUSION Our results show that Peruvian healthcare workers' anxiety and mental distress decreased as the distance from the epicenter increased, corroborating the ripple effect and disconfirming the typhoon eye theory. A lower education level was associated with higher anxiety levels, whereas age and gender did not affect anxiety and distress levels. Turnover intention was not associated with the distance to the epicenter or with gender, but it was higher among younger healthcare workers and those in the private sector. Our results can help guide mental health service providers toward vulnerable groups of healthcare workers who are closer to Lima, the COVID-19 epicenter in Peru. We urge more research to assess the mental health of healthcare workers and the general public in Peru, a country whose mental health has not been given the importance it deserves.
We conducted a cross-sectional survey to assess the anxiety, distress, and turnover intention (likelihood of leaving one's current job) of healthcare workers in Peru during the COVID-19 pandemic. Our results show that 21.7% of healthcare workers in Peru experienced severe anxiety, whereas 26.1% of them experienced severe mental distress. A higher level of education was associated with a lower level of anxiety. Younger workers had a higher level of turnover intention than their older colleagues did. Healthcare workers in the private sector had a higher turnover intention than those in the public sector. Most importantly, people who were geographically far from Lima, the epicenter in Peru, during the outbreak experienced less anxiety and mental distress, corroborating the ripple effect and disconfirming the typhoon eye theory. However, the direction of these relationships can change depending on the type of institution (public versus private) and the type of employee contract (full time versus part time). Our research provides insights for clinical professionals in identifying groups vulnerable to mental disorders in Peru. This is the first study to assess anxiety, mental distress, and turnover intention in healthcare workers in Peru during the COVID-19 pandemic.
INTRODUCTION Child labour has been a serious problem and a challenge for many countries around the world, especially developing countries. Globally, abolishing all forms of child labour has been a long-term goal for many countries, and in developing countries it remains a serious issue today. Child labour refers to children who miss their childhood and are not able to have the basic amenities that a child should have. Recently, the International Labour Organization (ILO, 2013) estimated that there are around 215 million children between the ages of five and fourteen who work worldwide. They are often mistreated and work for prolonged hours in very bad conditions. This can affect their health physically, mentally and emotionally. These children do not have basic rights such as access to school or health care (Osment, 2014). According to the ILO (2013), the largest numbers of child labourers are working in hazardous work, and the total number of child workers is increasing, even though it is forbidden by law. These children are vulnerable to diseases, and they struggle with long-term physical and psychological pain. The main cause that induces children to work is poverty. These children work for their survival and that of their families (Mapaure, 2009). However, not all the work that children do is harmful or brutal. Some work may provide successful learning opportunities, such as babysitting or newspaper delivery jobs, but not if the work exposes them to psychological stress, as in human trafficking, prostitution and pornographic activities (ILO, 2010). International organizations have made great efforts to eliminate child labour across the world. Many countries have adopted legislation to prohibit child labour; nonetheless, child labour is widespread throughout the world. It is not an easy task for low-income countries to ban child labour.
In most African countries, a large proportion of households still live below the poverty line of less than $5 US dollars per day, due to factors such as a weak economic base, galloping inflation, a high rate of unemployment, the inadequate incomes of parents, and ineffective machinery to enforce child welfare policies (Togunde & Carter, 2008). As a result, children in rural areas find it difficult to survive because of the economic status of their parents. These adverse socio-economic situations, compounded by the challenging political and cultural crises in many countries, as evidenced by civil wars, genocide, famine, drought, the HIV/AIDS epidemic, and structural adjustment programs, make life in rural areas unbearable for children (Alam, Mondal & Rahman, 2008; Crosson, 2008). Consequently, African children, who are always at the receiving end, are often placed at the margins of the public arena by joining both wage and non-wage markets; some of these activities are hazardous to their health and education (Crosson, 2008; Ekpenyong & Sibiri, 2011). The effects of child labour are visible at different levels of society. At the level of the child, mental health is negatively affected. Indeed, children engaged in hazardous industries have been observed to suffer from verbal abuse from their employers, consistent fear of job termination, low self-esteem, and a loss of imagination and future direction in life (Ayoade, 2010; Okafor, 2010; Ugochukwu, Okeke, Onubogu & Edokwe, 2012). Physical stress beyond the child's age and maturation also leads to low concentration at school and a breakdown of the child's health.
Physical consequences that range from malnourishment, diseases, musculoskeletal disorders from heavy labour, physical and sexual abuse, injuries, and exposure to toxic agents, to prolonged working in cramped and hazardous conditions have been well documented (Ugochukwu et al., 2012; Onyemelukwe, 2014). These physical effects of the industrial sector have been detrimental to the well-being of the child worker. At the household level, children's economic production has become an important aspect of economic survival strategies. Many children spend several hours working outside the home in order to bring additional income to the household. A significant proportion are involved in petty trading and services (as street hawkers, domestic servants, and in apprenticeship positions) or even work as street beggars in urban areas (Appel, 2009; Amuda, 2010). Therefore, their involvement in these activities poses serious threats to the continued survival of society and distorts government policy with respect to the education of the youth through high dropout rates. It also distorts the acquisition of vocational skills and relevant education, thereby damaging the economic sector (Amuda, 2010). Socially, children in industries have been found to experience negative consequences for their educational development and performance. The prevalence of illiteracy, low school attendance, and low enrollment has been attributed to children's economic participation (Okafor & Amayo, 2006; Onyemelukwe, 2014). Okoye and Tanyi (2009) carried out a study on the perception of child labour in South Eastern Nigeria, a study of the Onitsha metropolis, investigating the perceptions of Nigerians on child labour. A sample of 360 respondents was used for the study. The findings indicate that the majority (70.6%) of the respondents perceive chores like baby-sitting, fetching water, splitting firewood, sweeping, farming and cooking as child labour.
Also, the sex of the respondents was found to be the most important predictor of the perception of chores that constitute child labour. From the study, it can be said that the forms of child labour reported by the authors are simply child work and not obviously child labour, as has been highlighted earlier. --- Forms of child labour Similarly, Asamu (2015) examined child labour and its social implications for children in selected cities (Ibadan, Enugu, and Kaduna) in Nigeria. A sample of 826 child labourers was selected as respondents for the study. Findings from the study revealed that child labour activities fall into different categories, namely: bus conducting, car washing, hawking, begging, weaving, tailoring, hairdressing and auto-repairing, among others. The study also showed that most children who engage in child labour are largely from the lower economic stratum of society, and that the incidence of child labour was significantly related to the child's health status (r = 0.21 > t0.05), school attendance (r = -0.62 > t0.05), academic performance (r = 0.39 > t0.05), delinquent behaviours (r = 0.57 > t0.05), contact with parents (r = 0.24 > t0.05) and the child's exploitation by employers (r = 0.31 > t0.05). The study highlighted that children who engage in economic activities are found to be different with respect to their social development. From the study, it can be deduced that the forms of child labour are far-reaching and wide-ranging. --- Factors that influence child labour practices Ekpenyong and Sibiri (2011) conducted a study on street trading and child labour in Yenagoa. The study showed that chronic urban poverty can compel parents to send children of school age to work to boost family income. Thus, for many hours each day, children of poor parents are engaged in economic ventures including hawking, plaiting of hair, and being apprenticed to various trades.
The study explained the basics of child labour, its causes, and its effects on its victims and society as a whole. A sample of 300 respondents was used for the study. The findings of the study established that street trading and child labour are a great menace to both the individual and society. From the study, it can be deduced that poverty is a leading cause of child labour for the majority of child labourers. Another study, carried out by Mfrekemfon and Ebirien (2015) on child labour as a public health problem in Nigeria, showed that child labour deprives children of their childhood, interferes with their ability to attend regular school, and is mentally, physically, socially or morally dangerous and harmful. The study also showed that this has become a concern not only at the international level but at the national level as well, because of the unhealthy circumstances and multiple health implications children are subjected to. The study indicated that child labour deprives children not only of their education but also of their physical and mental development. The study further highlighted some of the causes of child labour, including poverty, unemployment, low income, corruption, demand for cheap labour and many others. From the study, it can be said that the causes of child labour are mostly economic, numerous and averse to the development of children. Taking a different approach, Shailong, Onuk, and Beshi (2011) conducted a study on the socio-economic factors affecting child hawkers in Lafia Local Government Area, Nasarawa State, Nigeria. This study examined why children are sent out to hawk on highways and in other places. The sample comprised 100 children under 15 years of age. Findings from the study revealed that large family size was a major reason why parents send their children out to hawk.
The study also showed that the income from child hawking supports their mothers, mostly in polygamous or single-parent homes. It can be inferred from the study that child labour is more prevalent in polygamous and single-parent homes. However, the study did not consider that some single parents and mothers in polygamous homes would rather do all the labour themselves to promote the welfare of their children, without putting them through the risks associated with hawking. In the same vein, Elegbeleye and Olasupo (2011) conducted a study on parental socio-economic status as a correlate of child labour in Ile-Ife, Nigeria. The study investigated the relationship between parental socio-economic status and child labour. The outcome of the study showed that a significant relationship exists between the two (parents of low income status showed significantly higher tendencies toward child labour practices than their high income counterparts). From the study, it can be said that the financial status of parents can influence their decision to engage their children in child labour practices. --- Effects of Child Labour on Child Development Early childhood is generally recognised as the most crucial life phase in terms of developmental malleability, for this is when maturation processes are accelerated and genotypic milestones emerge (Shonkoff, Richter, van der Gaag & Bhutta, 2012). The negative impact of deprivations in these critical periods can be very large. Importantly, the time sensitivities of early childhood are also socially structured by influences that include the institutions of education, as early cultural learning selects and reinforces specific cognitive and psychosocial competencies.
Unequal participation in early-childhood and primary education further determines long-term trajectories, in the sense that institutions, teachers, and assessment systems all tend to promote some children over others, depending on their perceptions of children's characteristics and potential (Streuli, Vennam & Woodhead, 2011). Significantly, the child-environment influence operates in both directions, in that children do not simply absorb and react to external forces, but are instrumental in shaping their own environment by selecting and even creating those settings that are compatible with their individual characteristics (Woodhead, Ames, Vennam, Abebe, & Streuli, 2009). In Nigeria, child labour practices, manifesting in different forms, seem to be on the increase (Osiruemu, 2007). This is perhaps due to the economic crisis that started in the 1980s. The Nigerian economic crisis has made life worse for children of the poor whose parents have either lost their jobs or suffered a drastic decline in income (Onuoha, 2008). Problems such as malnutrition, high infant mortality and overcrowding have been exacerbated as many Nigerian families were pushed below the poverty level, even as a small class of people profited from the crisis. The economic crisis has also led to the abandonment of traditional and family responsibilities, with serious effects on the underprivileged and on children. The outcome of this is clearly visible in the sharp increase in children engaging in child labour in both the formal and informal sectors (Onimajesin, 2011). Child labour, according to UNICEF (2008), involves all work that is harmful to a child's health. This includes any work that violates children's fundamental human rights and any work that is dangerous or threatening. It also includes work that exhausts children's strength and damages their bodies.
Any work that prevents children from going to school to gain basic skills and knowledge for their future development is also included in the definition of child labour. Child labour is thus a challenge that every modern society has to contend with. It has devastating effects on children, their families, the communities in which they live, and national development generally. The consequences of child labour on child development are glaringly obvious and, at times, irreversible. They include health hazards, physical abuse, fatigue, poor school performance, academic wastage, sexual abuse, accidents and youth violence, among others. Physical and health consequences of child labour include stunting, breathing problems owing to exposure to toxic substances, accident proneness, and contamination of cuts and wounds. Cognitive problems include not attending school, class retention, high dropout rates and achievement deficits, while social and psychological consequences include the isolation of working children from their families and peer groups, the stigmatisation of work by peers, the lowering of children's self-esteem and perceptions of relative deprivation (Rabiu, 2010; Onimajesin, 2011; Asamu, 2015). Child labour exposes the child to many hazards such as sexual defilement, sexual assault, neglect and the threat of punishment for speaking out, as exemplified above. The consequences of these acts usually include unwanted pregnancy, sexually transmitted diseases, psychological problems and a gradual withdrawal from healthy relationships with the opposite gender. Nseabasi and Abiodun (2010) noted that street hawking exposes both the male and the female child to dangers posed by fraudsters and murderers because of their vulnerability at odd hawking hours. They are usually in personal jeopardy under harsh and hazardous conditions, such as becoming easy targets for occult predators (ritual killers).
Child labour deprives children not only of their education but also of their physical and mental development, robbing them of their childhood. Children may not be aware of the short- and long-term risks involved in their work. Owing to their long hours of work, child labourers are normally denied basic education, normal social interaction, personal development and emotional support from their families, and they may face physical danger and even death (Onimajesin, 2011). Bassey, Baghebo and Otu (2012) argued that child labour has physical consequences for the child, ranging from malnourishment and disease to musculoskeletal disorders from heavy labour and physical and sexual abuse. Mfrekemfon and Ebirien (2015) opined that child labour can result in injuries to children and expose them to toxic agents in the process. Growth deficiency is common among child labourers: they tend to be shorter and lighter, and these deficits persist into adult life. Long-term health problems such as respiratory disease, asbestosis and various cancers are common in countries where children are forced to work with dangerous chemicals. HIV/AIDS and other sexually transmitted diseases are common among children forced into prostitution. Exhaustion and malnutrition result from children performing heavy manual labour for long hours under unbearable conditions without enough money to feed themselves. --- Labour Rights for the Nigerian Child and the Way Forward The law governing the rights of a child in labour issues in Nigeria is the Labour Act. Section 59(b) of the Act provides that no young person shall be employed in any work which is injurious to his health or which is dangerous or immoral. The Act further provides that no child under the age of 16 years shall be employed in circumstances in which it is not reasonably possible for him to return each day to the place of residence of his parents or guardians. The section forbids a child under 16 years from working underground or on machines.
It further forbids young persons from working for longer than four hours in one day. It places additional restrictions on the employment of a child or young person on a ship or any vessel, and it prohibits absolutely the night employment of young persons. From the above, one can see that the Labour Act does not prohibit child labour; rather, it only places restrictions on where, when and how a child's labour may be employed (Dada, 2013). There should be public enlightenment at the grassroots or community level on the present situation of child labour and its implications for society. Family planning should be made compulsory so as to prevent parents from having more children than they can care for. Poverty alleviation programmes should be improved upon to raise the standard of living of low-income families; and, upon meeting specific conditions, beneficiary (poor) households should undertake certain activities or investments, such as enrolling their children in school and allowing them to progress academically by staying in school without undue distraction. The government, local NGOs and civil society should join hands and work together to ensure that children are protected from hazardous jobs that can impair their health and educational development (Asamu, 2015). --- The Role of the Social Worker in Child Labour The National Association of Social Workers Code of Ethics (1999) states that the primary duty of the social worker is to enhance human wellbeing and help meet basic human needs, with particular attention to the needs and empowerment of people who are vulnerable, oppressed, and living in poverty. Social workers provide interventions and enhance human coping capabilities and competence to solve personal and social problems, so as to create a more caring, conducive, equitable and just society.
Social workers provide clients with interventions such as assessment, counselling, task-centred work, advice, education/information giving and advocacy, among others (Ngwu, 2014). According to Ngwu (2014), the social worker plays the following roles in relation to child labour/welfare services: Broker role: the social worker makes linkages between the community and clients, to highlight the dangers of child labour, seek alternative ways of supporting parents' income and mitigate the practice of child labour. Advocacy role: the social worker advocates on behalf of the clients (the children) to obtain the desired services (quality, uninterrupted and non-distracted education and childhood) from society and their families. Enabler role: the social worker helps the child and the family to find potentialities and resources within themselves to solve their problem (of poverty). --- Objectives of the study The general objective of the study was to examine the socio-economic factors that influence child labour. The specific objectives were as follows: 1. To find out if Christians are more likely to involve their children in labour practices than those from other religions. 2. To ascertain if younger adults are more likely to involve their children in child labour practices than older adults. 3. To investigate if working-class persons are more likely to involve children in child labour practices than non-working-class persons. --- Research Design The study utilised a cross-sectional survey research design, which entails the observation of a sample or cross-section of a population at one point in time (Martyn, 2010). This design facilitated the researcher's effort to identify the socio-economic factors that influence child labour in Nkanu East Local Government Area of Enugu State. --- Research Hypotheses The following hypotheses guided the study: 1.
Christians are more likely to involve their children in labour practices than those who are from other religions. 2. Younger adults are more likely to involve their children in child labour practices than older adults. 3. Working-class persons are more likely to involve children in child labour practices than non-working-class persons. --- Test of Hypotheses --- Hypothesis one Substantive hypothesis: Christians are more likely to involve their children in child labour practices than those from other religions. Null hypothesis: Christians are not more likely to involve their children in child labour practices than those from other religions. To test hypothesis one, religious affiliation was cross-tabulated with involvement in child labour practices. The result revealed that among those involved in child labour practices, 93.2% were Christians while 6.8% were of other religions; among those not involved, 93.1% were Christians while 6.9% were of other religions. The chi-square test shows that the computed χ² is 7.171 while the critical χ² value is 3.841 at df = 1. The test shows a statistically significant relationship (p = .007) between religious affiliation and involvement in child labour practices. Accordingly, the substantive hypothesis, which argued that Christians are more likely to involve their children in child labour practices than those from other religions, is hereby accepted, while the null hypothesis, which states that Christians are not more likely to involve their children in child labour practices than those from other religions, is hereby rejected. Hence, religion influences child labour.
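The decision rule used in these hypothesis tests (compute χ² from a 2×2 cross-tabulation, compare against the critical value 3.841 at df = 1) can be sketched in a few lines of Python. The counts below are hypothetical, chosen only for illustration; the paper reports percentages and the resulting statistics, not raw cell counts.

```python
# Sketch of the chi-square test of independence applied to a 2x2
# cross-tabulation. The observed counts are hypothetical.

def chi_square_2x2(table):
    """Return (chi2, df) for a contingency table of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under the null hypothesis of independence.
            expected = row_totals[i] * col_totals[j] / grand_total
            chi2 += (observed - expected) ** 2 / expected
    df = (len(table) - 1) * (len(table[0]) - 1)
    return chi2, df

# Hypothetical cross-tabulation: rows = involved / not involved in
# child labour, columns = Christians / other religions.
observed = [[120, 30],
            [90, 60]]

chi2, df = chi_square_2x2(observed)
CRITICAL_05 = 3.841  # critical chi-square value at alpha = .05, df = 1
print(df, round(chi2, 3), chi2 > CRITICAL_05)  # prints: 1 14.286 True
```

Because the computed χ² exceeds the critical value, the null hypothesis of independence would be rejected at the .05 level, mirroring the reasoning applied to each hypothesis above.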
--- Hypothesis two Substantive hypothesis: Younger adults are more likely to involve their children in child labour practices than older adults. Null hypothesis: Younger adults are not more likely to involve their children in child labour practices than older adults. To test hypothesis two, age of respondents was cross-tabulated with involvement in child labour practices. The result revealed that 50.5% of younger adults and 49.5% of older adults are involved in child labour practices; on the other hand, 60.1% of younger adults and 39.9% of older adults are not involved. The chi-square test shows that the computed χ² is 5.322 while the critical χ² value is 3.841 at df = 1. The test shows a statistically significant relationship (p = .021) between age of respondents and child labour practices. As a result, the substantive hypothesis, which states that younger adults are more likely to involve their children in child labour practices than older adults, is hereby upheld, while the null hypothesis, which states that younger adults are not more likely to involve their children in child labour practices than older adults, is hereby rejected. Thus, age of respondents influences child labour practices. --- Hypothesis three Substantive hypothesis: Working-class persons are more likely to involve children in child labour practices than non-working-class persons. Null hypothesis: Working-class persons are not more likely to involve children in child labour practices than non-working-class persons. --- DISCUSSION OF FINDINGS Child labour remains a multifaceted social problem in many developing countries (Okoye & Tanyi, 2009). Child labour is one of the deadliest forms of work a child can be engaged in, tending to affect not just the child's present state but the future at large. Findings from this study in Table 4.8 revealed that the largest proportion (37.2%) of the respondents indicated house chores as a form of work children engage in.
This means that children help their parents/guardians with house chores in the study area. This finding is in agreement with that of the ILO (2010), according to which child work is a responsibility and a form of training for the child through assisting with household jobs that do not hinder education or health. The study also revealed that 53.5% of the respondents agreed that house chores are a form of training. This finding agrees with that of Mfrekemfon and Ebirien (2015), according to whom child work is considered part of children's training to become responsible adults. However, the findings of Okoye and Tanyi (2009) present a differing view: in their study, 70.6% of the respondents perceived house chores such as baby-sitting, fetching water, splitting firewood, sweeping, farming and cooking as a form of child labour and not child training. Participants in the IDIs nonetheless re-emphasised that children are to engage in house chores and other minor jobs for their development and proper training. As one IDI participant explained of children engaged in '... morning sales, they know they will be caned and punished for late coming so they don't bother going to school.' Findings from Table 4.13 revealed that the majority (62.8%) of respondents indicated that poverty is the reason parents engage their children in child labour. This finding is in agreement with those of Mapaure (2009) and Amuda (2010). According to Mapaure (2009), the major reason children work is poverty: they work for their survival and the survival of their families. Amuda (2010) also disclosed that a significant proportion of children are involved in petty trading and services in order to bring additional income into the house. Findings of the study in Table 4.17 also revealed that the majority (87.2%) of the respondents mentioned that child labour practices have negative effects on the child. This finding is in agreement with the findings of Amuda (2010), Nseabasi and Abiodun (2010) and Bassey, Baghebo and Otu (2012).
According to Amuda (2010), the involvement of children in child labour activities poses serious threats to the continued survival of society and distorts government policy with respect to the education of the child, which is evident in the increased school drop-out rate. Nseabasi and Abiodun (2010) also noted that street hawking, which is a form of child labour, exposes the male and female child to dangers posed by fraudsters and murderers because of their vulnerability at odd hawking hours. Furthermore, Bassey, Baghebo and Otu (2012) argued that child labour has physical consequences for the child, ranging from malnourishment and disease to musculoskeletal disorders from heavy labour and physical and sexual abuse. In the literature, several scholars found that various socio-economic factors influence child labour practices. Osiruemu (2007) and Onuoha (2008) revealed that the economic crisis evident in the country has made parents suffer job losses, which in turn has affected their income. This agrees with hypothesis three of this study as displayed in Table 4.40. --- CONCLUSION The current study sought to examine the socio-economic factors that influence child labour in Nigeria. Since child labour poses threats to the growth and development of the child and of society at large, there is a need to stop all forms of child labour by cutting down the various socio-economic factors that influence child labour practices. From this study, it can be concluded that, since the majority of respondents indicated that child labour practices have negative effects on the child, there is a need to eradicate all forms of child labour by championing sensitisation and enlightenment programmes. People who still choose to engage children in child labour after these sensitisation and enlightenment programmes should be severely punished by the law to deter others from engaging in such acts.
Also, jobs and skills acquisition programmes should be established for those in the non-working-class sector to enable people to become financially independent and cater for themselves and their entire families. --- Recommendations Based on the findings of this study, the researcher endorses the following recommendations to aid government, institutions, NUC, UNICEF, community leaders, social workers and the public as a whole in addressing the socio-economic factors that influence child labour in Nigeria. The recommendations are as follows: • The majority of the respondents revealed that poverty was the major reason people engage in child labour. Therefore, the federal, state and local governments should ensure that the basic needs of citizens are met. Free education programmes should also be introduced all over the country to enable the poor to go to school. In terms of alleviating poverty, small-scale business support should be provided, and loans should be given to people willing to start a business in order to boost their resources. • Since the majority of the respondents who indicated involvement in child labour were Christians, there is a need for church leaders, including the Christian Association of Nigeria and the Pentecostal Fellowship of Nigeria, to re-orient and educate their members on the dangers involved in child labour. Also, during church services, issues such as child rape/molestation, drug abuse, child trafficking/kidnapping and other effects of child labour should be discussed to discourage people's involvement in child labour. • There is a need for policy makers, especially in the education sector, to review and modify the curriculum to include activities that make school attractive for the child. Doing this encourages the child to always be in school and on time, thereby restraining school dropout.
• Scholarships and other forms of monetary support should be given by philanthropists and well-meaning community members to support people in training their children. Government and international bodies should also partner with schools to award and recognise children putting effort into their academic work. • Family size was considered a factor influencing child labour. Therefore, family planning should be mandatory, and government and other health-related bodies should make family planning kits accessible and affordable for everyone. Social workers, on their own part, should create awareness by organising enlightenment programmes for those in the non-working-class sector, Christians, those with low education and the general public on the factors influencing child labour as well as its dangers for the child, the family and society at large. Also, in curbing
This study examined the socio-economic factors influencing child labour in Nigeria. The instruments for data collection were the questionnaire and in-depth interview schedules. The sample size used for the study was 621 (615 for the quantitative distribution and 6 for the IDIs). The quantitative data gathered were analysed with the Statistical Package for the Social Sciences using percentages (%), and chi-square (χ²) statistics were used to test the three hypotheses, while the qualitative data gathered were analysed in themes as a complement to the quantitative data. The study found that religious affiliation (χ² = 7.171, p = .007) and age of respondents (χ² = 5.322, p = .021) significantly influenced child labour practices.
relationship. (Singleton 1996: 457) 1.6 For some feminist critics, however, this postmodern emphasis on dissolving dualisms and its attention to actions rather than actors means that gender has been overlooked in science studies, dismissed as 'a "social" ghost that block[s] real explanation of science in action' (Haraway 1992: 332 n14). In science studies, as Whelan (2001) points out, there appears to be particular resistance to the issues of 'women in' and 'women into' science and technology that are assumed to be the sole concern of feminist science studies. [1] As we have seen above, a concern with women's under-representation in and differential experience of science is central to policy and organisational studies (Whelan 2001: 545). The standoff between these positions is summed up by Singleton as one in which feminists accuse contemporary social theories of science of being 'apolitical', while science studies scholars attribute 'epistemic conservatism' or a 'failure of [theoretical] nerve' to feminist arguments (Singleton 1996: 446; citing Grint and Woolgar 1995). --- Epistemic communities? 1.7 We are nevertheless convinced that there is much to be gained from combining insights from these different approaches. In order to realise this potential we need to identify an analytical bridge based on shared understandings across these fields of study. The concept of epistemic community feels intuitively useful and relevant here, although it is rarely used in either social studies of science or women and science approaches. However, like both these approaches, the concept of epistemic community privileges the collective or relational aspects of knowledge production in particular contexts, rather than focusing on independent, atomistic knowers. This chimes with social studies of science, which emphasise contingent, dynamic and unbounded networks composed of non-human as well as human actors. Studies of women and science also work with a concept of communities.
Here, however, the focus is on institutionally embedded, stable, and face-to-face collectives. Although, oddly, research in these two areas does not map coherently onto the explicit discussions of epistemic communities currently circulating in academic literatures (see Amin and Roberts 2008 for a useful overview; also Haas's 1992 notion of epistemic communities in the policy sense), there is much to be gained from thinking of science in these terms. In particular, the 'communities of practice' approach (Wenger 1998; Lave and Wenger 1991) draws attention to the face-to-face, interactional and performative dimensions of epistemic processes. Here, knowledge production and learning are analysed in terms of their tacit, embodied and situated qualities, rather than dynamics of abstraction, codification and networking: key elements of a rounded approach to gender, practice and organisation in science. --- Methodology 1.8 Our tactic here is to resist the temptation to attempt a theoretical resolution of these disparate concepts of epistemic community and contested visions of how epistemic actors work together in knowledge production. Instead, we start in the laboratory and work through a series of observations of the concrete practices and relationships of bioscientists. We use them to build an iterative interpretation which foregrounds the need to look at both organisation and practice in science. We develop our analysis through explorations of data generated with researchers in two biology laboratories in one department of a British university. Participant observation studies were conducted over ten months as part of the KNOWING project. [2] During the intensive phase of the observation, lasting approximately five months, we visited both laboratories for the equivalent of one or two days per week, although the visits were often clustered in three- to four-day stints in order to get a sense of the labs' weekly routines.
The observation study focused on the routine activities of researchers. However, we also attended laboratory meetings, department seminars and administrative meetings, as well as one specialist conference. Fieldnotes were written up and analysed alongside interview and focus group transcripts using thematic coding in NVivo software. [3] We also undertook more contextual forms of analysis by selecting longer examples from the fieldnotes in order to avoid fragmenting our accounts of the researchers' daily practices. These longer extracts from our empirical material feature strongly in the following section of our analysis. --- Unpacking epistemic communities: the laboratory --- 2.1 We begin by positioning the laboratory as a local site at which a specific epistemic culture comprising people, materials, machines, techniques, skills and ideas is instantiated and performed (Knorr Cetina 1999; Pickering 1995; Latour and Woolgar 1986). Science studies conceptualises these ensembles of networked actors as epistemic cultures which are distinctive to particular fields of inquiry. We found numerous ways in which labs functioned as distinctive epistemic micro-cultures and communities of practice in our study. We discuss them below in relation to three themes: shared routines and rhythms in the laboratory; the passing on of tacit knowledges and embodied skills; and collective identities. In each case we show how these are rooted not in abstract definitions of disciplines or fields, but in material practices, and we explore them by looking closely at extracts from participant observation data gathered in two laboratories: a long-established embryology group headed by a male professor; and a newer and very successful plant laboratory with a female leader. [4] We go on to discuss how gender is largely subsumed or hidden from view in this practice-focused analysis.
In order to bring gender to light, we adopt a more organisational lens to look at laboratory life in the following section, entitled 'Organising knowledge work'. --- Shared routines --- 2.2 As the following fieldnotes illustrate, the laboratory groups were bound together by distinctive daily, weekly and longer-term temporal routines and rhythms which were intimately related to the experimental materials and methods that they used:... A departmental porter picks up a batch of pig or cow ovaries from the abattoir once or twice a week. One of the post-docs explains: 'you can't vary the routine' - you can't have the ovaries to order as and when you want them for a particular set of experiments. If you stop for a couple of weeks, say, the routine will be disrupted with no guarantee of further deliveries. [Lab observation fieldnote]. Once or twice a week researchers gather in the culture lab ante room to clean pig ovaries to prepare them for gathering eggs. Then three or four of the group [...] will sit at benches for an hour or so, 'aspirating' eggs from small cyst-like lesions on the ovaries, sucking the eggs up through a needle into a syringe containing a small amount of solution to preserve them. --- [Lab observation fieldnote]. Technician: 'On Thursday the pig people will do the oviducts, and then culture the cells...' Researcher: 'I'm a mouse person, so I have to deal with the mouse timetable, which is different.' [Lab meeting fieldnote]. --- 2.3 In the embryology lab, the arrival of animal tissues demanded immediate attention, resulting in intense cooperative activity. Routines for dealing with the ovaries were shaped by external considerations - the group had to coordinate their action in relation to the abattoir and the porter.
Once in the lab, preparing materials took place in small spaces for concentrated periods of time as researchers and technicians worked together to aspirate eggs from fragile and short-lived ovarian tissue before moving on to fertilise them and culture embryos. For 'mouse people' in the same lab, things were a little different. Mice could be obtained from departmental supplies more or less to order for particular experiments, so researchers were able to organise their own timetable for 'harvesting' eggs. For all the researchers, however, the laboratory's methodological commitment to modelling in vivo embryo development in in vitro conditions set up the experimental timetables dictated by the growth of embryos from two cells to a many-celled mass. --- 2.4 In the larger plant lab, researchers grew their own plant material for experimentation. The routines and rhythms of this lab involved comparatively little intensive collective work; rather, researchers were occupied separately in similar and parallel tasks (Kerr and Lorenz-Meyer 2009). Growing and watching - and then cutting and experimenting on - the plants is the thread that joins them and that runs through everyone's day. Anticipating when the plants will be at the right stage for whatever they're needed for, walking back through what is needed to have those plant materials at that time, is key to how everyone schedules their medium-term routines (it takes about 6 weeks to get to the setting seeds stage). [Lab observation fieldnote]. --- 2.5 The basic temporal building block in this lab was the six weeks it took for the seeds of their chosen plant model to produce a plant that could itself set seed. The lab's work involved analysing the relationship between plant genotypes and phenotypes and so they studied several generations of mutant genes and plant crosses, adding on additional six-week cycles to their timetable.
Most researchers worked alone on their own preparation and individually conducted their own experiments. However, all were involved in the same routine experimental tasks: collecting, drying and labelling seeds; planting, growing and observing plant morphology; pollinating, collecting and storing the next generation of seeds. They passed in corridors and overlapped at the bench or in glass houses as each individually attended to the demands of growing material. --- 2.6 In both labs, then, temporal structures bound researchers together in local epistemic communities. In the embryology lab these were clearly marked by periods of collaborative activity and direct interaction; in the plant lab they were present in the form of the shared routines that researchers individually followed. --- Embodied skills and communities of practice --- 2.7 In the plant laboratory, methods for sharing techniques and passing on embodied skills and tacit knowledges appeared to be particularly important in (re)producing the group as an epistemic community, since researchers rarely physically worked together. Here the idea of communities of practice (Wenger 1998) offers useful insights. This approach emphasises hierarchical aspects of the organisation of knowledge communities, in particular the apprentice/master relationship and the ways in which strong community ties are built around 'replicating and preserving existing knowledge' by 'passing on particular ways of doing things, resulting in cultures of work and professional identities that can clash with standards elsewhere' (Amin and Roberts 2008: 359). In both laboratories we observed the key roles played by experienced post-doctoral researchers and technicians as mentors and teachers of post-graduates and even under-graduate students at the bench.
This was most clearly foregrounded in relation to mundane preparatory experimental tasks and in relation to the use of experimental equipment, in particular the standardised kits that are increasingly common in post-genomic bioscience. Many of these processes have written scripts or protocols, and these are excellent examples of what Latour calls 'immutable mobiles' (1987) - textual forms of representing knowledge which remain consistent as they move through scientific networks, and which are therefore crucial to the universalisation and standardisation of knowledges and techniques. However, protocols and instructions also have to be enacted and performed in specific circumstances. Contingencies, failures and unexpected hitches are an ordinary part of translating technical documents into practical experiments, and here the sharing of tacit and informal skills and knowledges comes into play. --- 2.8 In both laboratories a good deal of activity was concentrated on the physical manipulation, movement and care of very small (albeit visible to the naked eye) objects. In the plant lab techniques included sowing individual seeds onto growth media and later collecting, separating and preparing for storage the tiny seeds from the plant. Critical tasks in the embryology lab included the aspiration of ovarian tissue, drawing eggs and fluid from mammalian ovaries using a syringe, and the use of mouth pipettes to transfer eggs and embryos around various dishes and plates: Three of the researchers are sitting around two tables in more or less the same position - feet crossed under their swivel chairs, head fixed over the microscope, right hand operating the tube end of the mouth pipette, rubber end between lips. [...Later] one of the post-grads... explained to me that [...] she was frustrated with herself because the process of preparing the fertilised eggs was slow for her because she was uncomfortable with the mouth pipette technique [...]
one of the postdoctoral researchers said it was 'like riding a bike' - impossible to explain in the abstract, impossible to describe, necessary to find your own way bodily [...] Two of the lab group are so practised in this technique that they can and do talk while pipetting, out of the side of the mouthpiece... [Lab observation fieldnote]. --- 2.9 In both cases these techniques demanded both the passing on of acquired and embodied skills and that researchers learn to improvise a style suited to their particular strengths. For the most part, researchers would work closely with someone at the next academic level for a period to take on the relevant skills, although experienced technicians also played a significant role. We see in action here the hierarchies of skill that Lave and Wenger (1991) refer to, as well as the constitution of communities through passing on particular practices. In the embryology lab, the teaching of practical techniques was accompanied by narratives in which post-doctoral researchers talked admiringly of their own predecessors and mentors in terms of their effortless skills, although they usually referred to their own technical competence self-deprecatingly. They also pointed out that mouth pipetting was now a relatively unusual technique in bioscience labs; this was frequently commented on by a new member of the group with a different disciplinary background, drawing attention to the distinctiveness of specific laboratory work cultures. --- Experimental materials and collective identities 2.10 Crucial elements of experimental practice - materials, machines and methods - also helped to constitute aspects of self-conscious group identity in the laboratories.
In the embryology lab people frequently referred to themselves in terms of the animal model that they worked on: there were 'mouse people', 'cow people' and 'pig people', although there was a degree of overlap in practice which reflected the laboratory's historical commitment to exploring early embryo growth in a range of animal models. In the plant lab the researchers self-identified as 'plant people' informally; in more formal settings they were 'plant biologists'. This label enabled and produced wider connections within the biology department, for example taking part in a plant biology seminar series with invited speakers and working with plant specialists in the department's technology facility and glasshouses. The distinctive characteristics of methodologies and resistances of materials were also cited when members of both groups explained that they worked in a 'slow field'. In a context that demands high outputs in terms of publications and new knowledge claims, researchers felt that they were at a disadvantage working with materials that demanded a lot of care and nurture. In both cases the groups worked with whole living organisms (embryos and plants respectively) and with the replication of in vivo conditions, as necessitated by epistemologies that focused on the growth and development of these entities. As we have seen above, these materials take their own time, which may not be easy to reconcile with external schedules of career and research outputs. Researchers often compared their materials unfavourably with more conventional biological models (drosophila, yeasts, mice and frogs) where, as one post-doctoral researcher put it, you 'dependably get results' (observation fieldnote). The idea of the 'slow field' also had the connotation of being unfashionable and adrift from cutting-edge science. This was particularly marked in the embryology lab, but the plant biologists also on occasion remarked that 'plants aren't sexy' (lab observation fieldnote).
--- 2.11 These forms of collective identification are particularly important when we consider another key aspect of laboratory groups - that the members are ever-changing. This was one of the first things pointed out to us by members of the plant lab when we began the observation study. The lab leader remains a fixed point, and is recognised as such by the department, the institution, and the wider discipline, especially in the convention of referring to the group and its work using the name of the leader. Most of the other researchers, however, are passing through for a fixed period of time before moving on to another post. The question, then, is how laboratories achieve a coherent and continuous identity over time. One - partial - answer is that groups work within an intellectual and analytical framework set out by the laboratory leader. In the plant lab this was partly driven by new developments within the wider field of post-genomic biology. In the embryology lab, there was a sense of the accretion and inheritance of successful experimental practices building over many years into a coherent methodological approach. However, in both cases the shared analytical vision was realised in experimental practice through interactions and exchanges between researchers in shared spaces at the bench. In the two laboratories that we studied, the everyday community of practice emphatically did not include the lab leaders, both of whom no longer worked at the bench and were not present in the day-to-day experimental work of the group. [5] Thus the routines, skills, narratives and identifications that were generated through practice and passed on through generations of researchers can be seen as crucial both to developing new knowledge and innovation and to the preservation of skills and incremental development of techniques that are entangled in knowledge work (Amin and Roberts 2008). --- Epistemic communities as gendered spaces?
2.12 This data indicates how we can read laboratories as face-to-face communities constituted in and by their practice. The production of new knowledge in both cases depended on embodied skills, day-to-day routines performed in shared spaces, and the reproduction of epistemic cultures that brought together specific combinations of materials, machines, and know-how. This approach tends to generate a vision of the lab as a mutually supportive community that develops over time and is reinforced through shared daily practice. It emphasises cooperation, relationality and sameness, in which the only hierarchies are those of skill and experience (Lave and Wenger 1991). Seen from this angle, the relevance of gender to the practice of lab science is difficult to establish. The labs we studied featured women in equal numbers to men in all positions, from professors to PhD students and technicians. Both men and women were involved in all aspects - practical, intellectual, mentoring, and social - of the epistemic community. Both labs had well-liked and academically admired senior post-doctoral researchers, male and female, who took the lead in activities from organising the celebrations for a successful PhD candidate, through teaching key experimental skills to new researchers, to co-authoring journal articles with junior colleagues. We noticed that the temporal routines, master-apprentice relations of the laboratories and even, to a certain extent, the experimental identities of the laboratory groups could be gendered. Experimental routines do not necessarily fit with family life; apprenticeship relations took on a different flavour when the authority figure approached their role with a paternal or a maternal sensibility; and soft toys and fridge magnets of animals are associated with femininity, for example. However, these gendered interpretations seemed marginal.
When we commented on them to participants they treated them as irrelevant, and were quick to point out other differences and similarities between men and women in the laboratory which did not chime with any kind of overarching notion of the gendered laboratory. These findings appear to reflect the enormous influx of women scientists into biology and other life sciences over the past 25 years, [6] and to support the claims of the women and science policy literatures that lingering structural blocks to the advancement of women in academic science are steadily being overcome (DTI 2003; Garforth and Kerr 2009). However, this vision of an increasingly equitable work culture is hard to reconcile with large-scale issues of women's underrepresentation and lack of progression in science careers. The mutually supportive epistemic community constituted in practical activity, then, is only one part of the story. In order to understand gender in the laboratory we need a better appreciation of how different kinds of practice are defined and valued, both within and outwith the lab. This requires us to place the laboratory in the wider organisation of the discipline and the university. --- Organising knowledge work 3.1 In this section we complement the image of the non-hierarchical epistemic community that emerges from our research focus on daily practice with a contrasting picture of gendered inequalities produced through differential evaluations of different types of work. We begin by describing the singular and linear career path for bioscientists that has become increasingly dominant in recent years, underwritten by powerful institutions including the main funding body for life sciences, the Biotechnology and Biological Sciences Research Council (BBSRC), universities themselves, and powerful professors in the field.
We go on to consider the tensions and inequalities that sit alongside this, focusing upon the organisational context of contractual insecurity, particularly for post-doctoral researchers. We then look at how these organisational conditions produce inequalities at the level of the laboratory itself, through a discussion of the gendered meanings and performances of 'housekeeping' work. We argue both that women tend to be concentrated in non-progressing, reproductive housekeeping roles in the laboratory, and that the association of certain kinds of epistemic work with women reinforces a gendered culture in which conventionally masculine attributes are valued and undervalued work is feminised. --- Precarious positions and the science career --- 3.2 In the KNOWING study (see Garforth and Kerr 2009; Garforth and Červinková 2009) and especially in the UK, we found a dominant discourse of the standardised career embedded in both national and organisational policies. The ideal career path in the biosciences was linear and concentrated, beginning with a PhD undertaken immediately after the undergraduate degree and progressing immediately to a period of short-term post-doctoral research posts. Post-doctoral research was defined as a transient career phase leading either to a permanent, core-funded lectureship, or an independent research fellowship. In 2002 the Roberts report estimated that only around 20% of post-doctoral researchers in science, engineering and technology subjects in the UK would find permanent posts in academic research (Roberts 2002: 12), and large numbers of post-doctoral researchers are continuously moving through the system. [7] This was reflected in our research findings, especially in a focus group undertaken with lecturers and professors in the biology department.
In the words of one of our participants, current policy in UK universities aims to 'make the postdoc duration short and well defined as a training period, and then they need to move on' [focus group, male professor]. [8] 3.3 Another professor emphasised that '...there is no such thing as a long term postdoc in the biosciences. I feel quite strongly that there shouldn't be any' [interview, female professor]. Others endorsed the idea that '[t]here's no position as researcher as such in the UK... no setting for using your skill to work in a group as a researcher' [focus group]. As one senior professor in our focus group put it, researchers should not remain in laboratories as 'perennial postdocs'. In the past, he explained, there were people 'who were somehow managing to spend nine and ten years maybe [working in labs] but weren't particularly going anywhere. [...] The pressure wasn't on. They were on two years, three years, one year with not much chance of an academic post. Maybe we've taken note of that now and are trying to avoid that situation.' [Focus group, male professor]. --- 3.4 The ability of postdocs to progress in science is of course predicated on their capacity to establish a reputation in their field in the form of concrete outputs: novel findings; highly cited publications; winning independent grant funding. Alternatively, researchers are encouraged and supported to leave academic research. This situation is captured in the circulation around the biology department and the main UK disciplinary grant funding council of the emphatic message that 'post-doc is not a career' [observation fieldnote, BBSRC early-career researchers support meeting].
--- 3.5 This sat somewhat awkwardly against our observations that much of the crucial work of organising the materiality of the laboratory, supporting the progress of more junior researchers, and even developing the detail of future research projects was performed by researchers in the most institutionally precarious and marginalised positions. These researchers were on fixed-term or otherwise open-ended contracts that depended on continuing external grant funding. They included one or two technicians, whose 'invisible' contributions to both the practical and the epistemic aspects of science work have been well acknowledged in the sociology of science (Shapin 1989; see also Goode and Bagilhole 1998). Here, however, we focus on the position of post-doctoral researchers. Our observations and discussions with people in the laboratory, about their own trajectories and those of others who had 'passed through', suggested that researchers in the labs divided into two groups. Many were progressing towards a stage (usually towards the end of a second post-doctoral research project) where they would try to convert their epistemic experience into an independent research fellowship or lectureship. Others had passed into a stage (having moved onto a third or subsequent fixed-term contract) where this seemed less likely. Researchers in these positions might be described as 'hanging on' or 'stuck' (Garforth and Červinková 2009). In terms of their job descriptions, they were fulfilling their roles: conducting original research, writing up findings in publications (at varying rates), developing project and funding applications. In terms of the epistemic community of the laboratory, they were supporting colleagues, juniors in particular, enhancing skills and techniques, and developing the group's epistemic work. However, their continuing employment and scientific career prospects were highly uncertain.
--- 3.6 We were particularly struck by the fact that most postdocs in these positions were female, with one notable exception. We observed the career trajectories of three experienced post-doctoral researchers in the embryology lab (one male and two female), and seven post-doctoral researchers in the plant lab (three male and four female). [9] During the study, all three men in the plant lab progressed along the career track into more permanent academic positions (one to a lectureship in a research-intensive UK university and two to independent research fellowship positions in Europe). The others -six women and one man -did not. One of the female postdocs in the plant lab was hoping, with the support of her lab leader, to move sideways into an academic lab manager role. One male postdoc in the embryology lab was in the course of negotiating his next position as he came towards the end of a second post-doctoral project. The other three female postdocs in the plant lab continued to work on precarious fixed-term research projects and expressed uncertainty about their futures. Of the two female postdocs in the embryology lab, one moved sideways to work on a research project at another university, and the other remained in the lab, working primarily as an editorial assistant on a journal edited by the lab's leader. Her position in the organisation of scientific work gives a particularly vivid example of the tensions between institutionalised career expectations and the epistemic life of the lab. During our study we observed an incident where this researcher ('T') was brought into the lab by a second year post-graduate student who was panicking because she could no longer see through the microscope some of the cell cultures she had been growing. 'T' expertly manipulated the microscope, examined the barely visible cell cultures, and diagnosed the problem, as well as reassuring the student. 
Seen as part of the practice and culture of the laboratory group, her skill and experience were invaluable; the postgraduate student called 'T' 'the goddess of the cells', and our fieldnotes observe that in the lab she was 'white-coated, expert, and reassuring'. As a 'perennial postdoc', however, she was institutionally marginal. 3.7 'Hanging on' and 'getting stuck' (Garforth and Červinková 2009) in non-progressing academic roles is a structural tendency of academic career systems whose disadvantages are borne by individual researchers rather than by the organisation as a whole. At the heart of this clash between organisational imperatives and the day-to-day practices of epistemic communities is a contradiction between what we call the visibly individual excellence that must be demonstrated by researchers in order to gain organisational recognition and career progression, and the everyday epistemic work on which communities of practice depend. The idea of gaining a 'name' in a particular field is telling; individual reputation and personal visibility are crucial in conventional definitions of scientific success. But this name must be built in and out of particular communities of practice. In an important sense, then, demonstrating visibly individual excellence means dissociating oneself from the community that makes it possible. We discuss some of the problematic dynamics produced by this situation next. --- Lab housekeeping 3.8 Our study suggested that the consequences of this tension between individual excellence and everyday epistemic work were gendered, albeit in complex ways. We have argued elsewhere that the intense linear career based on building visibly individual excellence reproduces a masculine model of scientific success (Garforth and Červinková 2009). The admittedly small-scale findings reported here tend to suggest that it also empirically benefits men.
In what follows, gender inequities in epistemic work are explored through the example of what we call laboratory housekeeping. Feminist studies of women in academia have argued that women are disproportionately 'responsibilised' (Morley 2003: 155-159) for communal caretaking, particularly in teaching and administrative roles, while men are positioned to take up competitive leadership and epistemic styles with the emphasis on producing research outputs (Bagilhole 2000; Acker and Feuerverger 1996). Here we show how these dynamics are reproduced within research practice itself. We found that lab housekeeping was mainly but not exclusively performed by women. Perhaps more importantly, however, the organisation of knowledge work was itself being gendered as feminised practices were devalued. 3.9 By housekeeping we refer to the range of tasks, activities and roles that are dedicated to the reproduction and maintenance of the laboratory. This includes taking care of workspaces, experimental materials, and technological equipment, similar to the activities of lab caretaking discussed in Knorr Cetina (1999). We extend this notion to refer also to the work of maintaining the epistemic community itself and its ongoing knowledge projects; Star and Strauss refer to similar sorts of activities as 'articulation work' (1999; see also Star 1995). For the most part, such work 'disappears into the doing' (Star 1995; Suchman 1995). Like domestic tasks of reproduction, it is repetitive, routinised and frequently undervalued. It constitutes the material foundations on which more valuable activities - experiments and analysis - are built, and hence involves the labour required both to support measurable, visible outputs of knowledge practices (findings, claims, results, papers) and keep the lab's individual and collective work moving. --- 3.10 Technicians of course have dedicated roles in this respect, but housekeeping is undertaken by everyone as part of their laboratory life.
Indeed, viewed from the angle of practice, few hard and fast distinctions can be drawn between 'real' epistemic work and 'merely' supporting activities. This has certainly been the argument from science studies, which insists that all knowledge is produced through practical activity, and explicitly rejects the idea that there is anything special about science's cognitive, theoretical or methodological processes (Latour and Woolgar 1986; Latour 1987). In rather different ways, this was also the case for bioscientists in our study. Few made hard and fast distinctions between the practical and analytical aspects of science when discussing their own work. Professors and laboratory leaders were more likely, in interview settings, to make claims to the specific value of 'intellectual' or 'analytical' vision in academic research (interview, female professor; focus group, biology professor), but even here they acknowledged that the bench was a space for analytic reflection and that analysing data to make findings could sometimes be seen in terms of a set of relatively prescribed operations. However, our data also suggests that this blurring of the distinction between routine practice and epistemic production does not translate easily into the structured ways that institutions value both specific researchers and the different kinds of work that they do. --- 3.11 We use the idea of housekeeping primarily as an analytical concept which emphasises gendered divisions of labour. However, it was suggested by the domestic metaphors that were present in the laboratories in the way that researchers and technicians described their work. During our initial tour of the plant lab one of the female researchers described one of the preparation rooms as the 'kitchen of the laboratory'. A senior lab technician described her role as the 'mother of the lab' - supervising machines and experimental preparation and 'clearing up' [observation fieldnote].
In the larger plant laboratory the senior technician coordinated these tasks, which were prominently displayed in the form of lists of roles such as 'filter hood monitor' or 'bin prefect' and so on. The jokey titles were perhaps designed to offset the worries that the senior technician expressed that allocating these jobs would be seen as fussy, bossy and infantilising [observation fieldnote]. Care-taking issues that concerned everyone were often raised in the weekly lab meeting. In the smaller lab, maintaining stocks and organising the work appeared to happen in an ad hoc and more or less spontaneous fashion, managed through day-to-day interactions rather than explicitly raised in particular settings. --- 3.12 As we have indicated above, all members of lab groups were expected to take part in these activities to some extent. However, it was very noticeable in our study that male researchers exhibited a reluctance both to undertake and especially to be publicly associated with mundane housekeeping tasks. They rarely engaged with these discussions in lab meetings, and responded with indifference when they were raised in lab settings. We have a number of examples of this in our data. Some incidents were relatively trivial and private, as we observed when a very junior female PhD student half-jokingly told off an experienced male postdoc in the embryology lab for not tidying up after himself: 'haven't you ever heard of emptying the bin?' [observation fieldnote]. It could also manifest in more serious tensions, such as when the embryology lab's incubating machines became infected, interfering with and slowing down the group's experimental work. Two of the lab's female researchers explained their growing frustrations with the lab's male postdoc, who was reluctant to break off from his experimental programme for the three or four days required to disinfect the machines [observation fieldnote].
In the plant lab, housekeeping issues arose from the group's increasing success in producing useful experimental material in the form of different lines of genetically modified seed stock. They found themselves with a haphazardly organised store of seeds which took up precious physical space in the laboratory. This became particularly pressing after the departure of a very successful male postdoctoral researcher, who in the previous year had made a significant finding and secured a lectureship at another university - leaving behind a good deal of unlabelled material and some rather frustrated colleagues. --- 3.13 Consequently the lab leader recognised a need to reinvent the group's archiving and storing systems. The task of conceptualising and communicating the new seed archive
system was given to a male member of the group who had recently completed a PhD and who was working there on a very short-term contract while applying for research fellowships. The lab leader took great pains to present this task as a necessary and valuable one, and other group members volubly agreed, especially in the lab meeting at which the researcher gave his presentation on the seed archive. Our observation notes stress that the presentation was very detailed, precise, systematic, but that he referred several times during the talk to how 'tedious' and boring this must be for everyone, or adopted an ironic tone to talk about how 'important and fascinating' the issue of the seed archive is.
After the meeting we asked if we could look at the slides from the presentation and he very reluctantly agreed, explaining that he doesn't like the thought of 'being known for giving such a boring talk' or 'being the seed archive guy.' Although far from conclusive, this male researcher seemed to be experiencing, and perhaps more importantly publicly performing, some discomfort and embarrassment at being associated with this trivial housekeeping work [italicised extracts from observation fieldnotes]. --- 3.14 This response contrasted strongly with the orientation of many (although by no means all) female researchers in the lab, including the two senior postdocs who carried out the physical reorganisation of the seed stock that the ex-PhD student had designed. These female researchers commented on the value of housekeeping to the epistemic life of the group and on their pleasure in this work, explicitly contrasting it with the instrumental modes of operating associated with the valued norms of academic career paths: I've always kind of thrown myself a bit more into sorting things out. And I've always been the one that's managed the students [...] There's certain people you can see in the lab that you know are going to run their own labs. They're very, very focused on their research... they can't be bothered with the minutiae of what's going on in the lab. They've got a very strong mentality about their experiments and stuff. [...] the focus is on the next thing and getting it done... [Interview, female postdoc]. --- 3.15 Other female researchers, who, unlike male colleagues moving through their laboratories, found themselves negotiating the problems of being a 'perennial postdoc', wondered whether they were 'too fulfilled by the bench stuff' [interview, female postdoc]. They contrasted this to their male colleagues' tendency to take the 'strategic view', always 'thinking about the possible outcome' [interview, female postdoc].
Others felt that they had few choices other than to pursue supporting roles in large laboratories, as they had not been ambitious enough earlier in their careers to aim at establishing their own lab at a later stage [observation fieldnote, female postdoc]. --- 3.16 These examples are suggestive rather than conclusive, but they do indicate some of the ways in which housekeeping was interpreted and performed in gendered ways by researchers in the laboratory. We do not want to suggest that only women undertook laboratory housekeeping. Indeed, we have been keen to assert that when knowledge production is viewed as a matter of practice rather than cognition, distinctions between reproductive/support roles and properly epistemic practice are hard to maintain and defend. We do suggest, however, that in the context of a career structure that recognises only findings and outputs, and of academic institutions which undervalue necessary epistemic and practical workers by allowing them to remain contractually insecure and non-progressing, housekeeping can be seen as a liability in relation to the production of visibly individual excellence. If such work is to be made into a successful career trajectory, its outcomes must be made visible, in the form of publications and individual reputation. This logic of academic life - emphasising product over process - has been reinforced and modified in recent years by audit and performance regimes, particularly in relation to quantifying research outputs (Strathern 2000; Shore and Wright 1999; Felt 2009). Research work that cannot be translated into publication capital, or remains invisible to audit and promotion mechanisms, might be valuable to oneself, one's peers and one's students, but it does not count in formal career terms. Active, goal-oriented, productive tasks are more highly valued than relational and reproductive ones.
Taken together with the large numbers of women who have entered the biosciences, in particular as post-graduates, post-doctoral researchers and technicians (see footnotes 7 and 8), we believe we can point to gendered organisational cultures that feminise work that is characterised as collective, materially oriented, ongoing and supportive, in contrast to highly valued masculine work which is associated with outputs, reputation, publications, individual excellence, and linearity. --- Conclusion --- 4.1 We have been concerned in this paper with gendered inequalities in academic research and the ways in which they are shaped both by epistemic practices and organisational structures. We have explored the kinds of work that are necessary to the production and reproduction of knowledge communities, which form the social and epistemic contexts of the production of facts, but which do not themselves become directly visible in terms of outputs and individual reputation. They include the relational work of supporting junior colleagues, the communal work of passing on skills and techniques, and the indispensable tasks of housekeeping and 'articulation work' that maintain and extend laboratory communities. As contemporary science studies has made clear, producing knowledge is not simply a matter of individual cognition or intellectual insight, but is rooted in collective action and is social through and through. Recognising the importance of this practical work by focusing on the epistemic community of the laboratory tends to produce an image of science work as cooperative and mutually supportive. However, feminist critiques of science have shown over and again that epistemic communities viewed at the institutional level are deeply marked by gender inequalities. In this paper we have tried to relate these institutional patterns of inequality to the ways in which everyday work in the laboratory is organised and performed by researchers. 
We have sought to open up the laboratory as both a community of practice and as a site where work is structured and valued in line with wider institutional priorities. --- 4.2 We do not endorse a monolithic, rigid or static conception of organisational structure that determines a gendered division of labour within epistemic communities. We recognise Law's (1994) account of the complexities of ordering in Organizing Modernity, which addresses the plural, dynamic and open-ended modes through which scientific work is arranged, and argues that 'organisation' is not a reified singular entity but rather a site of productive power. We recognise that organisations are concepts as much as entities, that their ontological status is not given, and that institutions can be seen as 'temporary patterning[s] of a mosaic of tactical interactions and alliances which form relatively unstable and shifting networks of power, always prone to internal decay and dissolution' (Reed 2006: 30). However, organisational processes and divisions - such as the two-tier fixed/permanent contract system in universities, the insecurity of contract research positions, and the differential distribution of reward for different kinds of work - shape the experiences of researchers and the unequal constitution of epistemic communities. Obdurate patterns of inequality, such as contractual status and gendered differentials in career success, cannot simply be dissolved into another type of practice. If ordering is a form of practice, it is one which can have unequal outcomes that need to be recognised. It is perhaps not surprising that formal organisations - universities, funding bodies - do not recognise the necessary and communal everyday work of housekeeping in epistemic communities. 'Deleting the work' (Law 1994: 132; see also Star 1995) is an unavoidable part of ordering - organisations need to ignore, simplify and reify complex processes.
However, high profile initiatives to support women's progress and participation in science are unlikely to be successful unless we also ask basic political and feminist questions:...cui bono? Who is doing the dishes? Where is the garbage going? What is the material basis of practice? Who owns the means of knowledge production? (Star 1995: 3). --- 4.3 The idea of knowledge work as messy practice and of ordering as dynamic, contingent and prone to dissolution makes science studies resistant to the usefulness of the concept of 'community' -it is too rigid, too stable, too bounded. It assumes a social (and/or epistemic) cohesion that is seen as part of the problem. The preferred focus is on dynamic, shifting knowledge networks that come together around problems rather than in institutions. The idea of community seems too close to the Mertonian emphasis on the institutional nature of science, with its stable norms and structures (Merton 1973). But what is lost in this shift away from institutions is, perhaps ironically, a sense of individuals and their experiences. If the idea of epistemic community has any value, it is precisely its multivalence. It allows us to see shared practices and tacit knowledges, but it also functions as a lens to direct our attention to who is involved in epistemic production, and how, in the context of organisational structures of value. Seen from this perspective, epistemic communities allow us to bring to light both the work and the organising principles that reproduce gendered inequalities in science. Notes [1] It is emphatically not the case that feminist science studies are reducible to these kinds of policy and liberal equity concerns, as our brief outline of debates in feminist epistemology begins to indicate. Wylie et al. (2009) offer a useful contemporary overview of the range of strands of analysis at work in feminist science studies. 
[2] KNOWING (Knowledge, Institutions and Gender: An East-West Comparative Study) was funded by the European Community's Sixth Framework Programme for Research and Technological Development; see acknowledgements, above. The final international comparative project report, Felt, Ulrike (ed.), Knowing and Living in Academic Research (Prague: Institute of Sociology AS CR), can be found online at <http://www.knowing.soc.cas.cz/?page=materials>. [3] A number of semi-structured individual interviews and focus groups were conducted as part of the KNOWING project. Here we refer specifically to four interviews with bioscience researchers who worked in the laboratories where we undertook our observation studies (two female post-doctoral researchers; one male post-doctoral researcher; one female professor), and to a focus group conducted with academic staff working in the wider biology department (two female lecturers and two male professors). [4] The identities of individual participants and institutions have been concealed. Some details of the work of the two lab groups have been changed in order to preserve the anonymity of participants. [5] In a focus group with different lab leaders within the same biology department, however, we found that earlier-career staff (in this case two female lecturers) did still work at the bench, and gave accounts in which they emphasised their reluctance to give up experimental work and with it a sense of ownership and control over the whole research process. [6] The proportion of female academics in the biosciences in the UK was around 40% at the time of our study, notably higher than the proportion for the natural sciences as a whole, which is around 26% (AUT 2004: 12-13). According to HESA statistics for biosciences from the academic year 2003-04, women comprised 11.5% of professors, 19.5% of senior lecturers and senior research fellows, 37.3% of lecturers and 45.2% of researchers.
[7] In 2005 short-term project and programme funding accounted for 68% of the total research income of UK Higher Education Institutions (Universities UK 2007). In the academic year 2005-06 just under 41% of the UK's 164,875 academic staff were on a fixed-term contract (HESA n.d staff data tables 2007). Most of them are research-only staff, who are particularly numerous in the biological sciences (AUT 2004;UCU 2007). Women are over-represented generally in contract research-only roles in UK higher education institutions (AUT 2004;2002). [8] For the remainder of this paper we adopt the conventions of the participants in our study by using the term 'postdoc' to refer to post-doctoral researchers working on short-term externally funded research contracts in the biosciences. [9] In fact there were 5 female post-doctoral researchers in the plant lab during the study, but one had only recently begun her first post-doc project having completed her PhD some months before, so we do not include her case here. There was also an unpaid post-doctoral researcher loosely affiliated with the embryology lab. She had also completed her PhD very recently, and during the course of the observation fell out of contact with the lab, so we do not include her case here.
Over the past thirty years there has been a significant turn towards practice and away from institutions in sociological frameworks for understanding science. This new emphasis on studying 'science in action' (Latour 1987) and 'epistemic cultures' (Knorr Cetina 1999) has not been shared by academic and policy literatures on the problem of women and science, which have focused on the marginalisation and under-representation of women in science careers and academic institutions. In this paper we draw on elements of both these approaches to think about epistemic communities as simultaneously practical and organisational. We argue that an understanding of organisational structures is missing in science studies, and that studies of the under-representation of women lack attention to the detail of how scientific work is done in practice. Both are necessary to understand the gendering of science work. Our arguments are based on findings of a qualitative study of bioscience researchers in a British university. Conducted as part of a European project on knowledge production, institutions and gender, the UK study involved interviews, focus groups and participant observation in two laboratories. Drawing on extracts from our data we look first at laboratories as relatively unhierarchical communities of practice. We go on to show the ways in which institutional forces, particularly contractual insecurity and the linear career, work to reproduce patterns of gendered inequality. Finally, we analyse how these patterns shape the gendered value and performance of 'housekeeping work' in the laboratory.
INTRODUCTION Depression is one of the most common internalizing problems in adolescence (Kessler et al., 2001; Pennant et al., 2015; Xu et al., 2015). A survey performed by the National Children's Bureau (NCB) in the United States in 2016 showed that the 1-year prevalence of adolescent depression was around 11%. Wei (2008) also found that approximately 33% of Chinese adolescents experienced depressive symptoms during the past 3 years. Moreover, depression is associated with negative consequences, including academic difficulties, interpersonal dysfunction, as well as health problems (Berndt et al., 2000; Zlotnick et al., 2000; Korczak and Goldstein, 2009), and it may persist into adulthood if left untreated (Aalto-Setälä et al., 2002). Many prior studies have demonstrated that family context and internal resources are potential factors associated with adolescent depression (Elovainio et al., 2012; Moksnes et al., 2012; Sichko et al., 2015). Within the family context, socioeconomic status (SES) and parenting are generally viewed as two fundamental factors. SES is a multidimensional concept and most contemporary researchers agree that it is represented by a combination of family income, parental education and occupational status (Bradley and Corwyn, 2002; Conger and Donnellan, 2007; Ye and Wu, 2012). The link between SES and adolescent depression has been found across both Western and Chinese populations (Amone-P'Olak et al., 2009; Ye and Wu, 2012). According to the family investment model, compared to parents with high SES, parents with low SES have less financial capital and lower education and occupational status, making it less likely that they are able to provide good material conditions and engage in positive parenting behaviors, thereby increasing the risk for the development of emotional problems in their adolescent children (Conger and Donnellan, 2007).
Parenting as a proximal family context consists of different dimensions and more specific factors derived from those dimensions. For example, Parker et al. (1979) developed a questionnaire with two dimensions of care (e.g., emotional warmth, closeness and empathy) and overprotection (e.g., control, excessive contact, prevention of independent behavior) to measure parenting practice. Maccoby and Martin proposed a two-dimensional model with responsiveness (e.g., warmth and involvement) and demandingness (e.g., control and monitoring) (see also Piko and Balazs, 2012). And Rapee (1997) regarded parental control and rejection as two main parenting dimensions. A large body of reviews and empirical studies investigating the association of parenting and adolescent depression showed that warmth, care, acceptance and other positive parenting behaviors were negatively linked to depression (Milevsky et al., 2007; Brand et al., 2009a,b; Yap et al., 2014; Wang Y.C. et al., 2015; Little et al., 2017), while negative parenting characterized by harshness, control and neglect was a risk factor for depression (Aunola et al., 2013; Reising et al., 2013; Frazer and Fite, 2016; Murdock et al., 2018). Regardless of the differences in dimensions of parenting practice, there is consistency that parental care and control have a close association with individual depression. For instance, parental care has been shown to have a negative association with child depression (Morris and Oosterhoff, 2016; Ono et al., 2017), and Campos et al. (2010) further found that individuals with depression perceived less maternal care. Furthermore, parental control, including psychological and behavioral control, is a risk factor for depression (Parker, 1983; Frazer and Fite, 2016). Care and control are regarded as two typical dimensions of parenting and it is of particular significance to study their specific effects on Chinese adolescent depression (Wang and Zhang, 2007; Xia and Liang, 2016).
Although a universal two-child policy was introduced in China recently, the one-child policy enforced since 1979 may have produced parenting behaviors that are unique and focus solely on the only child, with both parents trying their best to give their children care, love and concern (Deutsch, 2006). With the far-reaching influence of collectivism, Confucianism and a fixed family hierarchy on Chinese society, children are expected to be obedient to authority and they usually experience more control from parents (Dwairy and Achoui, 2010). Although children who grow up in a Chinese cultural setting generally consider parental control as normal, a recent study found that these actions were associated with child problem behaviors (Pomerantz and Wang, 2009). The family stress model (Conger et al., 2002) and many empirical studies have revealed that SES may affect parenting behaviors. For example, high-income and well-educated parents tend to display more care, warmth and supportive parenting (Zhang, 1999; Waylen and Stewartbrown, 2010), while parents with financial stress are more likely to experience depression, which in turn exacerbates problems in parenting (Ponnet et al., 2016; Devenish et al., 2017). In addition, maternal current unemployment has been shown to be associated with depression in adolescence through ineffective child-rearing behavior (Mcloyd et al., 1994). Therefore, it is possible that parental income, education or occupational status may be linked with adolescent depression through parenting behaviors. According to Benson (2002), internal resources refer to personal characteristics which influence individual development. Recently, sense of coherence (SOC), introduced by Antonovsky (1979) as a personality factor, has attracted more attention, and is defined as the extent to which one has a pervasive, enduring and dynamic feeling of confidence.
Antonovsky (1987) suggested that family context was one of the most important factors to have a close association with individual SOC. Children living in high-SES backgrounds usually possess adequate resources (Conger and Donnellan, 2007), and they are more likely to develop stronger SOC compared with children in low SES (Sagy and Antonovsky, 2000). SOC enables people to cope with stress in a health-promoting manner and individuals with higher SOC experience fewer psychological problems (Simonsson et al., 2008). Previous studies have demonstrated the association between SOC and individual depression (Konttinen et al., 2008). For example, Moksnes et al. (2012) analyzed data from 1209 adolescents in Mid-Norway and found that adolescents with a strong SOC exhibited lower levels of depression. In addition, some theoretical and empirical studies found that parenting as another vital family context was related to individual SOC. For example, parental emotional closeness and affection were associated with higher levels of adolescent SOC (García-Moya et al., 2013), and maternal overcontrol was associated with lower levels of adolescent SOC (Wang et al., 2018). Therefore, it is reasonable to expect that SES and maternal care/control parenting would be associated with adolescent depression through their SOC. Although many valuable findings regarding the relationships between SES, parenting, adolescent SOC and their depression have been reported, potential mechanisms among these relationships need to be further studied. Firstly, the aforementioned studies only examined single indicators of SES rather than a combination of SES with family income, parental education and occupational status. Secondly, the main findings were primarily based on Western cultures.
Furthermore, according to Bronfenbrenner's ecological systems theory (Bronfenbrenner and Morris, 2006), SES and parenting are usually regarded as distal and proximal factors respectively, with the distal factor affecting individual development through the proximal one in the family context. However, it remains unknown whether SES is associated with adolescent SOC and depression through maternal care/control. Additionally, to our knowledge, few studies have examined the relationships between parenting and adolescent SOC, and less is known about the relationships between maternal care/control and adolescent SOC, especially in Chinese society. Finally, given that adolescents in lower-income families are more likely to experience depression compared to adolescents in families with higher income, it is necessary to ascertain the relationships among SES, maternal care/control, adolescent SOC and depression to provide empirical support for improving the mental health of adolescents in low- and middle-income families. Therefore, the present study examined the direct association between SES and adolescent depressive symptoms and the indirect association through maternal care/control and adolescent SOC in a community sample of Chinese adolescents. We hypothesized that the main variables of SES, maternal care/control, adolescent SOC and depressive symptoms were related to each other. We also hypothesized that SES was associated with adolescent depressive symptoms indirectly through maternal care/control and adolescent SOC separately and sequentially. --- MATERIALS AND METHODS --- Participants Participants consisted of 783 middle school students (416 boys and 367 girls) from three public middle schools and 437 high school students (217 boys and 220 girls) from two public high schools in Jinan, an eastern Chinese city. The mean ages of middle school and high school students were 13.33 years (SD = 1.00) and 16.36 years (SD = 1.04), respectively.
Included participants were living in a two-parent family and the large majority of them came from low- and middle-income families. Nearly 11% of the households had monthly income less than 1000 CNY (approximately $150), 48% between 1000 and 3000 CNY, and 41% more than 3000 CNY (approximately $450). In the sample, 6% of the fathers and 10% of the mothers had completed primary school education or less, 31% of the fathers and 32% of the mothers had a secondary school education, 28% of the fathers and 26% of the mothers had a high school education, 29% of the fathers and 28% of the mothers had a college/university education, and 6% of the fathers and 4% of the mothers had a postgraduate education. In terms of employment, 15% of the fathers and 28% of the mothers were unemployed, 47% of the fathers and 40% of the mothers were working class, and 38% of the fathers and 32% of the mothers had a professional or semiprofessional position. --- Procedure Prior to data collection, we introduced the study aims and procedure to class master teachers. With the permission of master teachers, invitation letters including study information and consent forms were delivered to students and their parents. After master teachers obtained written informed consent from the participating adolescents and their parents, participants were asked to complete the self-report Chinese-language questionnaires, including SES items, the Parental Bonding Instrument (PBI), the Sense of Coherence Scale (SOC-13), and the Center for Epidemiologic Studies Depression Scale (CES-D), during the normal school day. To ensure standardization of procedures across classrooms, members of the trained research team supervised questionnaire administration. The study procedures were conducted following approval by the Institutional Review Board of Shandong Normal University.
--- Measures Depressive Symptoms Adolescent depressive symptoms were assessed using the 20-item Chinese version of the Center for Epidemiologic Studies Depression Scale (CES-D; Chen et al., 2009), which was designed to measure depressive symptomatology in the general population. It consists of four components of depressive symptomatology: somatic symptoms (e.g., "I did not feel like eating; my appetite was poor"), depressed affect (e.g., "I was bothered by things that usually don't bother me"), positive affect (e.g., "I felt that I was just as good as other people") and interpersonal relations (e.g., "People were unfriendly"). The CES-D has been widely used and has demonstrated good internal reliability and validity (Hu et al., 2014). Adolescents were required to rate each item on a 4-point scale ranging from 0 (rarely or none of the time) to 3 (most or all of the time), with higher scores reflecting higher levels of depressive symptoms. In this study, the Cronbach's alpha coefficient for depressive symptoms was 0.87. --- Socioeconomic Status Five indicators were employed to determine SES (Bradley and Corwyn, 2002): monthly family income (henceforth "family income"), paternal and maternal education, and paternal and maternal occupational status. Family income was measured on a 3-point scale ranging from 1 (1000 CNY or less; approximately $150) to 3 (3000 CNY or above; approximately $450) (Wang M.F. et al., 2015). Parental education was coded on a 6-point scale ranging from 1 (primary school education or less) to 6 (postgraduate education) (Xu et al., 2009). Following recommendations by Fuligni and Zhang (2004), parental occupational status was coded on a 3-point scale ranging from 1 (unemployed) to 3 (professional or semi-professional). Building on previous research (Bradley and Corwyn, 2002), family income, parental education and occupational status were standardized using z-scores and then summed so that higher scores reflected higher family SES.
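The composite described above (z-standardize each indicator over the sample, then sum) can be sketched in code. This is an illustrative sketch, not the authors' analysis script; the indicator names and example values are hypothetical toy data coded on the scales described in the text.

```python
# Sketch of a composite SES score: z-standardize each indicator across
# families, then sum the z-scores (higher composite => higher family SES).
# Indicator names and values are hypothetical, not the study's data.
from statistics import mean, pstdev

def zscores(values):
    """Standardize a list of values to mean 0, SD 1 (population SD)."""
    m, s = mean(values), pstdev(values)
    return [(v - m) / s for v in values]

def composite_ses(indicators):
    """Sum z-scores across indicators; each value is a list over families."""
    z_by_indicator = [zscores(vals) for vals in indicators.values()]
    n = len(next(iter(indicators.values())))
    return [sum(z[i] for z in z_by_indicator) for i in range(n)]

# Three hypothetical families, five indicators each, coded as in the text:
# income 1-3, education 1-6, occupational status 1-3.
data = {
    "family_income": [1, 2, 3],
    "father_edu":    [2, 4, 6],
    "mother_edu":    [1, 3, 5],
    "father_occ":    [1, 2, 3],
    "mother_occ":    [1, 2, 3],
}
ses = composite_ses(data)
```

Summing z-scores weights each indicator equally regardless of its original metric, which is why the indicators are standardized before aggregation.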
--- Maternal Care and Control Maternal care and control were assessed using the 23-item Chinese version of the Parental Bonding Instrument (PBI; Yang et al., 2009), which includes care (e.g., "Spoke to me in a warm and friendly voice"), control (e.g., "Did not want me to grow up") and encouragement of autonomy (e.g., "Let me do those things I liked doing") subscales, separately for mothers and fathers. The maternal care subscale (11 items) and control subscale (6 items) were the primary focus of the current study. The PBI has been frequently used to measure fundamental parenting behaviors and has demonstrated good internal reliability and validity (Chen et al., 2011; Tsaousis et al., 2012). Adolescents reported retrospectively on the parenting received from their mothers using a 4-point response scale ranging from 0 (very unlike) to 3 (very like). Six items of the maternal care subscale were reverse scored. The maternal care and control variables were calculated separately by summing the scores of the subscale items, with higher scores reflecting higher levels of maternal care and control. In this study, the Cronbach's alpha coefficients for maternal care and control were 0.71 and 0.64, respectively. Although the internal consistency of the maternal control scale was not high, it was consistent with other studies using the control scale (Chambers et al., 2000). --- Sense of Coherence Sense of coherence was assessed via the 13-item Chinese short version of the 29-item Orientation to Life Questionnaire (OLQ; Bao and Liu, 2005), which consists of three subscales: comprehensibility (5 items; e.g., "Do you have the feeling that you are in an unfamiliar situation and don't know what to do?"), manageability (4 items; e.g., "How often do you have feelings that you're not sure you can keep under control?") and meaningfulness (4 items; e.g., "How often do you have the feeling that there is little meaning in the things you do in your daily life?").
The short version of the OLQ has been shown to be valid and reliable (Liu et al., 2006). Adolescents responded on a 7-point scale ranging from 1 (never happened) to 7 (always happened). Five items were reverse scored. High scores indicated a high level of SOC. In this study, the Cronbach's alpha coefficient for SOC was 0.85. --- Data Analyses SPSS 19.0 and MPLUS 7.0 were employed to conduct all analyses. Missing data in MPLUS were accounted for through full information maximum likelihood. Firstly, univariate analysis of variance (ANOVA) was used to examine gender and school stage (middle school and high school) differences in depressive symptoms, maternal care/control and SOC. Secondly, bivariate correlations were conducted to examine associations between main variables. Next, structural equation modeling was used to test whether SES was directly associated with adolescent depressive symptoms and indirectly through maternal care/control and/or adolescent SOC separately or sequentially. Following recommendations by Rogers and Schmitt (2004), items from maternal parenting and adolescent depressive symptoms were randomly assigned to three parcels separately, providing three indicators of each latent variable. SES was identified by five indicators: family income, paternal and maternal occupational status, and paternal and maternal education. The latent variable of adolescent SOC was created using three indicators: comprehensibility, manageability and meaningfulness. Model fit was evaluated using the following indices: the Comparative Fit Index (CFI; >0.90, acceptable; >0.95, good); the Root Mean Square Error of Approximation (RMSEA; <0.08, acceptable; <0.05, good); and the Standardized Root Mean Square Residual (SRMR; <0.08, acceptable; <0.05, good) (Hu and Bentler, 1999). --- RESULTS --- Preliminary Analyses Table 1 contains the means, standard deviations and correlations among all variables.
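The random item-to-parcel assignment described under Data Analyses (Rogers and Schmitt, 2004) can be sketched as follows. This is a minimal illustration, not the authors' code; the item labels, seed, and toy responses are hypothetical, and each parcel score (here, the mean of its items) serves as one indicator of the latent variable.

```python
# Sketch of random item parceling: scale items are randomly distributed
# across three parcels, and each respondent's parcel score is the mean of
# their answers to that parcel's items. Labels and data are hypothetical.
import random

def make_parcels(items, n_parcels=3, seed=42):
    """Shuffle item labels, then deal them round-robin into n_parcels."""
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    shuffled = items[:]
    rng.shuffle(shuffled)
    parcels = [[] for _ in range(n_parcels)]
    for i, item in enumerate(shuffled):
        parcels[i % n_parcels].append(item)
    return parcels

def parcel_score(responses, parcel_items):
    """Parcel indicator = mean of a respondent's answers to its items."""
    return sum(responses[item] for item in parcel_items) / len(parcel_items)

care_items = [f"care_{i}" for i in range(1, 12)]  # 11 maternal-care items
parcels = make_parcels(care_items)

respondent = {item: 2 for item in care_items}     # toy responses (0-3 scale)
scores = [parcel_score(respondent, p) for p in parcels]
```

Parceling reduces the number of indicators per latent variable and tends to yield more normally distributed indicators than single items, at the cost of masking item-level misfit.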
Main variables had skewness and kurtosis that fell within the acceptable ranges of ±2.0 and ±7.0, respectively (Finney and DiStefano, 2013). Univariate analysis of variance showed that girls scored higher on depressive symptoms, F(1, 1161) = 18.91, p < 0.001, partial η² = 0.02, and maternal control, F(1, 1210) = 6.17, p < 0.05, partial η² = 0.01, and lower on SOC than boys, F(1, 1181) = 7.10, p < 0.01, partial η² = 0.01. Middle school students scored higher on maternal care, F(1, 1206) = 11.31, p = 0.001, partial η² = 0.01, and SOC, F(1, 1181) = 28.65, p < 0.001, partial η² = 0.02, and lower on depressive symptoms than high school students, F(1, 1161) = 27.94, p < 0.001, partial η² = 0.02. SES, maternal care, and SOC were significantly and positively correlated with each other, and they showed significant negative correlations with depressive symptoms; maternal control was significantly and positively correlated with depressive symptoms and negatively with SOC, but there was no significant correlation with SES. Thus, maternal control was excluded from later analyses. --- Structural Equation Model Analyses First, the measurement model including four latent variables and fourteen observed variables was tested and it provided a good fit to the data, χ²(71) = 415.16, CFI = 0.957, SRMR = 0.033, and RMSEA = 0.063. We found that factor loadings were significant for indicators on latent variables, and all latent variables in the measurement model were significantly related to each other. Next, a structural model was used to examine the indirect effects of maternal care and adolescent SOC on the relationship between SES and adolescent depressive symptoms. Figure 1 shows the final model and standardized regression coefficients.
Although the original model fitted well, the direct effect of SES on adolescent SOC was not significant, β = 0.01, SE = 0.03, p > 0.05, and SES was not associated with adolescent depressive symptoms through adolescent SOC, β = -0.01, 95% CI = [-0.064, 0.004], SE = 0.03, p > 0.05. Therefore the path from SES to adolescent SOC was removed to make the model parsimonious and the new model was examined. The new model also fitted well, χ²(68) = 205.99, CFI = 0.983, SRMR = 0.028, and RMSEA = 0.041. The total effect of SES on adolescent depressive symptoms was significant, β = -0.20, SE = 0.03, p < 0.001. After accounting for maternal care and adolescent SOC, the direct effect of SES on adolescent depressive symptoms was also significant, β = -0.07, SE = 0.03, p < 0.05. [FIGURE 1 | The indirect effects of maternal care and SOC in the relationships between SES and adolescent depressive symptoms. FI, family income; POS, paternal occupational status; MOS, maternal occupational status; PE, paternal education; ME, maternal education; SES, socioeconomic status; SOC, sense of coherence; -0.20**, total effect; -0.07*, direct effect; **p < 0.01; ***p < 0.001.] The specific indirect effect of SES on adolescent depressive symptoms through maternal care was significant, β = -0.02, 95% CI = [-0.043, -0.004], SE = 0.01, p < 0.05, effect size = 10%, as was the specific indirect effect of SES on adolescent depressive symptoms through maternal care and adolescent SOC sequentially, β = -0.11, 95% CI = [-0.137, -0.075], SE = 0.02, p < 0.001, effect size = 55%. The combination of SES, maternal care and adolescent SOC accounted for 65% of the variance in depressive symptoms (R² = 0.65, p < 0.001).
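As a minimal illustration of how a specific indirect effect in sequential mediation is computed (the product of the standardized path coefficients along one chain), consider the sketch below. The coefficients are hypothetical placeholders, not the paths estimated in the fitted model; significance testing of such products would additionally require bootstrap or delta-method standard errors, which are omitted here.

```python
# Sketch of indirect effects in a chain X -> M1 -> M2 -> Y: each specific
# indirect effect is the product of the path coefficients along its chain.
# All coefficients below are hypothetical, for illustration only.

def indirect_effect(*paths):
    """Product of the path coefficients along one mediation chain."""
    result = 1.0
    for p in paths:
        result *= p
    return result

# Hypothetical standardized paths: SES -> maternal care (a),
# maternal care -> SOC (b), SOC -> depressive symptoms (c),
# and a direct maternal care -> depressive symptoms path (d).
a, b, c, d = 0.25, 0.40, -0.55, -0.20

seq_indirect = indirect_effect(a, b, c)   # SES -> care -> SOC -> depression
simple_indirect = indirect_effect(a, d)   # SES -> care -> depression
total_indirect = seq_indirect + simple_indirect
```

Summing the specific indirect effects gives the total indirect effect, which together with the direct path reconstitutes the total effect of X on Y in a linear model.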
Given that there were gender and school stage differences in the outcome variables, we tested whether the indirect pathways differed for girls versus boys and for younger versus older adolescents using multiple-group analysis. We first examined measurement invariance across groups by comparing the fit of a constrained measurement model (in which factor loadings were fixed across groups) with the fit of an unconstrained model (in which factor loadings were allowed to vary across groups). Then we tested structural equivalence by comparing a constrained structural model (in which all pathways were constrained to be equal across groups) with the constrained measurement model. According to Cheung and Rensvold (2002), a difference in CFI (ΔCFI) of less than 0.01 suggests model invariance. For gender, measurement invariance (ΔCFI = 0.00) and structural equivalence (ΔCFI = 0.001) were obtained. For school stage, both measurement invariance (ΔCFI = 0.00) and structural equivalence (ΔCFI = 0.002) were verified as well. Therefore, the indirect pathways did not differ between boys and girls or between younger and older adolescents; that is, gender and school stage did not moderate the indirect relationships. --- DISCUSSION The present study extended our understanding of the underlying links between family SES and adolescent depressive symptoms in Chinese culture by examining whether family SES was associated with adolescent depressive symptoms indirectly through maternal parenting and adolescent SOC, using a Chinese low- and middle-income sample. The results partially supported our hypotheses, suggesting that SES was associated with adolescent depressive symptoms not only through maternal care separately but also through maternal care and adolescent SOC sequentially. Consistent with previous findings (Vandervalk et al., 2004; Wang Y.C.
et al., 2015), in this study girls reported more depressive symptoms than boys, and students in high school reported more depressive symptoms than those in middle school. The current study also found that girls reported more maternal control and lower SOC than boys, and middle school students reported higher SOC and more maternal care than those in high school. As this was a Chinese sample, it is possible that Chinese mothers were influenced by traditional Chinese gender role expectations, according to which boys should be encouraged to be independent whereas girls should be granted less autonomy and more control (Shek, 2006). The extant research (Moksnes et al., 2012, 2013) showed consistent findings for gender differences in SOC, suggesting that relative to boys, girls tended to view the stress they encountered in the environment as less controllable and organized their own resources less effectively. In the present study, lower SOC, less maternal care and more maternal control were associated with higher levels of depressive symptoms, which helps explain why girls reported more depressive symptoms than boys and why students in high school reported higher levels of depressive symptoms than those in middle school. One study with a sample of 40- to 70-year-old adults found that SOC tended to increase with age (Eriksson et al., 2007), but it is difficult to draw comparisons with the current findings given that the populations in the two studies differ substantially. Thus, further work to explore the developmental tendency of SOC would be warranted. The results of this study suggested that the lower the family SES, the less maternal care was displayed to children and the higher the level of adolescent-reported depressive symptoms.
Similar results can be found in previous studies, which show that parents with low SES experience more depression and anxiety themselves (Phongsavan et al., 2006) and are more likely to behave toward their children in punishing and less caring ways, and these children are also more likely to experience internalizing problems (Bøe et al., 2014). Although our findings were similar to previous studies, the results need to be interpreted with caution given the weak indirect effect of maternal care on the relationship between SES and adolescent depressive symptoms. Contrary to our expectations and previous studies conducted in other cultures (Hoff et al., 2002), this study did not find a direct association between SES and maternal control, which may be partly due to Chinese cultural practices relevant to child-rearing. To some extent, maternal control is similar to guan, the traditional Chinese parenting notion of "more love and more discipline" (Chao, 1994). Chinese mothers, regardless of their family SES, are generally inclined to exert guan in their children's lives. Additionally, the lack of association also suggests that there may have been other variables our study did not include (e.g., maternal emotional problems) through which SES was indirectly associated with adolescent depressive symptoms. As hypothesized, we found that SES was associated with adolescent depressive symptoms through maternal care and adolescent SOC sequentially. This supports existing theory and empirical findings that the external context, including distal factors (such as SES) and proximal factors (such as maternal care), can contribute to the development of internal resources (such as adolescent SOC) (Benson et al., 2006) and then to individual psychosocial development (such as adolescent depressive symptoms) (Benson, 2003; Benson et al., 2006).
It should be noted that, because of the relatively small direct effect (β = -0.11), the effects of maternal care and adolescent SOC on the relationship between family SES and adolescent depressive symptoms need to be further examined in future research. Of note, gender and school stage were not found to moderate the indirect pathways of maternal care and adolescent SOC on the relationship between SES and adolescent depressive symptoms. This may suggest that although there were gender and school stage differences in the main variables, the relational mechanisms between SES, maternal care, SOC and depressive symptoms were similar for boys and girls and for middle and high school students. Given that few studies have examined school stage as a moderator of the indirect pathways from external context to individual developmental outcomes, further research is required to reach strong conclusions. --- Limitations and Recommendations for Future Research One limitation of the present study is that we focused on the relationships between SES and adolescents' perceived depressive symptoms, maternal care/control and SOC; therefore, adolescent self-report was used to obtain all study variables. The results may be limited by relying solely on a single reporter, and using multiple-informant measures may reduce the influence of common method variance in future research. This was a community sample and we used a depressive symptoms measure rather than diagnostic assessments of depression; therefore, any conclusions cannot be generalized to a clinical population. Further research could employ clinical populations to examine these relationships further. The second limitation is that we used cross-sectional data to examine the indirect relationships, meaning interpretation of the indirect effects of maternal care and adolescent SOC on the relationship between SES and adolescent depressive symptoms should be cautious and no causal conclusions can be established.
Longitudinal designs should be used to establish the sequential nature of the relationship between SES, parenting, SOC and depressive symptoms in future work. Thirdly, our findings were based on data reported by participants mostly from low- and middle-income families, which could limit generalizability to participants from higher-income families. Finally, it is generally known that mothers tend to have more emotional interaction with their children than fathers in both eastern and western societies. Especially in Chinese cultures, due to the historical tradition and social division of labor of "men working outside while women taking care of the family inside," it is reasonable to assume that mothers in China are involved much more in their children's lives relative to fathers. Furthermore, Liu and Wang (2015) also found that maternal parenting behaviors rather than paternal parenting influenced children's internalizing problems. Therefore, the present study only focused on the effect of maternal care and control. It should be noted that both mothers and fathers play unique roles in children's development (Milevsky et al., 2007). In the past two decades, Chinese economic development has led to an increase in the number of women who work, with fathers gradually becoming more engaged in their children's daily lives (Zhang, 1999). Indeed, including both maternal and paternal parenting in the same model may provide a more complete picture of how parental behavior is related to adolescent development. Further studies should endeavor to include maternal as well as paternal parenting. Despite these limitations, the current study broadened earlier research by investigating whether SES was associated with adolescent depressive symptoms through maternal care and adolescent SOC separately and sequentially in a Chinese sample. The present study had both theoretical and practical implications.
Although family SES has long been implicated as an important determinant of adolescent depression (Bøe et al., 2014), investigations exploring relational mechanisms between SES, parenting, SOC and adolescent depressive symptoms, particularly in the Chinese culture, were relatively scarce. In particular, we found that SES was associated with adolescent depressive symptoms indirectly not only through maternal care separately but also through maternal care and adolescent SOC sequentially. Findings highlight that more attention should be given to low- and middle-income families, where children are more likely to experience negative maternal parenting and may be more likely to experience depressive symptoms. --- ETHICS STATEMENT This study was carried out in accordance with the recommendations of the Institutional Review Board of Shandong Normal University (Jinan, China). The protocol was approved by the Institutional Review Board of Shandong Normal University. The parents of all participants signed written informed consent in accordance with the Declaration of Helsinki and its later amendments. --- AUTHOR CONTRIBUTIONS FX wrote and revised the whole manuscript. WC wrote and revised the whole manuscript. MP wrote some sections, gave suggestions, and revised and polished the whole manuscript. TX collected the data for this manuscript and consulted relevant literature. --- Conflict of Interest Statement: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
The current study investigated whether socioeconomic status (SES) was associated with adolescent depressive symptoms through maternal parenting and adolescent sense of coherence (SOC). Using a sample of 1220 Chinese adolescents, it was found that SES, maternal care, and adolescent SOC were positively related to each other and negatively related to adolescent depressive symptoms. Maternal control was positively related to adolescent depressive symptoms and negatively related to their SOC, but not significantly related to SES. Using structural equation modeling, we found that SES was associated with adolescent depressive symptoms indirectly through maternal care separately, as well as through maternal care and adolescent SOC sequentially. This study extended our understanding by showing possible indirect pathways by which family contextual factors and individual internal resources may operate, separately and sequentially, on adolescent depressive symptoms. The overall results highlight the need to study adolescent depressive symptoms in order to find external and internal protective factors for maintaining adolescent emotional health, especially in families with relatively lower income.
INTRODUCTION At the close of 2019, a novel outbreak of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) spread across the world from Wuhan, China, causing severe acute respiratory disease (Lu, Stratton, & Tang, 2020). The WHO declared COVID-19 a global pandemic in March 2020. The COVID-19 outbreak is proving to be a historic tragedy in many aspects, including health, social, and economic, particularly in the hardest-hit countries, such as India, China, Italy, Iran, and the United States (Di Gennaro et al., 2020). Isolation, quarantine, social distancing, and community containment were all implemented immediately (Hopman, Allegranzi, & Mehtar, 2020). Social distancing, lockdown tactics, improved diagnosis and treatment, and limitations on mass gatherings are all being utilized to try to halt the virus's spread in the affected nations (Gautam, 2020). Sexual and reproductive health in low- and middle-income nations is expected to suffer as a result of the outbreak's load on health systems (Riley, Sully, Ahmed, & Biddlecom, 2020). Local or national lockdowns that compel health services to close if they are not judged essential, as well as the consequences of physical isolation, travel limitations, and economic hardship, will have an impact on sexual and reproductive health (WHO). To stop the virus from spreading, many governments are restricting people's movements (Riley et al., 2020). Projections suggest that during COVID-19 there could be 28,000 additional maternal deaths and 168,000 additional infant deaths, with complications affecting 1.7 million mothers giving birth and 2.6 million infants (Riley et al., 2020). As of May 1, 2021, there had been over 153 million verified COVID-19 infections and 3.2 million deaths (WHO). When compared to middle- or low-income nations, some industrialized countries with a greater quality of life had higher COVID-19 mortality rates (Singh & Misra, 2020).
COVID-19 severity has been linked to metabolic and chronic conditions such as hypertension, diabetes, obesity, cardiovascular disorders, cerebrovascular accident, chronic obstructive pulmonary disease, asthma, renal problems, and cancer (Singh & Misra, 2020); these comorbidities significantly increase severity and fatality (Li et al., 2020). All persons who provide or assist in the delivery of health services or the operation of healthcare facilities are considered part of the health workforce (Centre for Health Workforce, 2021). Doctors, nurses, dental hygienists, psychiatrists, creative arts therapists, dietitians and nutritionists, patient care coordinators, and others make up the health workforce (Centre for Health Workforce, 2021). Sudden changes in work circumstances and strain are likely to significantly influence physicians' long-term health outcomes during COVID-19 (Moazzami, Razavi-Khorasani, Dooghaie, Farokhi, & Rezaei, 2020). Globally, healthcare workers have responded to the task of treating COVID-19 patients, perhaps at great expense to their health and well-being (Billings, Ching, Gkofa, Greene, & Bloomfield, 2020). COVID-19 harmed persons who already had anxiety or mood issues (Asmundson et al., 2020). The main sources of anxiety were confusion about COVID-19 treatment guidelines, insufficient resources, especially personal protective equipment, and the possibility of transmission to loved ones at home. During COVID-19, the most common indications of stress were anxiety, physical tiredness, and sleep difficulties (Asmundson et al., 2020). During pandemics, medical personnel had to cope with a higher number of patients who died at a high rate in a stressful setting (Billings et al., 2020). High-stress occupations, combined with the special demands of the current COVID-19 crisis, have undeniably raised the risk of distress, burnout, melancholy, drug and alcohol addiction, and suicide among frontline healthcare professionals worldwide (Billings et al., 2020).
During these trying circumstances, healthcare personnel are at a higher risk of burnout, which could have a substantial influence on their patients' health and care (Fitzpatrick, Patterson, Morley, Stoltzfus, & Stankewicz, 2020). Physician tiredness affects patient care and satisfaction, as well as the quality of care provided (Shanafelt & Noseworthy, 2017). --- REVIEW OF LITERATURE In a study of 407 U.S. neurosurgeons, Khalafallah et al. (2020) analyzed the impact of the pandemic on burnout and job fulfilment; the paper correlates emotional tiredness, alienation, and personal success with burnout and professional contentment. According to the data, during the COVID-19 outbreak, burnout was tied to future earnings, which were uncertain during COVID-19. Because of the reduced work hours during the epidemic, professional life would deteriorate. According to the study, managing the controllable stressors reported among neurosurgeons during the pandemic assisted in reducing burnout and increasing job satisfaction. Jiménez-Labaig et al. (2021) looked into the burnout levels of 243 Spanish oncologists, as well as the consequences of the pandemic on their work and health. This article quantifies burnout levels using three dimensions (exhaustion, cynicism, and efficacy) and classifies burnout into five burnout profile groups for analysis: burnout, ineffective, engaged, overextended, and disengaged. According to the findings, creating a healthy work-life balance, having access to support systems, and taking enough vacation time can all help to minimize burnout. During pandemics, oncologists experience worry and moderate sadness; as a result, young European oncologists frequently experience burnout. Arkanudin and Rupita (2021) investigated the perceptions of female nurses in Indonesia on burnout in their performance and integrity of services during the ongoing COVID-19 epidemic.
The descriptive qualitative methodology was employed in this article. Burnout hurts performance and service quality, according to the findings. Handling COVID-19 patients causes anxiety, concern, and exhaustion. Work overload, work stress, poor work performance, anxiety, and worry are all factors that contribute to nursing burnout. Wu et al. (2020) conducted a study on 190 Chinese medical workers to assess the prevalence of burnout among physicians and nurses working in frontline wards versus those working in usual wards. The descriptive statistics method was utilized in this article. Despite dealing directly with sick patients, frontline wards showed a considerably lower rate of burnout and were also less concerned about becoming infected than the usual-wards group. Frontline workers may have felt more in control of their situation, which can help prevent burnout. Torrente et al. (2021) looked into the same topic of frontline and routine ward burnout during COVID-19, analyzing 674 Spanish healthcare experts. The descriptive-analytic method was applied to doctors, nurses, nursing assistants, and emergency healthcare technicians; this research sought to find out how common burnout is among frontline hospital staff in Spain. According to this study, professionals who worked in COVID-19 frontline wards had twice the risk of burnout as those working in their normal wards. Women had much more burnout than men owing to fear of self-infection and concerns about performance and the quality of care provided to patients. For young women working on the COVID-19 frontline during the pandemic outbreak, weariness is a major issue. Abdelhafiz et al.
(2020) analyzed 220 Egyptian doctors to determine the occurrence of burnout syndrome and related risk factors in a sample of Egyptian physicians. According to the findings, the majority of respondents had lower levels of personal success, while a smaller minority had high levels of depersonalization and emotional weariness associated with burnout syndrome. Female gender was found to be associated with higher levels of emotional tiredness, and infection with COVID-19, or death from it among coworkers or family, was linked to higher emotional exhaustion and lower personal achievement. Additionally, Kotera, Maxwell-Jones, Edwards, and Knutton (2021) conducted a study on burnout in which burnout, self-compassion, work-life balance, and strain were investigated in 110 people in the United Kingdom. This study indicated that burnout was negatively correlated with age, self-compassion, and work-life balance, and substantially correlated with weekly working hours and pressure. Self-compassion weakened the relationship between work-life balance and emotional exhaustion. According to this article, encouraging work-life balance and self-compassion would significantly lower burnout among psychotherapists. In order to investigate the prevalence and severity of burnout, the factors that contribute to it, and the impact of the coronavirus (COVID-19) on burnout syndrome, Dinibutun (2020) conducted a descriptive study on 200 Turkish physicians. The poll found that physicians had low levels of depersonalization and personal achievement and a medium level of emotional exhaustion. Burnout affects both married and single people, men and women. The degree of financial fulfilment a person experiences has minimal bearing on burnout. Kelker et al. (2021) conducted a study on 213 emergency medicine practitioners in the US.
In the early stages of the COVID-19 epidemic, this study focused on the characteristics and needs of emergency physicians and advanced practice clinicians, including their well-being, resilience, and burnout. The majority of frontline emergency medicine providers experienced considerable levels of stress, worry, panic, safety worries, and relationship stress as a result of COVID-19, despite their resiliency. The study found that COVID-19 also increased the stress on their relationships. Personal safety, the impact on dependent care, interpersonal conflict, increased job demands, and feelings of isolation have all been linked to COVID-related burnout. Maqsood et al. (2021) assessed the quality of work-life (QWL) of the intensive care unit (ICU) and emergency unit workforce in a descriptive study of 290 Saudi Arabian healthcare workers. The quality of life among healthcare workers was poor during the COVID-19 pandemic, according to this study. Extra working hours were linked to reduced quality of work-life, whereas demographic factors were linked to higher quality of work-life. Extra working hours and direct interaction with COVID-19 patients did not affect healthcare workers' work-life balance. Osita, Onyekwele, Idigo, and Eze (2020) studied 342 Nigerian health workers in a descriptive study. The focus of this research was on work-life balance and employee performance, as well as the impact of workload on service quality. Workload affects the quality of service provided by health personnel, according to these researchers. Another exploratory longitudinal study revealed an increase in psychological discomfort among frontline employees. Autonomy and expertise in the workplace and work-related need satisfaction were linked to lower levels of psychological anguish, whereas frustration was linked to higher levels of mental agony.
Need fulfillment was shown to be inversely connected to psychological discomfort, but need frustration was found to be positively related. Khalafallah et al. (2020) worked on COVID-19 workflow changes, personal and professional stress, job satisfaction, and burnout. In order to evaluate medical burnout, the authors used the Abbreviated Maslach Burnout Inventory (AMBI). Three categories (emotional exhaustion, depersonalization, and personal accomplishment) were used to group the nine questions. Bivariate analysis was used by the authors to identify potential risk factors for emotional exhaustion, depersonalization, personal accomplishment, burnout, and job satisfaction. Burnout and work satisfaction were examined using a multivariate binary logistic regression analysis. A study by Maqsood et al. (2021) examined the importance of the quality of work-life balance for healthcare workers in intensive care units. The authors used the WHOQOL-BREF instrument with permission from the World Health Organization. In this study, the authors also used the last-observation-carried-forward technique. The determinants of quality of life were evaluated using hierarchical regression analysis, and a questionnaire was used to assess mental health. To investigate differences between men and women, the chi-square test was used for categorical data while Student's t-test was used for quantitative variables. --- OBJECTIVE This is a literature review-based article, so the authors mainly focus on the following area. • To observe and review the work-life condition of healthcare employees by reviewing related literature. --- METHODOLOGY The study is based on a case study of healthcare employees' situations and problems during COVID-19, drawing on 30 articles by various researchers. The problem presented above leads to a considerable need to conduct studies in the work-life condition area.
The work has been concentrated on understanding the work-life issues of the current healthcare workforce. --- FINDINGS The recent coronavirus (COVID-19) outbreak has disrupted clinical workflow and healthcare delivery. Following a study of the selected literature, the authors offer an overview of the current state of healthcare employees. During COVID-19, a large number of healthcare personnel experienced burnout. The World Health Organization defines burnout as feelings of energy depletion or tiredness, increasing mental detachment from one's employment, negativism or cynicism about one's career, and a lack of professional efficacy. To avoid burnout, healthcare workers should maintain a good work-life balance, have access to support systems, and take enough vacation time. Job overload, workplace stress, poor job performance, anxiety, worry, weekly working hours, pressure, safety concerns, effects on dependent care, and relationship strain are the most common causes of healthcare staff burnout. Burnout is common among young European oncologists. Burnout hurts performance and service quality. Women were more likely than men to experience burnout owing to concerns about self-infection, performance, and the quality of care offered to patients, as well as depression, worry, stress, and emotional weariness. Physicians who chose their profession deliberately experience less burnout than those who did not. Depression, suicidal thoughts, and burnout continue to afflict physicians of all specializations. Work-life balance and self-compassion would be extremely advantageous in minimizing burnout among psychotherapists. To reconcile the tension between job and family, a high level of self-efficacy is required. Those who had higher family-to-work conflict had worse emotional intelligence and self-efficacy overall. Self-efficacy is linked to emotional intelligence. Table 1 presents a summary of the articles. --- RECOMMENDATION The relevant authorities should change the healthcare policy.
The health of the healthcare workforce needs to be monitored regularly and preventative tactics applied, because they are the frontline workers. Some studies found that equipment was not sufficient for providing healthcare services, so an adequate level of equipment and necessary assistance should be provided to healthcare personnel. Frontline workers faced a huge workload during COVID-19, so the government and authorities should recruit more healthcare workers to reduce the workload. The government should implement new healthcare policies to avoid burnout of healthcare workforces and review the current healthcare policies. --- CONCLUSION Our research shows that the pandemic's uncertainty in terms of healthcare workforce policy, adequate equipment, job security, depression, future income, and family support is strongly linked to burnout and work-life balance. Workload, worry, fear of harm, uncertainty about future earnings, panic attacks, sleep disturbance, job stress, fear, and exhaustion are all linked to physical and mental health problems. Professional and social life patterns have been altered by COVID-19; as a result, it affects both personal and professional life. The healthcare profession is currently under a lot of stress as a result of the higher number of COVID-19-affected patients. The medical staff lacked the required instruments as well as psychological support. As a result, to retain healthcare personnel, organizations and governments should monitor and provide this vital help. During COVID-19, burnout was at an all-time high in the health workforce. According to the authors of this article, healthcare workers require both incentive help and appropriate equipment. Quality of work, income, and welfare policies may all contribute to a pleasant and easy working environment. Life satisfaction increases as job satisfaction and work-life balance improve. Workload, stress, and pressure at work, as well as a lack of participation from superiors and coworkers, all lead to job dissatisfaction.
As a result, the government and other institutions should modify their healthcare policies. In the healthcare business, new policies must be implemented. More healthcare staff are required to keep pace with the growing number of COVID-19 patients. --- Data Availability Statement: The corresponding author can provide the supporting data of this study upon a reasonable request. --- Competing Interests: The authors declare that they have no competing interests. Authors' Contributions: All authors contributed equally to the conception and design of the study. All authors have read and agreed to the published version of the manuscript.
The COVID-19 epidemic is proving to be an unparalleled disaster in all facets, including health, sociological, economic, and financial. Although it may come at a substantial cost to their health and well-being, health practitioners all over the world have risen to the challenge of treating COVID-19 patients. The primary goal of this research is to examine and evaluate the work-life balance of healthcare personnel during pandemics. This study conducted a systematic evaluation to determine the current situation of health workforces and to assess the impact of COVID-19 on them. The review began with 70 articles; after a brief evaluation of these publications, 30 articles were chosen for the research based on area and analysis. After examining all of the publications, the study found that the primary causes of depression, burnout, and suicidal thoughts include workload, anxiety, worry for safety, uncertainty about future earnings, panic attacks, sleep disturbance, job stress, exhaustion, and dread. These factors affect workers' physical and mental well-being and, ultimately, their work-life balance and service quality. Future studies could include additional publications for better results and a review of other sectors. The government and organizations should provide more training, organizational assistance, support for healthcare practitioners' families, PPE, and mental health services; regularly monitor the health of healthcare workers; and employ preventative measures. To alleviate work-related stress, higher-level management must recruit extra health personnel. Contribution/Originality: This study of the literature focuses on several papers regarding the work-life balance issues faced by the medical workforce during the COVID-19 epidemic. It draws attention to the challenges experienced by healthcare personnel, such as their hectic schedules, burnout, and insufficient support mechanisms.
The report points out how crucial it is to address these challenges to ensure the workforce's performance and well-being.
Introduction To achieve better well-being for all, at all ages, by 2030, the "Sustainable Development Goals (SDGs)" agenda was endorsed by the United Nations in 2015. No poverty (Goal 1), good health and well-being (Goal 3) and quality education (Goal 4) were highlighted among the seventeen goals [1]. Poverty and illiteracy commonly limit individuals' access to public health care, leading to poor hygiene and illness. Moreover, poverty, illiteracy and illness form a vicious cycle of unhealthy living. Therefore, providing equitable access to health care, along with quality education and rising economic standards, should be among the strategies for improving the well-being of individuals in a society. In Thailand, the national "Universal Health Coverage (UHC)" payment scheme for health services was launched in 2002 so that all Thais could access the public health care system equitably from birth until death. The UHC comprises the various aspects that complete the health service loop, i.e., health promotion, prevention, treatment and rehabilitation, which together aim at healthy lives for all in the country. Moreover, other public services necessary for daily living (i.e., electricity, piped water and sanitation), standard education, occupational training, employment assistance, etc., should be equitably available to the people of a nation. However, there are gaps in the nationwide coverage of the UHC program, as well as of other public facilities, in Thailand, especially among minority groups or marginalized peoples, migrant workers, nomads, etc. Inaccessibility to the public health care system, and a lack of the health knowledge needed to manage their own health, make these people highly vulnerable to unhealthy living conditions and to acquiring either communicable or non-communicable diseases (NCDs), most of which are preventable.
Providing accessible health services and other facilities necessary for better living conditions to these special groups of people should be systematically organized and accepted by the target groups themselves. The "Orang Asali" (OA), meaning "the first people" or "original people", are a group of indigenous people who have lived dispersed throughout the Malay peninsula and southernmost Thailand, where it borders Malaysia, for 25,000 to 60,000 years [2,3]. According to anthropological information, the OA comprise three ethnically different groups, i.e., the Senoi, Proto-Malay (or Aboriginal Malay) and Negrito [4]. The Senoi are the largest group of OA in Malaysia, while the OA in Thailand are mostly Negrito. In Thailand, the OA live primarily in two mountain ranges: the Banthut mountain range in the Phattalung, Satun and Trang provinces, and the Sunkalakiri mountain range in the Yala and Narathiwas provinces. The latter group is close to the Thai-Malaysia border; therefore, their cultural beliefs and lifestyles are similar to those of the OA in Malaysia. The available research related to the OA in Thailand to date mainly involves anthropology, social status and lifestyle rather than living conditions, health care services and education. According to a systematic review, the OA in Malaysia commonly suffered from malnutrition, lower growth rates in children, soil-transmitted helminths, pulmonary diseases and cardiometabolic diseases [5]. These acquired diseases also occur among the OA in Thailand owing to similar living conditions and environments. To promote secure and healthy living conditions among the OA in Thailand, various interventions have been applied, such as providing these groups with Thai citizenship and access to standard education, especially for school-aged children; health education and health promotion programs; and resettlement areas prepared for their permanent residence.
According to the policy of the Ministry of the Interior of Thailand (MOI-T), the OA in Thailand are considered Thai citizens, as other Thais are. Numerous operational plans have been deployed to improve their living conditions and increase their ability to access all public services and facilities. In the public health sector, the provision of equitable access to public health services is an aim of health care system management for the OA, based on respect for their human rights, as for other Thais, and in response to one of the SDGs. The UHC is a key measure to eliminate inequitable access to health care services [6]. For some time, the OA in the study area have received basic living support, including medical care from the local government when they requested it. We expect that the current registration program for the OA as Thais will solidify a sustainable life-supporting system for them. The registered OA will then be able to access Thai public services, as other Thai citizens are able to. Furthermore, the outcomes of and progress in the improvement of their living conditions can be followed systematically. Herein, we describe a pilot program of actions to promote the OA's voluntary adoption of public services and facilities, including public health care services, among a group of OA living in the Chanae district of Narathiwas province. --- Material and Methods --- Study Population and Setting This was a qualitative study using semi-structured interviews for data acquisition. The study participants were nine OA leaders and their representatives selected by the OA villagers from the Toapaku and Biyis villages (OA); five local governmental personnel working in the offices of civil registration, education, agriculture and public health (GP); and six local Thai community leaders or associate leaders (TC), e.g., the heads of Thai local villages, the heads of sub-district offices, religious leaders, etc.
We selected as the study sites two of the five OA villages in the area in which the villagers had settled permanently. Two weeks after the research information, objectives and process had been clearly described to the OA villagers via translators, verbal consent was obtained voluntarily from the OA villagers, because they are not able to understand spoken or written Thai. Additionally, the GP and TC groups were informed of the study process, and written consents were obtained. The whole research process was conducted in the OA settlement areas and the local governmental offices in Chanae District, Narathiwas province, with the support of the Southern Border Provinces Administrative Center (SBPAC), which acted as the coordinator for the related local governmental agencies. --- Preparation for Data Collection After ethical approval and consent from the study participants were obtained, the research team began gathering preliminary information regarding the residential locations and environments of the two study OA villages, their usual lifestyle and, significantly, their willingness to adopt Thai citizenship. On the official side, the relevant Thai laws or regulations and the practical guidelines for the verification of the OA as Thai people were reviewed. The preliminary information included an initial interview with the GP and TC groups regarding what had been carried out previously and the associated outcomes. Then, the in-depth interview questions were designed and tested for content validity by three experts in anthropology, public health and qualitative research. The questions were divided into three sections according to the research participants, i.e., the OA, GP and TC groups. We used translators who understood the OA spoken language as assistants in data collection. We visited and interviewed the OA participants in their homes, where we spent an average of two hours to complete each interview.
We spent an average of one hour interviewing the GP and TC participants at their offices. The interview content was audio-recorded for later review and validation. The questions used for the interviews with the study participants were as follows: For the OA: After receiving the information from local Thai officers, are you willing to adopt Thai citizenship, and why? After receiving the information from local Thai officers, do you understand your rights and responsibilities after you become Thai? In the past, how did you receive information about the registration process? --- Data Analysis Before we started the analysis, the GP and TC reviewed and validated their in-depth interview content themselves, while an independent translator was used to ensure the correctness of the translation of the OA's interview content. We performed data analysis following the "Thematic Analysis" principle [7,8], which uses six steps: (1) data familiarization and writing familiarization notes, (2) systematic data coding, (3) generating initial themes from the coded and collated data, (4) developing and reviewing themes, (5) refining, defining and naming the themes, and (6) writing the report. After the analysis, an action plan was co-designed by the study team staff, GP and TC based on the results of the thematic analysis, and the designed action plan was implemented among the OA study participants. We performed a short-term outcome evaluation, and a long-term evaluation is planned for the future (Figure 1). --- Ethical Considerations Ethical approval for the study was granted by the Ethics Committee of the Public Policy Institute, Prince of Songkla University (EC code: 008/64, date of approval 10/06/2021). We strictly followed the 1964 Declaration of Helsinki, its amendments and related guidelines for the ethical conduct of research studies. All the participants' identifiable information was completely anonymized.
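The bookkeeping at the heart of steps (2) and (3) of the thematic analysis, attaching codes to interview excerpts and then collating the coded data into candidate themes, can be sketched in a few lines of code. This is a minimal illustration only; the excerpts and code labels below are hypothetical examples, not study data, and in practice this work is usually done with qualitative-analysis software.

```python
from collections import defaultdict

# Step 2 (systematic coding): each interview excerpt is tagged with one
# or more codes. Excerpts and code labels here are invented illustrations.
coded_excerpts = [
    ("We would like to have adequate food for our kids.", ["food_insecurity"]),
    ("Language was a significant barrier.", ["communication_barrier"]),
    ("He had no UHC support to pay the treatment cost.", ["healthcare_access"]),
]

def collate_by_code(excerpts):
    """Step 3 (initial themes): collate excerpts under each code so that
    related quotes can be reviewed together as candidate themes."""
    collated = defaultdict(list)
    for text, codes in excerpts:
        for code in codes:
            collated[code].append(text)
    return dict(collated)

themes = collate_by_code(coded_excerpts)
print(sorted(themes))
# -> ['communication_barrier', 'food_insecurity', 'healthcare_access']
```

Steps (4) to (6) — developing, reviewing and naming the themes — remain interpretive work carried out by the researchers on the collated output.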
--- Results --- Study Participants' Characteristics We enrolled nine OA leaders and their representatives selected by the villagers of the Toapaku and Biyis villages, five local governmental personnel (GP) and six local Thai community leaders or associates (TC) of the Chanae District, Narathiwas province. The characteristics of the study participants are shown in Table 1. --- Preliminary Information from the Initial Survey Initially, we interviewed the GP and TC groups on three topics for preliminary information before the in-depth interviews and subsequent action planning were carried out. The aims of this preliminary interview were to evaluate the preparedness of the study participants from the Thai official sector, who would be involved in the process of planning further action, and to review the current Thai regulations. The topics of the preliminary interview included the following. --- The Settlement Areas and the Lives of the OA in the Study Area Overall, there were five OA villages in the study area; the villagers in three of them lived nomadically by hunting or harvesting natural forest products on the mountain, while the remaining two groups, i.e., the Toapaku and Biyis villages, had permanent settlements. The Toapaku village had six households with 32 members, while the Biyis village had five households with 27 members.
The leaders of both villages were males whose leadership was derived from their ancestors. They spoke OA, Malay or, less frequently, the Thai language. The preliminary information obtained from the talks with the GP and TC groups in the area included the following: "The OA have lived along the Dusongyor mountain range in Chanae district for at least 200-300 years. During the earlier days, the forests on the mountains were rich with many natural products adequate for their household use. They usually found a new place to settle when the former settlement place no longer supplied enough food and water for living. --- " [TCa1] "There are 2 groups of OA who reside permanently in this area with a total of 11 households and 59 members. They usually live in mountainous areas where a stream flows. Their houses, called "Tub" (in Thai), are simple and commonly built from bamboo and local forest woods. Their houses have a floor high off the ground and the roofs are made of a kind of palm leaves easily found in their living areas. Each house contains only one family. They prefer to live and move to a new settlement together in group." [GPb1] "These two OA groups have settled in their current living places for at least 1 year. They usually harvest or hunt forest products or wild animals only enough for their consumption (not for commercial purposes) during the daytime and return home at night. This is not like in the past when they regularly moved to a new place every 9-10 days when the harvestable forest products became scarce." [TCa2] "The building styles of the houses of those who settle permanently are different from the styles of those who migrate from place to place in the forest. The settled houses are built with more stability. They have high raised floors, stable wooden support poles and roofs. The "Tubs" of the migratory OA groups are simply built using bamboo, forest wood and leaves easily found in their living areas, and the houses have no walls. 
The building style of the settled houses seems to indicate that they intend to stay permanently in this place." [TCa1] --- Current Thai Law and Regulations and Previous Experience of Granting Thai Citizenship to OA in Other Provinces The district head governor and his staff followed the relevant policies of the MOI-T and studied the laws and regulations applicable to this issue. They set up legal pathways for the OA to receive Thai citizenship. Initially, the OA were clearly informed about the steps required to obtain Thai citizenship, and about their rights and responsibilities after becoming Thai citizens according to Thai law. Significantly, it was emphasized to the OA that their acceptance of Thai citizenship was voluntary, provided they fulfilled the required legal criteria for becoming Thai. The local governor's teams also studied, as a model, a previous successful project of verifying and granting Thai citizenship to OA carried out in the Betong district of Yala province. "We follow the policy of the MOI-T that OA in Thailand are regarded as Thai citizens. They have the same freedoms and rights to access government facilities and support as other Thais. Therefore, we try to conduct the process of granting Thai citizenship to the OA and have them legally registered as Thais according to their willingness." [GPa3] "We studied the experience of the local government in Betong to deal with the status of the OA there and found that they applied the "Regulations for Civil Registration" of the Department of Governance, Ministry of Interior (5th revision, 2008) to handle any problems. In Betong, the local governor's team traced an OA individual's family tree, in combination with confirmations from local Thai witnesses that the OA had grown up and lived in the area for a long time, as criteria for the approval of Thai citizenship for the OA.
Finally, the OA in Betong were registered into the Thai citizen list, and Thai identification cards (IDC) and numbers (IDN) were provided to them. We plan to use the practices in Betong as a model for our plan too." [GPb2] "Historical and anthropological information confirm that the OA have settled in this area for many thousands of years. We observe their physical characteristics, living conditions, livelihoods and languages, which are compatible with the original OA described in both Thai and western historical records, supporting their long presence in southernmost Thailand." [GPb1] --- Evaluation of the OA's Willingness to Adopt Thai Citizenship and Barriers to the Verification Process for the OA as Thais Two months prior to our interview, the district governor's team staff evaluated the OA's willingness to adopt Thai nationality by informing them of their rights and responsibilities as Thais before they could decide freely. At the same time, they prepared the OA for our research team to perform the in-depth interviews for data collection. "We informed the OA participants about the study and asked about their willingness to adopt Thai citizenship, starting with those who lived in permanent settlements, as they were easy to contact and knew much about local Thai living conditions and cultural practices. For the OA who regularly migrate for resettlement on the mountain range, it was very difficult to get in touch with them." [GPa1] "Language was a significant barrier to providing clear information to the OA. We assumed that there were some possible misunderstandings between the district governor's staff and the OA about the citizenship registration process and their answers regarding their willingness to be registered as Thais. Hence, we had to contact the OA via persons whom they relied on and who understood the OA language well." [GPb1] "The OA in this area rely on their employers, who have hired them for casual labor for a long time.
Apart from the employers, Thai village leaders or their assistants who usually support the OA can communicate with them well. We asked these persons to help the district governor's team by contacting and making an appointment with the OA before our visit to explain the details of the information to the OA. Every sector of the district government team, including representatives of Thai civil registration, district land management, agriculture, public health, education, etc., was prepared to respond to relevant queries from the OA." [TCa1] --- In-Depth Interview Results, the Developed Action Plan and Actions Performed The research team, with the assistance of the local governor's staff, visited the OA who lived permanently on the Dusongyor mountain to carry out in-depth interviews for data collection. We interviewed the OA regarding their living conditions and health care, education and other public services they required. We once again explained the rights and responsibilities they would have after they were registered as Thai citizens before their willingness to accept the offer was confirmed. We first traced the evidence to confirm their longtime settlement and their relative links with other OA members living in this area. If the OA fulfilled these criteria according to the MOI-T's policies and regulations, the civil registration process as Thais was carried out, and an IDC and IDN were eventually issued to them. The district government strictly followed the guidelines for the civil registration process issued by the MOI-T and strongly emphasized the preservation of the OA's traditional ways of living despite their new nationality as Thai people. --- Visiting the OA Living Areas to Ask About Their Willingness We found that the villagers of the two villages were willing to be registered as Thais and to comply with Thai laws.
They understood their rights as granted by Thai officials and their obligatory responsibilities to Thai society, whilst their traditional ways of living would be preserved. Their reasons for the adoption of Thai citizenship were that their children could attend school, and that they could receive health care and other public services, participate in health promotion programs, and receive an adequate food supply. Medical treatment among the OA depended on ancestral practices. For example, every pregnant woman went through childbirth naturally with assistance from a village midwife, without a prenatal evaluation of maternal and fetal risks. No vaccinations for newborns or the aged were provided. Although they had experience with the medicinal properties of many natural products used as medicines, many complicated diseases could not be treated successfully. "We would like to have adequate food for our kids, as the harvestable forest products have progressively diminished over the years. On some days we catch or trap no wild animals to cook for our kids." [OAa1] "The plants and wild animals in the forest have reduced in number from previous years. Although we try to plant cassava trees to collect their roots, the harvestable products are smaller than before. So, it is necessary for us to settle in permanent living areas and become employees instead." [OAa3] "Sometimes, only one wild cock is caught for our food. So, we cook it for our kids first. The adults have to reduce or miss their meals and wait for other cooked foods." [OAb1] "The Southern Border Provinces Administrative Center (SBPAC) has followed the living conditions of the OA for a long time. Because they live on high mountain ranges, travelling to their living areas to provide information about various legal and social regulations or other necessary social or living support is very troublesome. In the initial visits of the district governor's staff, the OA reported their willingness to be registered as Thai."
"The SBPAC has had an action plan to provide Thai citizenship for the OA since 2019. The actions planned are registration of the OA as Thai nationals; providing them with Thai IDCs and citizen IDNs as well as other welfare supports, e.g., UHC, monthly payments for the aged and newborns, etc.; and improving their living conditions and livelihoods, children's education, and vocational training. All the actions are based on the principle of obtaining equitable living conditions with other Thais and compliance with current Thai regulations and laws, whilst their traditional lifestyles are preserved. We asked for their cooperation in forest preservation." [GPa2] "Her Royal Highness Princess Mahachakri Sirindhorn gave us the idea of helping the OA to improve their living conditions while retaining their identity through their traditional ways of living and cultural practices. In cooperation with the "Supporting the Minority of Orang Asali Network", the SBPAC facilitates the process of providing the OA with Thai nationality and welfare support, and of improving the life skills necessary for modern living conditions." [GPa1] "In our talks, we (the OA) recalled an event in which a man living in a village fell from a tree, breaking his pelvic bone, and required treatment in hospital, but he had no Universal Health Coverage payment support to pay the treatment cost. So, we think it will be better for us to decide to be registered in the Thai civil registration list so that the Universal Health Coverage payment scheme is open to us, and the cost of treatment will be covered by this payment scheme when it is necessary for us to visit a hospital for treatment."
[OAb2] --- Clearance of Legal Issues and Preparation for Providing the OA with Thai Nationality To spare the OA the expense and time of travelling to the various district offices to complete the verification and registration steps themselves, the district governor's team and associated local governmental agencies visited the OA settlements to complete the process, after which the Thai IDCs and IDNs were given to them. "It is very troublesome for the OA to travel to the various district offices to complete the registration process. Our team and other district governmental officers will jointly visit them at their settlement sites again to trace the evidence of their long presence in this area and their relative relationships or family trees, based on the legal standards of the MOI-T, before registering them as Thai people and adding their names to the Thai civil registration list." [GPb2] "We will follow the regulations and legal guidelines for the registration process. When the registration process is completed, we will provide Thai IDCs to any OA aged 7 years and older, according to Thai law." [GPb1] "These OA are voluntarily accepting registration as Thai citizens. Based on the policy of the MOI-T, the verification of long inhabitation in this area and of relative relationships is confirmed by the local Thai community leaders, religious or social activity leaders, or their employers." [GPa3] "We learnt from the successful registration of the OA in Betong, Yala province. In Betong, they traced the evidence confirming the long settlement of the OA in the area and their relative relationships before providing them with Thai citizenship. Our district governor's team, in association with other governmental support and service agencies, will follow the same process carried out in Betong." [GPa2] "In Betong, the local governmental team also added a program for the improvement of living conditions to the registration process for the OA there.
The operations yielded satisfactory outcomes for both the OA and the Thai local governmental officers." [GPb2] "The OA who receive Thai nationality will have IDCs and IDNs starting with the number "5", which means that the holder is a minority person or a foreigner who has been approved for registration in the Thai civil registration list." [GPa2] "The policy and practice guidelines launched by the MOI-T will help the local district governor and associated governmental sectors complete the registration process." [GPa3] "We will provide information regarding the rights that the OA will obtain from the Thai government once they are fully registered as Thais, such as various financial and social welfare programs." [GPb1] "The Thai government asks the OA for their cooperation in forest preservation, while their traditional ways of living will be preserved. The adoption of Thai citizenship will be voluntary, depending on their own decisions. The OA who have settled in the national forest and wild animal preservation zones will be informed of the same practical principles before they can decide to adopt the offers freely. Alternatively, this group of OA can voluntarily resettle in one of the governmental resettlement areas provided for minority people." [GPb2] --- Theme Development After the completion of the study participant interviews, the themes were developed following the thematic analysis discipline, as follows: OA children suffered from malnourishment and illiteracy, which affect their growth and development. e. The OA demand to be allowed to maintain their traditional ways of living with the forest after they are registered as Thai. f. Livelihood assistance, health care and education at all levels, suited to each OA individual's needs, are necessary. g. The OA used ancestral methods to treat the illnesses they experienced. They had no knowledge of disease prevention or health promotion. h.
They were very anxious when they needed to visit a hospital due to their misunderstanding of current medical treatments. --- 2. Studying current Thai laws, regulations and national policies to facilitate the process of the provision of Thai citizenship to the OA by local governmental staff. a. Because the OA are regarded as Thai people, the Ministry of the Interior, Thailand (MOI-T), instituted a policy for the registration of the OA as Thai citizens. b. Related acts and regulations, including guidelines for the verification of the Thai citizenship process, will be reviewed and discussed among the governmental sectors at both the national and local levels. c. The local governmental agencies will collaborate in planning and carrying out the registration process. d. The successful registration process carried out in the Betong district of Yala province was studied as a model. --- 3. The OA are willing to adopt Thai nationality. a. Detailed information regarding the OA's rights and responsibilities as Thai citizens was provided before their voluntary decision. b. Their ethnic identities and lifestyle will be preserved. c. Earning a living, health care access and basic or vocational education for children or adults, respectively, were considered essential for the OA. B. Registration process according to the developed action plan 1. The legal process of verification and registration as Thai people. 2. Registered OA obtain equitable rights and have the same responsibilities as other Thais. 3. Collaborative work of the related local government agencies is a key to successful registration. 4. Living conditions, health services and education are the three main targets for the development of the OA's well-being. 5. Health services under the UHC payment scheme are essential for the OA to access health care. --- 1. The local governmental staff visited the OA living areas on the mountain to ask about their willingness to be registered as Thai people. --- 2.
The OA would like to accept the conditions after registration as Thai. --- 3. The registration to accept Thai citizenship is voluntary. --- 4. Personal verification will be carried out by local government staff based on the MOI-T's policies and regulations. --- 5. Language was an information provision barrier. 6. The registration practice successfully carried out in the Betong district, Yala province, is to be followed. --- 7. Thai IDCs and IDNs will be provided to OA aged 7 years or over, and they will be listed in the Thai civil registration list after completing the verification process. 8. A parcel of land for residence or earning a living will be provided. 9. Health insurance under the UHC payment scheme of the Thai public health system will be provided to the OA. 10. UHC will support payment for the OA to receive health services. 11. OA children will be allowed to attend public schools to study Thai. --- C. Outcome evaluation 1. Immediate after-action evaluation of the registration process outcomes, focusing on improved living conditions, health services accessibility and children's education. 2. Long-term follow-ups and repeated evaluations in the future are planned. 1. The OA's homes were redesigned for hygienic living. --- 2. Follow-ups will be carried out to ensure the OA have received equitable social support and access to welfare programs as Thai citizens. --- 3. The OA's understanding of and access to public health services under the UHC payment scheme, and their satisfaction with the services, will be evaluated. --- 4. Teachers and local Thai community leaders will encourage OA children to attend a primary school. --- 5. The OA parents' and their children's satisfaction with the organized education system will be evaluated.
--- Official Provision of Thai Citizenship to the OA According to the Plan Nearly three months after the survey of the OA's willingness to be registered as Thai and the preparations by the district governmental agencies, the Chanae district governor's team and the associated agencies, having managed the legal issues and planned the registration process, provided Thai IDCs and IDNs to the OA. "In June 2022, Thai citizenship for the OA was approved, and Thai IDCs were provided to 25 and 20 OA aged 7 years and over from the Toapaku and Biyis villages, respectively." "On 31 August 2022, the Yala provincial governor presented the IDCs and the citizenship confirmation documents to the OA who had received approval to be registered as Thais following the registration process. This signifies that the OA have the right to receive various governmental supports, such as health services, education, and monthly financial support for the aged and newborns. The whole process was implemented to eliminate social inequality, based on a Thai national development strategy theme which states that "We will never leave anyone behind."" [GPa3] "The head of the Cooperation Center for Development and Special Activities under the Royal Initiations (CCDSR) under the SBPAC, in cooperation with the Chanae district governor's team, the head officer of the local forest protection office, Narathiwas province, a representative of the Department of Forestry, and representatives from the Sukirin Self-dependent Resettlement Area, visited the OA settlement areas in the forest to help them improve their housing and livelihood options." [GPa1] "The SBPAC has facilitated the registration process for the OA so that they can access public facilities and other public supports, like other Thai people, to raise their basic living standards in health, education, livelihoods, etc."
[GPb1] "The SBPAC is requesting a parcel of land from the Department of Land Management to establish a new resettlement area for the OA, where they can grow crops and raise livestock for adequate household consumption. The project is currently under negotiation among the related agencies." [GPa2] "On 31 August 2022, we (the OA) received the IDCs and other related welfare cards. The community leader gave us the detailed information in the Malay language. We thank the district governor's team for helping us to receive the IDCs and for their visit to our living areas in village No. 7 (Moo 7, in Thai)." [TCb2] "The head of the Cooperation Center for Development and Special Activities under the Royal Initiations (CCDSR) had representatives of the National Health Guarantee Office travel with him to visit the OA. Additionally, the district public health officers and the associated team set up a field meeting for open discussion regarding the OA's rights, after registration, to access public health care services according to item 18 (13) of the National Health Guarantee Act, 2002." [GPa2] "The representatives of the National Health Guarantee Office explained the details of the rights the OA would have after registration to the OA leaders or representatives, who then transferred the information to the individual OA living in the mountain area." [GPb2]
--- Post-Action Short-Term Evaluation
After the registration process for the OA was completed, we carried out a follow-up visit a few weeks later to assess how frequently the OA accessed public support, as well as their satisfaction. We found that the OA were satisfied with the help of the local governmental agencies in improving their well-being and quality of life.
After receiving Thai citizenship, each OA was able to own a parcel of land to plant crops or raise livestock and to access public health care and health promotion services, and their children were able to attend the local primary schools. "The first time we visited the hospital, we were very nervous and felt insecure. However, with the help of a local public health officer who took the injured OA to the hospital, and the cooperation between the SBPAC and the hospital administration staff, the injured OA received treatment free of charge. Initially, the patient was very concerned about the hip surgery advised by the doctor, fearing that the whole flesh on his body would be taken away during the operation." Moreover, informal education and vocational training were provided for OA aged 15 years or older. Scheduled classes were regularly organized in the community-shared building, with 10-20 OA youths attending each class. To assist in the improvement of their living conditions, they were taught to grow vegetables and raise livestock to maintain their food security. Additionally, seeds, baby chickens, baby ducks, etc., were provided. Regarding public health services, thirteen and five OA individuals voluntarily received one and two doses of COVID-19 vaccine, respectively; they responded well to the COVID-19 vaccination campaign. They still preferred to use traditional herbal medicines for initial treatment, except for complicated diseases, for which they were willing to receive modern medical treatment from the Thai community health volunteers, who visited them regularly, or in the district hospital when required. The number of OA utilizing modern medical services increased.
--- Discussion
One principal item in the Thai constitution emphasizes the equitable right of all Thais to access and receive public services or support.
The OA in this study, like other minority peoples in Thailand, have the right to receive public support according to this clause of the Thai constitution. Hence, the verification and registration process for the OA in the Chanae district of Narathiwas province was undertaken. All aspects of the quality of life of the study OA are significantly affected by their migratory living style, which depends on the quantity of natural products that can be harvested, or wild animals caught, for adequate consumption. Good livelihoods for ending hunger and poverty (SDGs 1 and 2), equitable access to health care (SDG 3) and quality education (SDG 4), three of the seventeen SDGs endorsed by the UN, have been prioritized by Thai official agencies as the primary targets for upgrading the living conditions of all Thais, including the OA in this study. Additionally, reducing inequality (SDG 10) in access to public support services, with the aim of achieving the three targeted SDGs equitably, was stressed in this project. The verification and registration process for the OA in the current study was the first and principal action, initiating cooperation among the related local governmental agencies under the administration of the SBPAC. The actions performed were based on the thematic principle of establishing equitable access to public support, as other Thai citizens enjoy, without significant disruption of the OA's identity and traditional living. Many previous projects in Thailand and Malaysia involving the resettlement of the OA to new living areas failed because their traditional living styles were abruptly changed by policies implemented without obtaining their agreement beforehand. The abrupt shift from traditional to modernized living conditions adversely affected the OA's traditional ways of life and cultural practices.
Most of the resettled OA soon left the new housing provided by the officials and returned to their previous living sites in the forest. We learned from our previous experiences and were aware that it is necessary to balance the preservation of this ethnic group's identity and traditional living style with officially supported modern living. If the changes were not familiar to the OA, they would reject the offers. Compulsory changes through the provision of certain support programs, although seemingly useful from the provider's perspective, commonly bring about conflict or project failure. For these reasons, the SBPAC first conducted integrative actions by local governmental agencies, surveying the OA's requirements, living styles, cultural beliefs and practices and, especially, their willingness to adopt the registration and development programs provided for the improvement of their well-being. After the legal conditions were fulfilled and the OA's consent to join the program was obtained, the integrative actions were started. The program in this study prioritized improvements in the OA's livelihoods, health care and education as the initial and urgent targets for local governmental support, because these were considered powerful influencers that interactively affect individuals' well-being. It was known that the OA lived by harvesting natural forest products for their daily household consumption. They had no knowledge
of how to plant vegetables or other plants or to raise livestock for their food reserves. Normally, they followed a nomadic lifestyle, migrating to a new location in the mountain range every 7-10 days on average, when the harvestable forest products in their current living area became inadequate for consumption. The increased Thai population and accompanying requirements of land for agriculture and forest industries led to ecological changes in the forests.
Both natural and man-made ecological changes in the forest have had a negative impact on the amount of forest products harvestable by the OA, resulting in an insecure food supply. This is why some of the OA were required to settle permanently in a single location or come down from the hills for labor work in the commercial area of the district. To reduce poverty and food insecurity, in response to SDGs 1 and 2, our program encouraged and supported the OA to settle permanently in locations along the forest margins, as well as teaching them planting and livestock-raising techniques. After the preliminary discussions with the OA and the implementation of the program, we believe that this method of providing social support is suitable for and highly satisfactory to the OA, in that their ancestral lifestyle and beliefs have not been seriously affected. The traditional health beliefs among the OA were based on a strong belief in supernatural powers rather than in their own inner power, self-efficacy or control in managing their own health. In terms of the health locus of control concept, they had a lower belief in an internal health locus of control than in an external one. This kind of belief can adversely affect a person's health [9-13]. Additionally, they lacked the conceptual knowledge necessary to form an appropriate health belief model (HBM) [14] for caring for their own health. This concept was also recently used to explain the lack of compliance with COVID-19 vaccination during the COVID-19 pandemic and vaccination campaign [15,16]. From our interviews with the OA, we learned they were very anxious when discussing modern medical care. Their long-held perception was that attending a hospital led to a dreadful outcome or death. With the help and psychological support of the local public health volunteers, they felt more secure and relaxed.
Apart from medical treatment, we believe that health education under the UHC program will enable them to voluntarily follow disease prevention and health promotion advice. Previous studies have found that improved knowledge, followed by attitudes and practices, together influenced soil-transmitted helminth (STH) infection control among the OA in Malaysia [17,18]. The UHC payment scheme in Thailand covers all aspects of health services, i.e., health promotion, disease prevention, treatment and rehabilitation, and is available to all Thais from birth to death. Local public health volunteers are the first points of contact in the system when accessing health services. The UHC payment scheme ensures that all Thais receive holistic health services equitably. After the OA in this study were successfully registered as Thai citizens, they had the same rights as other Thai people to access UHC programs. A study showed that the OA in Malaysia have shorter life expectancies than the Malay people, with an overall life expectancy of 53 years (54 years for females and 53 years for males) [19]. Common diseases diagnosed in the OA in Malaysia were STH infections, pulmonary diseases, liver diseases and malnutrition, all of which were occasionally severe enough to cause death [5,20]. We expect that the UHC program will be of considerable benefit to the healthy lives of the OA in this study. Education is another target of the action plan, promoted in parallel with improving living conditions and health care. Quality education, whether formal or informal, can help OA children and youths understand Thai or train them vocationally, offering them more career choices in the future. The OA in this study were encouraged to allow their children of school age to attend formal education, while informal (non-school) and vocational education programs were available for adult OA.
Because the OA have only a spoken language and no written language, teaching them the Thai language requires teachers who understand the OA language well. We found that most of the OA parents and children appreciated this offer of educational opportunity. The small sample size is a limitation of this study, caused by the difficulty of travelling to the OA living areas on the hilltops and by their lifestyle of hunting and harvesting in the forest during the daytime. However, we made an effort to include all available OA, including their leaders, in our interviews. Male OA of a comparable age range were predominantly included in the interviews, since they were the main group making a living and leading their family members' lives.
--- Conclusions
"We will never leave anyone behind", a theme of social equality compatible with the UN-endorsed SDGs, was the major concept of the current action plan implemented for the OA communities in this study. We found that the OA were satisfied with the officially provided support. We suggest that any official project changing indigenous people's living conditions, even while aiming for their better well-being, should carefully consider their traditional beliefs and practices. Changing living conditions or implementing obligatory public services in ways that markedly disrupt a minority's ancestral beliefs and lives, and, significantly, without their willingness to adopt these changes as if they shared ownership of the project, commonly results in unfavorable outcomes. Finally, we suggest that long-term follow-up of the OA's access to official services and support programs will elucidate the sustainable benefits of, and satisfaction with, the support among its recipients.
--- Data Availability Statement: The study data and analysis methods are described in the Materials and Methods section of this paper. No data were deposited in external preprint servers.
--- Institutional Review Board Statement: The study was conducted in accordance with the Declaration of Helsinki, and it was reviewed and approved by the Institutional Review Board of the Public Policy Institute, Prince of Songkla University (EC code: 008/64; date 10 June 2021). Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
--- Conflicts of Interest: The authors declare no conflict of interest.
Ending social inequality by 2030 is a goal of the United Nations' endorsed sustainable development agenda. Minority or marginalized people are susceptible to social inequality. This action research qualitatively evaluated the requirements for and barriers to full access to public services of the Orang Asali (OA), a minority people living in the Narathiwas province in southernmost Thailand. With the cooperation of the staff of the Southern Border Provinces Administrative Center (SBPAC), we interviewed the OA, local governmental officers and Thai community leaders regarding the OA's living conditions and health status. Then, an action plan was developed and implemented to raise their living standards with minimal disruption to their traditional cultural beliefs and lifestyle. For systematic follow-ups, a Thai nationality registration process was carried out before the assistance was provided. Living conditions and livelihood opportunities, health care and education were the main targets of the action plan. Universal health coverage (UHC), according to Thai health policy, was applied to OA for holistic health care. The OA were satisfied with the assistance provided to them. While filling the gap of social inequality for the OA is urgent, a balance between the modern and traditional living styles should be carefully considered.
Introduction
In the Brazilian Amazon, fishing involves approximately 368 thousand fishermen and yields about 166,477 tons of fish annually, moving US$ 130 million; the North region was the main contributor to inland-water fishing production in 2009, with the states of Amazonas (71,110 tons) and Pará (42,083 tons) as the main producers in terms of catch volume that year (Brasil, 2010). The Amazon region also has the highest per capita fish consumption, 380 to 600 g/day, and fish is the main protein source of the riverside populations (Lima et al., 2012). Six fishing modalities can be found in the Amazon basin: subsistence, multi-species commercial, mono-species commercial, reservoir fishing, sport fishing and ornamental fishing (Freitas and Rivas, 2006). Commercial and subsistence fishing are the modalities that generate the most jobs and income in the fishing sector of the region (Lima et al., 2012). In addition, ornamental fishing is described as fundamentally important for the local riverside populations, being responsible for their subsistence in several parts of the Amazon (Anjos et al., 2009), with prominence in the Barcelos area (middle Rio Negro) (Oliveira et al., 2016, 2017a, b). The Amazon basin is the main area of extractive ornamental fishing in Brazil, occurring in the middle Xingu, Tapajós, Purus and Juruá Rivers and in the middle Solimões and middle Rio Negro. The middle Rio Negro basin in the State of Amazonas is the most representative area for this activity in the country, exporting about 20 million fish/year and generating about US$ 3 million for the state economy (Chao and Prang, 1997).
The ornamental fishing practiced in this area employed approximately 10 thousand people in Amazonas and was responsible for about 60.0% of the income of the municipalities of Barcelos and Santa Isabel on the Rio Negro, demonstrating the strong connection those areas had with the international ornamental fish trade at that time (Prang, 2001). In recent decades, the ornamental fishing practiced in the middle Rio Negro area has undergone changes in its socioeconomic scenario, influenced by the economic crisis of the last decade, by problems linked to the lack of organization of the local production chain, by the high taxes imposed on exported products, and by the captive breeding of the main species, such as the cardinal tetra Paracheirodon axelrodi and the neon tetra Paracheirodon innesi, in importing markets such as the USA, Europe and Asia. In addition, there is competition with neighboring South American countries, such as Colombia, Venezuela, Ecuador, Guyana and Peru, where species whose sale for ornamental purposes is prohibited in Brazil are offered without any restriction and at prices below those practiced in the national market (Prang, 2001). Such economic events contributed, directly and indirectly, to the reduction in the volume of fish exported from the middle Rio Negro area, leading many piabeiros and artisanal ornamental fishermen to abandon the activity and migrate to new economic activities near the seat of the municipality of Barcelos, such as guiding in sport fishing (Sobreiro, 2016; Ferreira et al., 2017). In the field of ethnoichthyology, many studies have been carried out in the last decade, adopting different themes, in order to describe the different ways local communities use resources and their cognitive knowledge (ecology and classification of organisms).
Studies of the local ecological knowledge of artisanal ornamental fishermen have been developed in the Amazon region, more precisely in the middle Solimões River basin (Amazonas) and in the middle Xingu River (Pará) (Mendonça and Camargo, 2006; Souza and Mendonça, 2009; Carvalho-Júnior et al., 2009; Rossoni et al., 2014; Ramos et al., 2015). In addition, it is important to note the high correlation between the number of ornamental species exploited and the number of ornamental species studied, emphasizing the importance of including this knowledge in local fisheries-management strategies. It is well known that artisanal fishermen have detailed knowledge of ecological, behavioral and fish-classification issues, and that this knowledge both influences and is influenced by fishing practices (Begossi et al., 2016). This reality can be observed in the daily life of artisanal fishermen in the Amazon region: through the cognitive capital (empirical knowledge) acquired over years of experience in the activity (work practices), they are able to choose the best technique, equipment and location (Witkoski et al., 2009; Batista and Lima, 2010). Although ornamental fishing has existed for more than half a century in the middle Rio Negro basin, and is of great economic and social importance for the riverside communities, there is a lack of information on the degree of socioeconomic contribution this activity makes to the communities that depend on it as their main source of sustenance and income, making it necessary to gather information describing the social, ecological and economic aspects of these activities in the area. The dynamics of ornamental fishing changed in a short time and directly affected the fishermen, in addition to the low rate of renewal through the participation of younger fishermen, threatening the transmission of ecological knowledge to future generations.
As a result, we noted an increase in problems related to the production chain and the absence of public authorities from the activity. Ornamental fishing was once treated as one of the main economic activities for the local communities and for the State of Amazonas. Keywords: socioeconomics, artisanal fishermen, ornamental fishing, Barcelos, Amazon. The aim of this study was to describe the socioeconomic profile of the ornamental fish fishermen of the middle Rio Negro area, popularly known as "piabeiros", in the municipality of Barcelos, as well as the scenario of the ornamental fishing practiced locally, pointing out the main fishing seasons, main species, fishing environments, equipment used and capture techniques.
--- Material and Methods
--- Study area
The present study was carried out in the urban and rural areas of the municipality of Barcelos, State of Amazonas (Figure 1). The city of Barcelos was the first capital of the state of Amazonas, in 1758, used as a base for slaving expeditions and later for plant extraction (the rubber cycle) and the installation of agricultural projects for coffee and tobacco cultivation (Machado, 2001). Barcelos is the largest municipality in territorial extension in the State of Amazonas, with 112,450.76 km2; it lies 496 km from the capital, Manaus, with its seat on the right bank of the middle Rio Negro, and has a population of about 26,000 inhabitants (IBGE, 2017). The Mariuá Environmental Protection Area, considered the largest freshwater fluvial archipelago in the world, with about 1,600 islands, is located in this area, as are the Jaú National Park and the Araçá State Park (Inomata and Freitas, 2015).
--- Data collection
Data were obtained through semi-structured interviews, applying questionnaires containing open and closed questions, with artisanal ornamental fish fishermen known locally as "piabeiros" (N = 89), between January and April 2016. The questions were intended to obtain information on the socioeconomic profile (gender, age group, marital status, place of birth, monthly income, education and economic activities carried out in the municipality) and on aspects of the ornamental fishing practiced in the area (seasons, species, environments, equipment and capture techniques, associativism and problems in carrying out the activity). Interviews lasted 30 minutes on average and were conducted individually, with due presentation of the Free and Informed Consent Form (TCI) to the interviewee and prior approval through "Plataforma Brasil" (No. 53847316.6.0000.5015), always using accessible language so that the piabeiros properly understood their role in the study. The Fishing Colony of Barcelos Z-33 was consulted to determine the number of fishermen active in ornamental fishing in the municipality of Barcelos; 135 active fishermen were indicated, 97 men and 38 women (2016). Interview locations in the urban area were determined at random, after the researcher identified himself and invited candidates to participate. For the rural area, the selection was made according to the level of importance that ornamental fishing had for each riverside community, as indicated by key piabeiros at the seat of Barcelos. The present study covered the urban and rural areas of the municipality of Barcelos. Eleven communities were visited in the rural area, among them: Ponta da Terra, Santa Inês, Daracuá, Mulufá, Romão, Elesbão, Bacabal and Jaqueira (Table 1).
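As a sanity check on the sampling described above, the coverage of the N = 89 interviews relative to the 135 fishermen registered with Fishing Colony Z-33 can be computed directly. This is an illustrative sketch, not the authors' code; it uses only the counts reported in the text.

```python
# Figures reported in the Methods: Z-33 register of active ornamental fishermen
registered = {"men": 97, "women": 38}        # active fishermen, 2016
total_registered = sum(registered.values())  # 135
interviewed = 89                             # piabeiros interviewed (N = 89)

# Sampling coverage of the interviewed sample relative to the register
coverage = interviewed / total_registered
print(f"registered = {total_registered}, interviewed = {interviewed}, "
      f"coverage = {coverage:.1%}")  # ~65.9% of the register was interviewed
```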
The field study was carried out in January in the urban area and in April in the rural area (2016), travelling by river in a medium-sized cargo boat departing from the seat of the municipality of Barcelos. The area covered comprises the tributaries most representative of ornamental fishing in the region (the Aracá and Demeni Rivers). Socioeconomic and fishing "piabeiros". Braz. J. Biol., 2020, vol. 80, no. 3, pp. 544-556.
--- Analysis of data
The data obtained were used to construct graphs and tables and were presented through descriptive statistics, with calculation of relative frequencies.
--- Results
The frequency of interviewees in the communities is shown in Table 1, with the highest percentage of respondents in the urban area. The place of birth of the interviewees from the municipality of Barcelos is shown in Table 2, with the city of Barcelos recording the highest percentage. The area covered by the study was the municipality of Barcelos, with the communities of Ponta da Terra, Santa Inês, Daracuá, Bacabal, Romão and Elesbão (Figure 1). Gender representation is shown in Table 3: the majority of the fishermen interviewed are male. Age and marital status are represented in Figures 2 and 3, respectively; in the urban area, the highest percentage was of fishermen aged over 60 years. Educational level, family income and the economic activities performed by the "piabeiros", comparing the urban and rural areas of the municipality of Barcelos, are shown in Figures 4, 5 and 6, respectively. The ornamental fishing seasons, the main families of ornamental fish caught and marketed, and the fishing environments according to the "piabeiros", comparing the urban and rural areas of the municipality, are shown in Figures 7, 8 and 9, respectively.
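The relative-frequency tabulation described under "Analysis of data" amounts to counting each answer to a closed question and dividing by the number of interviewees. A minimal Python sketch, using a hypothetical urban/rural split of 52 and 37 respondents (consistent with the reported 58.43% and 41.57% of N = 89):

```python
from collections import Counter

def relative_frequencies(responses):
    """Percentage of interviewees giving each answer to one closed question."""
    counts = Counter(responses)
    n = len(responses)
    return {answer: 100 * c / n for answer, c in counts.items()}

# Hypothetical data: 52 urban and 37 rural respondents (N = 89)
responses = ["urban"] * 52 + ["rural"] * 37
freqs = relative_frequencies(responses)
print({k: round(v, 2) for k, v in freqs.items()})  # {'urban': 58.43, 'rural': 41.57}
```

The same helper applies unchanged to any of the questionnaire variables (gender, age group, equipment, etc.), one question at a time.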
Fishing equipment used is represented in Table 4: the rapiché and the cacuri were the instruments most used by the "piabeiros". Associativism and the main problems related to ornamental fishing according to the "piabeiros", comparing the urban and rural areas of the municipality of Barcelos, are shown in Tables 5 and 6, respectively.
--- Discussion
Regarding the frequency of interviewees in the communities, the highest frequency was observed in the seat of the municipality of Barcelos (Table 1), corresponding to a total of 58.43%. On the other hand, some communities presented a smaller number of interviews, such as the community of Jaqueira (2.25%). Although efforts were made to reach the communities of Alalaó and Maqu<unk>, this was not possible because the field activities did not take place during the river flood period, so some "igarapés" still had low water levels. These difficulties in the area were also pointed out by Leme and Begossi (2013) and by Ferreira et al. (2017). The comparative frequency among the interviewees showed that 58.43% of the ornamental fish fishermen live in the urban area, while 41.57% still live in the rural area. Despite this apparent similarity, a dispersion of the rural fishermen can be observed across the communities, with greater prominence of the communities of Ponta da Pedra and Daracuá, which remain active in ornamental fishing. This observation was also registered by Sobreiro (2016) and Ferreira et al. (2017) during investigations of the social and economic profile of the community of Daracuá, demonstrating, as in the present work, the community's importance for ornamental fishing in the area. A low number of families living in these places was also registered. According to the residents, this is due to the decline of ornamental fishing in recent years. From 2008 onwards, many of the old fishermen had to migrate to the seat of the municipality in search of new job opportunities, as well as closer access to health and education services, mainly because many of those residents have underage children. The middle Rio Negro is one of the most sparsely inhabited areas of the Amazon, with low population density, caused by the predominantly sandy soil, which does not favor the development of agriculture, and by the rivers, whose waters have low levels of dissolved nutrients and low productivity. The Rio Negro is popularly known as the "river of hunger". These crucial factors act as limits on the settlement of the population (Leme and Begossi, 2013). Studies by Silva (2007, 2011) report a historical migration of rural communities, starting from 1980, towards the urban centers of the middle Rio Negro area, mainly the seats of Barcelos and Santa Isabel do Rio Negro. This migratory process happened due to the need to search for better jobs and for access to education and health services. Even so, many of those families maintained contact with their native communities for several reasons, related to activities such as fishing, swidden (roçado) cultivation and the collection of forest fruits such as the Brazil nut. We can see that the practice of alternating residences carried out by riverside communities of the middle Rio Negro is nothing new; it has been happening for many years under the influence of environmental (availability of natural resources), economic (offer of jobs and income) and social dynamics (access to health and education services, marriages, school vacations, local conflicts, etc.) (Barra et al., 2013). Most of the interviewed piabeiros were male, both among residents of the urban area (74.51%) and of the rural area (81.08%) (Table 3).
Women were poorly represented in ornamental fishing. Nevertheless, where women do take part, their participation is expressive, since they carry out the work on equal terms with the men. This differs from other fishing modalities in Brazil and in the Amazon, where a gendered division of labour prevails (Fonseca et al., 2016; Palheta et al., 2016): often the woman does not participate directly in the fishing, which is exclusive to men, while women are assigned child care and daily domestic tasks (Santana, 2014). Studies by Carvalho Júnior et al. (2009), Souza and Mendonça (2009) and Rossoni et al. (2014) on ornamental fishing in other areas of the Amazon likewise report little or no participation of women and a strong male predominance. Examples include the ornamental fishery of the Middle Xingu River region (Pará) (N = 60 men), the fishery in the Piagaçu-Purus Sustainable Development Reserve on the lower Purus River (Amazonas) (N = 15 men) and the fishery of the Tefé region (Amazonas), all of which describe male predominance. Regarding women's participation in marine/continental fishing, data from the MPA (Brasil, 2010) show that women represented 40.85% (348,553) of the Brazilian fishermen registered in 2008-2009, with the Northeast region showing the most balanced proportion between genders (46.3% women and 53.7% men). In the state of Amazonas, women represented 40.0% of the fishermen registered in this period.

Regarding age groups, a high percentage (40.38%) of the piabeiros in the urban area were over 60 years old.
Among the piabeiros of the rural area, on the other hand, 35.14% of the fishermen were aged 34 to 38 years (Figure 2). There is thus a tendency towards little renewal of piabeiros in the urban area, since they tend to migrate to other commercial activities, whereas some renewal is perceived in the rural area, owing to the lack of other commercial opportunities there. It was also observed that in the rural area the largest group of piabeiros was aged between 34 and 48 years. In the study by Ferreira et al. (2017), the mean age of ornamental fishermen was 45 years, ranging from 35 to 71 years. In the present study, the mean age of the piabeiros in the urban area was 53.72 years (ranging from 24 to 82 years), while in the rural area it was 47.91 years (from 21 to 76 years of age). The predominance observed among the piabeiros resident in the rural area may reflect the urban/rural division adopted here, since the sample in that earlier study was small, as acknowledged by the authors themselves (Ferreira et al., 2017). The mean ages of the piabeiros in both study areas were above the modal age group of Brazilian professional fishermen, which is 30 to 39 years, representing 26.6% (221,804) of the total (Brasil, 2010). These results show that the ornamental fishery of the Middle Rio Negro is composed of older fishermen, with ages above 40 years.

Most of the piabeiros, both in the rural area (37.54%) and in the urban area (53.85%), were born in Barcelos. In all, 32 different birthplaces were mentioned in the interviews (Table 2). It was common during the interviews for the piabeiros to describe their place of origin as a river or a community.
Part of this characteristic is due to the distance of the communities from the urban areas, in particular the geographical isolation of the region and the great number of water bodies that make access to the urban centres difficult.

Most interviewees declared themselves married (Figure 3). In the Amazon, ornamental fishing is carried out by riverine families through group work, and it is one of the main local economic activities, an alternative for obtaining income and a means of subsistence (Moreau and Coomes, 2008; Carvalho Júnior et al., 2009). Prang (2001), analysing the ornamental fishery of the Middle Rio Negro basin, describes the participation of about 10 thousand families in the capture and transport of ornamental fish. Souza and Mendonça (2009) likewise described the ornamental fishery of the Tefé region, where 87.5% of the ornamental fishermen were married, with families usually composed of six people; in all, 35 families were identified as dependent on ornamental fishing and trade in that region.

The education level of the piabeiros was low, with most having incomplete primary schooling or being illiterate (Figure 4). Alencar and Maia (2011), analysing the distribution of Brazilian fishermen by education across the regions of the country, found that most fishermen have only incomplete primary schooling; in the North region, 82.8% of the fishermen fit this profile. For Lima et al. (2012), the low education level of fishermen in the North region ties them even more closely to fishing, because the lack of qualifications prevents them from obtaining jobs in other, better-paid economic activities. This characterisation becomes more evident when the family income of the piabeiros is analysed.
Most piabeiros fall in the lowest income bracket (less than one minimum wage) (Figure 5). Most of the piabeiros from the rural area declared a family income of around one minimum wage (78.38%) or even less, as also reported by part of the piabeiros from the urban area (41.46%). Incomes of up to three minimum wages were concentrated among the piabeiros of the urban area (39.02%), possibly due to the presence of piabeiro "bosses", who are responsible for buying and selling ornamental fish in Barcelos and supplying the aquarium companies of Manaus (Prang, 2001). According to Silva (2007), besides ornamental fishing, the incomes of many families in the Middle Rio Negro region are supplemented by social benefits from the Federal Government. According to the socio-environmental profile of Barcelos, carried out in 2009-2010 in partnership by the Federation of Indigenous Organizations of the Rio Negro (FOIRN), the Indigenous Association of Barcelos (ASIBA) and the Socio-environmental Institute (ISA), 55.54% of the residents of Barcelos have a fixed income and 62.22% are employed, with public employees representing 28.15% of that total; those studies also report that 48.82% of the families of the municipality receive some type of financial aid from the Federal Government (Barra et al., 2013).

In the present study, most of the interviewees reported having ornamental fishing as their main source of income (61.11%). For the others (38.89%), however, it is no longer the main source, mainly among the piabeiros resident in the seat of Barcelos, where 57.14% declared that they no longer depend on the activity to survive. The opposite scenario was described by the piabeiros of the rural areas, where 78.38% affirmed that ornamental fishing remains their main means of obtaining income.
Many piabeiros end up carrying out other economic activities to complement the family income, especially during the closed season ("defeso") of the cardinal tetra Paracheirodon axelrodi, between May and July (IBAMA, Law n° 28/1992) (Brasil, 1992). The main complementary activities were the fishing of food fish, agriculture (coivara) and guiding for sport fishing; many also carry out occasional work as fish sellers, bricklayers' assistants, artisans, chambermaids or cooks on hotel boats (Figure 6). According to Silva and Begossi (2013), fishing, agriculture, handicrafts and forest extractivism are the main economic activities of the communities of the Middle Rio Negro region. Commercial fishing of food fish was the activity most fishermen said they carry out outside the ornamental-fish export season: 71.43% of the piabeiros of the urban area and 43.59% of those of the rural area. Inomata and Freitas (2015) described fishing as the main source of income and employment in the primary sector, noting that, since artisanal fishing does not demand qualified labour, many fishermen choose this activity. Sobreiro (2016) showed a tendency, in 2011, for ornamental fishermen of both the urban and rural areas to move to other fishing-related activities, particularly the commercial food-fish fishery. This is consistent with the scenario found in the present study, in which the largest share of interviewees declared a preference for working in the food fishery rather than in sport fishing, even though sport fishing has grown expressively in the region, moving as much as US$ 5 million per year, equivalent to 10.0% of the GDP of Barcelos (Barroco and Freitas, 2014).
Many fishermen from the urban area answered that they do not identify with sport fishing, mainly because of the way this work is organised: the activity demands long working hours and more formal communication with the tourists. This was also described by Ferreira et al. (2017), in which the artisanal ornamental fishermen of Barcelos complained about the working conditions to which they are exposed in sport fishing in the region, reporting that they are not treated with respect by the tourists and are forced to work during the hours of strongest sun.

Participation in sport fishing was mentioned by fishermen of the rural area (12.82%), who carry out this activity during the fishing season from October to March (the low-water period). In this period these fishermen are hired mainly for their local ecological knowledge, which helps in choosing the best fishing spots for the tucunaré (Cichla spp.), the main target species of that activity (Barroco and Freitas, 2014).
Besides sport fishing, fishermen of the rural area also work in agriculture (12.82%), cultivating cassava in coivara plots. These fishermen practise ornamental fishing only during part of the year: in the period when ornamental-fish exports are closed, or when the financial returns from the activity are very low, agriculture is intensified (Chao and Prang, 1997). Coivara agriculture is regarded as one of the most important activities for the families of the Rio Negro (Barra et al., 2013). The "roça" (field) is the land-use system of the riverine populations, with a predominance of annual crops (Fraxe et al., 2009); coivara agriculture is based on cutting and burning the forest vegetation so that the nutrients present in the ashes are incorporated into the soil, thus ensuring the success of the crop (Peroni, 2013).

--- Characterization of the ornamental fishing of Barcelos

Ornamental fishing in the Middle Rio Negro basin is directly influenced by the dynamics of the water cycle of the Rio Negro (flood pulse), and the choice of the best time for the activity is left to the fishermen: for the fishermen of the urban area the best period is the ebb, while for the rural fishermen it is the drought. These are the factors that determine the start of the activity (Figure 7). The variation of the water level is one of the main characteristics of this tributary; depending on the annual seasonality and on the place, it can vary from 10 to 12 metres, and these variations form new environments such as lakes, beaches, flooded fields and igapó forests, which serve as shelters and as reproduction and feeding grounds for the aquatic communities (Sioli, 1985; Zeidemann, 2001).
According to data from the Geological Survey of Brazil (CPRM, 2017), the seasonal periods of the Rio Negro comprise the following phases: rising waters (December to May), flood (May to July), ebb (from July until the drought) and drought (October to November). For the piabeiros of the urban area, the best time for the capture of ornamental fish is during the ebb (43.56%) and the rising waters (31.68%); for the piabeiros of the rural areas, the best periods are the drought (35.82%) and the rising waters (32.84%). For both groups, however, the dry period is described as a time of difficult displacement, owing to sandbanks and the geographical isolation of some places. According to Siqueira-Souza et al. (2006), the ebb and the drought are considered the best times for the capture of ornamental fish in the region, because during the flood the fish disperse into the igapó forests in search of food and for reproduction, hindering their capture by the fishermen.

The Rio Negro shows a rich diversity of fish, with approximately 450 species; many are endemic to the basin and still unknown, not yet classified or described. Within this vast number of species, some are used for ornamental purposes (Loebens et al., 2016). Based on the reports and descriptions given by the piabeiros during the interviews, it was possible to identify the main families of ornamental fish traded in the region, ten in all: Characidae, Lebiasinidae, Gasteropelecidae, Cichlidae, Anostomidae, Osteoglossiformes, Loricariidae, Potamotrygonidae, Callichthyidae and Gymnotidae (Figure 8). The piabeiros of the urban area mentioned a larger number of families than the residents of the rural area.
This probably reflects the greater mobility of the urban fishermen, whose fishing grounds are often located in areas far from the seat of the Municipality of Barcelos, as well as their larger fishing infrastructure compared with the rural residents, including the financing of their fishing campaigns by the local ornamental-fish middlemen ("bosses"), with the support of motorised medium-load vessels. The piabeiros of the rural area, in contrast, end up fishing only in areas close to their homes, where the environments often favour the capture of a smaller diversity of ornamental species.

It is also notable that some piabeiros specialise in capturing species of particular families, such as Characidae, Cichlidae, Loricariidae and Potamotrygonidae; this was observed in the urban area and in some communities such as Daracuá. When asked about this specialisation, the main explanation given was the commercial value attached to the species of these groups, which compensates for the expenses and generates a satisfactory profit for the fisherman. Selectivity in ornamental fishing is also observed in other areas of the Amazon: in the Middle Xingu River region (Pará State), for example, despite the existence of about 200 ornamental fish species, fishing ends up concentrating on only 10 species because of their high market value (Carvalho Júnior et al., 2009). In the present study, it can be inferred that ornamental fishing is concentrated mainly on the families Characidae, Lebiasinidae, Gasteropelecidae, Cichlidae, Loricariidae and Potamotrygonidae.
The main representatives of these families were, respectively, the cardinal tetra Paracheirodon axelrodi, the rummy-nose tetra Hemigrammus bleheri, the butterfly Carnegiella spp., the acará-disco Symphysodon discus, the bodós (Loricariidae) and the stingrays (Potamotrygonidae), corroborating the data described by Chao and Prang (1997), Anjos et al. (2009), Sobreiro (2016) and Ferreira et al. (2017). For the capture of these species the main equipment used are the rapiché, the cacuri, the puçá, the hoist and the zagaia (Table 4), as also described by Barra et al. (2012), Sobreiro (2016) and Ferreira et al. (2017), who studied ornamental fishing in the region.

The rapiché was the equipment most frequently cited, by fishermen of both the urban (43.81%) and the rural (41.89%) areas. It is a device made by the piabeiros themselves from a flexible wood known locally as "ripeira": a nylon mesh is stitched between two sticks, giving it the appearance of a basket (Barra et al., 2012; Ferreira et al., 2017). According to the piabeiros, it is a very useful tool for catching "piabas" in streams and in areas free of vegetation or shrubs, and its use is related to its efficiency at the moment of fishing. The cacuri was also widely used, cited by 29.52% of the urban piabeiros and 35.14% of those of the rural area. Unlike the rapiché, this equipment is built from a wooden arch with nylon fabric sewn on the sides and a heavier piece of wood attached to the bottom; it is used as a temporary trap, baited to attract fish, in environments of difficult access such as flooded fields (Barra et al., 2012; Ferreira et al., 2017). The puçá, like the rapiché and the cacuri, is made by the piabeiros in an artisanal way from nylon canvas, and is used to catch ornamental fish individually or in small quantities (Barra et al., 2012; Ferreira et al., 2017).
It is much used in the handling or counting of fish such as stingrays and acará-disco (Symphysodon discus) during transport. The hoist, according to the piabeiros, is a tool used for lifting heavy loads that has been adapted for ornamental fishing, mainly for the capture of bodós, since these fish live among the submerged trunks on the banks of the lakes and at the bottom of the igarapés. The zagaia is a wooden rod approximately 2 metres long with a trident-style steel spear at the end (Barra et al., 2012), used to catch stingrays during the breeding season: after a throw at the edges of the fish's disc, the fisherman holds the animal and places it belly-up in the bottom of the canoe, inducing the female to release her pups. When asked about this practice of capture, as opposed to collection from the wild, the piabeiros explained that the procedure is occasional, but that it greatly facilitates the capture of stingrays in the natural environment, saving time and yielding even more individuals than usual, because the animals obtained in this way are within the maximum size limits demanded by the market, namely a disc width of 30 cm for the species P. motoro and P. schroederi, and 14 cm for P. cf. histrix (Brasil, 2005).
Ornamental fishing takes place in several environments according to the local seasonality. In this study, the igarapés were the environments most preferred by the interviewees, cited by 36.96% of the piabeiros of the urban area and 38.10% of those of the rural area (Figure 9). Besides the igarapés, other environments were cited: lakes, river banks, flooded fields, beaches and igapó forest. The choice of fishing sites is based on the fishermen's empirical knowledge (cognitive capital) of the behaviour of the ornamental fish species of the region; such knowledge is built up over years of practice in the activity, enabling the fisherman to choose the best equipment, technique, season and place to fish the desired species most efficiently (Witkoski et al., 2009; Silvano, 2013).

--- The ornamental fishing scenario in Barcelos

The Z33 fishing colony in the city of Barcelos is the main institution to which the fishermen are affiliated, followed by the Cooperative of Ornamental Fish Fishermen and Fisherwomen of the Middle and Upper Rio Negro (ORNAPESCA); however, some fishermen have no link with any institution of the local fishing sector (Table 5). Access to the closed-season insurance ("seguro defeso") is one of the main reasons for affiliation to the Z33 Fishing Colony among fishermen of both zones, as a fundamental aid to support the family during the period in which fishing of some commercially important species of the region is prohibited. This is also reported by Ferreira et al. (2017), in which part of the ornamental fishermen stated that they do not expect contributions to their professional lives from the fishing colony. Among the piabeiros of the rural areas, most reported not being affiliated with any institution of the sector (63.64%), with only 30.30% affiliated with the Z33 Fishing Colony, 3.03% with Ornapesca and another 3.03% with the Farmers' Union of Barcelos.
Logistical and bureaucratic issues were the main reasons why many fishermen in this region were not affiliated with the sector's institutions. According to Sobreiro (2016), from the year 2000 the ornamental fishery of the Middle Rio Negro region began to show signs of decline, leading national and regional institutions to mobilise in order to promote the activity; these actions resulted in the creation of the Cooperative of Ornamental Fish Fishermen and Fisherwomen of the Middle and Upper Rio Negro (ORNAPESCA) in 2008, through which new structures and technology were introduced to improve the sanitary quality of the region's ornamental fish. In an interview in December 2016 in the city of Barcelos, the president of Ornapesca reported that the initial idea of this development project was to improve working conditions, storage, sanitary quality and the prices of ornamental fish. However, the actions carried out did not have the expected results, owing to bureaucratic problems on the part of the government agencies, which made it impossible to provide financial aid to the sector and failed to comply with the proposals agreed with the fishermen.

Currently, the ornamental fishery shows a rough balance in the number of active fishermen in the municipality: 48.0% of the piabeiros interviewed were active, while 51.19% said that they no longer worked in the activity. Many piabeiros, however, still live from this activity, even in the face of the socioeconomic changes that ornamental fishing has undergone in the last decade, influenced by the reduction in the export volume of fish from the Rio Negro basin, by problems in the organisation of the productive chain, and by the lack of attention to the sector from the public authorities (Sobreiro, 2016).
This is more evident when one observes that in the rural area the proportion of active fishermen (63.89%) is higher than in the urban area (37.50%), where inactive fishermen predominate (62.50%). According to Ferreira et al. (2017), the fishermen interviewed in their study reported that ornamental fishing in the region is in decline, and many of them only began to feel the effects of these changes between 2007 and 2010. This period coincides with the closure of the largest buyer of ornamental fish in the region, the company Turkys Aquarium, which held 51.2% of the ornamental fish market of the State of Amazonas (Sobreiro, 2016). This event may have had a direct effect on the local economy; the main problems currently reported relate to the absence of ornamental fish buyers, the reduction in sales, and the low financial return that the activity currently provides (Table 6). Chao and Prang (1997) had already warned of the emergence of problems related to insufficient regulation of the activity, unequal international competition, and the captive breeding and marketing of target species by importing countries, with possible social, economic and environmental impacts. We note the occurrence of other problems in the production chain beyond those cited by these authors, such as delays in the payment for production by the middlemen and health issues related to the advanced average age of the piabeiros: many state that they are no longer able to work in the activity, since it is arduous work that requires a certain physical aptitude and sharp vision for locating and identifying many species.

In view of the results of the present study, it is clear that the dynamics of the ornamental fishery have changed within a short time, directly affecting the category of artisanal ornamental fishermen and causing many to adopt other economic activities, such as the commercial fishing of food fish.
This development deserves attention, because the migration of fishermen to other activities may increase the dispute over resources with other fishing modalities in the region, a fact already pointed out by other studies carried out in the area. This study also demonstrates a greater participation of older fishermen and low age renewal, indicating the absence of younger fishermen, since the current scenario of the activity is no longer attractive to them; this raises concern about the transmission of knowledge to future generations, should this scenario continue. Regarding the ecological aspects, there seems to have been no change over the years in the composition of the main groups (families) of fish traded in Barcelos, nor in the artisanal capture techniques. What have intensified are the problems of the local productive chain, which lacks the structure and technology needed to improve the quality of the animals coming from the Middle Rio Negro basin, as well as public attention to the activity, given that the ornamental fishery has long been treated as one of the most important local economic activities, contributing financially and socially to the State of Amazonas.
The Rio Negro basin is considered the largest area of extractive ornamental fishing in Brazil, and it is of fundamental importance for the populations of the Amazon. The present study aimed to describe the socioeconomic profile of the ornamental fishermen known as "piabeiros" in the Municipality of Barcelos, as well as the ornamental fishery itself: fishing areas, target species, environments, fishing techniques and equipment, and the main difficulties currently faced by the activity. The study was carried out in the municipality of Barcelos through semi-structured interviews with artisanal ornamental fishermen (N = 89). The main families of ornamental fish caught and traded were Characidae, Lebiasinidae, Gasteropelecidae, Cichlidae, Anostomidae, Loricariidae, Potamotrygonidae and Gymnotidae. The main catchment areas were igarapés, lakes, flooded fields, beaches, river banks and igapó forest. The rapiché was the equipment most used in the fishery, by fishermen of both the urban (43.81%) and the rural (41.89%) areas. Most of the fishermen are affiliated with the fishermen's colony of Barcelos (Z33). The data show that the dynamics of ornamental fishing have changed in a short time and have directly affected the fishermen; in addition, the low participation of younger fishermen means little age renewal, threatening the transmission of ecological knowledge to future generations. Problems related to the productive chain have increased, and public support for the activity is absent, even though ornamental fishing has long been treated as one of the most important economic activities for the local communities and for the State of Amazonas.
Introduction

Billions of people use online social media applications such as Facebook (FB) and Instagram (IG) as part of their daily activities. Social media applications make it possible to exchange opinions, get news and maintain social interactions through posts, comments, and likes. In particular, FB has been the most popular social media application for quite a long time, while IG has experienced a surge in popularity in the last few years. On both Facebook and Instagram, influencers (i.e., popular users, groups, newspapers, or companies) publish content (the so-called posts) in the form of photos, videos or texts. Users of these social networks can follow influencers and interact with posts by liking, reacting to, sharing, or commenting on them. Several studies on online social networks (OSNs) have analysed content popularity as a function of the total number of interactions (views, likes, etc.), measured at the time the data was crawled. Many works focus on predicting the popularity of posts, often given their intrinsic characteristics as well as the characteristics of the influencers and their followers. Few works, instead, focus on understanding the temporal dynamics of the popularity of content generated in OSNs. While it has been widely recognised that content popularity decreases over time, different models have been proposed for the decay rate of popularity, depending on the platform and the content itself. Sometimes popularity is modelled by a negative exponential function, sometimes by heavy-tailed functions, and in other cases simply as constant (see Sect. 2). However, a large-scale characterisation of the temporal evolution of the popularity of posts in OSNs is still missing.
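The three decay families mentioned above can be sketched as follows. This is purely illustrative: the parameter values (`rate0`, `tau`, `alpha`) are arbitrary and not taken from any of the cited works.

```python
import math

# Illustrative shapes of the three popularity-decay families:
# exponential, heavy-tailed (power law), and constant.
# rate(t) is the instantaneous rate of new interactions at time t.

def exponential_decay(t, rate0=100.0, tau=5.0):
    # rate(t) = rate0 * exp(-t / tau): fast fall-off, finite total.
    return rate0 * math.exp(-t / tau)

def power_law_decay(t, rate0=100.0, alpha=1.5):
    # rate(t) = rate0 * (1 + t)^(-alpha): heavy tail, slow fall-off.
    return rate0 * (1.0 + t) ** (-alpha)

def constant_popularity(t, rate0=100.0):
    # rate(t) = rate0: popularity does not decay at all.
    return rate0
```

At large t the exponential form decays much faster than the power law, which is the qualitative difference that distinguishes these model families in the literature.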
In this work, we aim at filling this gap by (i) providing an experimental analysis of the time evolution of interactions with user-generated content, both on a per-post and per-influencer basis, and (ii) developing an analytical model capturing the main aspects of user interactions on OSNs. To this end, we focus on two popular social networks, Facebook and Instagram. These applications currently have a large ecosystem of influencers that try to gain popularity in different ways, e.g., by increasing the number of posts, by posting content of large or mixed interest, or by debating and replying to others' posts (Kim et al. 2017). In this work, we analyse, model, and compare user engagement and interactions by leveraging a dataset of more than 13 billion interactions over approximately 4 million posts of 651 Italian influencers on FB and IG. The collected dataset covers a period of more than 5 years, from January 1, 2016 to June 1, 2021. We analyse such data aiming at answering the following fundamental questions. What are the main factors impacting the temporal dynamics of posts published by the influencers? How do followers interact with such posts? In particular, what is the time evolution of the reactions to these posts? Can we develop a model of these dynamics and exploit it for practical applications?
Our main findings can be summarised as follows:
• Both influencers' activity and users' activity exhibit a characteristic daily pattern, but with a different shape;
• The inter-arrival time of posts has a long-tailed distribution, reasonably fit by a log-normal;
• On average, 50% of user interactions occur within the first 4 h after content creation on FB, and within 2 h on IG; the interaction arrival rate exhibits an approximately exponential temporal decay;
• Most posts are short-lived, with a lifetime between 20 and 50 h, after which they no longer attract interactions;
• The fraction of total interactions obtained within a given time interval is affected by the number of newly published posts in the same interval;
• The distribution of the total number of interactions (likes, reactions, comments, and shares) is well fit by a log-normal distribution;
• The average number of interactions received by posts is roughly linear in the number of followers of the publishing influencer;
• The total number of interactions gathered by a post can be well predicted by measuring the interactions received within the first hour, or even within the first few minutes.
Our exploratory data analysis identifies the main features that should be incorporated into an analytical model trying to capture the temporal evolution of interactions received by a post. We first attempt to develop such a model, fitting a small set of parameters to the specificity of posts published by a given influencer. Interestingly, we discover that many of these parameters do not vary significantly from influencer to influencer; moreover, they only weakly depend on the considered social network (IG or FB). Our model can provide an accurate prediction of the total number of interactions gathered by a post (and an estimate of the prediction error) by observing only the initial phase of its lifetime. We believe this ability of our model can have interesting applications.
Finally, we mention that a preliminary version of our work, presenting a subset of the results obtained from our dataset, has appeared in Vassio et al. (2021). This paper extends the data analysis and introduces a novel model for the temporal evolution of interactions with posts, which is then validated and applied to early prediction of post popularity. The remainder of the paper is organised as follows. Section 2 summarises some relevant related work. Section 3 describes the methodology we used to extract and process the data, while Sect. 4 presents the results of our data analysis. Section 5 describes the complete analytical model that we have developed, which is then evaluated and compared to a baseline. Finally, Sect. 6 concludes the paper. --- Related work Social media provide a powerful and effective platform for the exchange of ideas and rapid propagation of information (Al-Garadi et al. 2018). Hence, their study is of paramount importance to understanding the opinion trends in our society and the main actors, i.e., the influencers (Conover et al. 2012;Gorkovenko and Taylor 2017;Pierri et al. 2020). Although a large body of literature has analysed OSNs, the temporal dynamics of posts and interactions are still not well understood. Indeed, the majority of existing studies ignore such temporal dynamics, focusing on the "spatial" analysis of a single, large snapshot. A few works have focused on predicting content popularity, considering content intrinsic characteristics and social interactions features (Li et al. 2013;Rizos et al. 2016). The main factors that impact the popularity of posts on FB are identified in Sabate et al. (2014), using an empirical analysis involving multiple linear regressions. Similarly, (Ferrara et al. 2014) highlights the characteristics related to the dynamics of content production and consumption in IG, while (Gayberi and Oguducu 2019) and (Carta et al. 
2020) predict the popularity of a future post on IG by combining user and post features. Instead, few studies have analysed the time dynamics of content generated in OSNs. The decay in popularity over time, i.e., in the rate of new interactions, of Internet memes (Leskovec et al. 2009) is shown to be well modelled by a negative exponential function. The work in van Zwol (2007) measures the time evolution of the popularity of images in Flickr, finding that heavy-tailed distributions can represent the decay in the rate of new interactions over time. Instead, (Cha et al. 2009b) observe that the most popular Flickr pictures exhibit a close-to-constant interaction rate. The study in Hassan Zadeh and Sharda (2014) models the popularity evolution of posts by Hawkes point processes, using Twitter data to fit the required parameters. Gabielkov et al. (2016) analyse and predict clicks on Twitter posts. They find that while posts appear as bursts in a short time frame, clicks appear and decay at larger time scales, with a long tail. The authors leverage early interactions to predict future clicks, as in our work. They show that a simple linear regression based on the number of clicks received by a tweet during its first hour correctly predicts its clicks at the end of the day, with a Pearson correlation of 0.83. Ramachandran et al. (2018) propose a model that reproduces the clicks created by social media. In particular, the authors consider news posted on Twitter, and observe that hourly impressions decrease geometrically with time. They model information diffusion to determine current and future clicks, using a memoryless generative model with a few time-invariant parameters. Finally, Ferrara et al. (2014) show that the distribution of likes to posts on IG is best fit by a power law, suggesting that the popularity of media, as measured by the number of likes, might grow by a preferential attachment mechanism. However, Ferrara et al. (2014) provide no evidence of this kind of evolution.
Other works analyse the temporal dynamics of particular user-generated content outside of OSNs. For example, videos on YouTube exhibit various popularity decay patterns over time (Cha et al. 2009). For some videos, the decay can be modelled with heavy-tailed distributions, while for others with an exponential distribution. Similarly, Ahmed et al. (2013) show that user-generated videos have distinct patterns of popularity growth (in terms of views) over time. Our previous studies (Ferreira et al. 2020; Ferreira et al. 2021) focus on the peculiarity of user interactions with political profiles on IG during the 2018 European and Brazilian elections, with the goal of identifying the structure emerging from the co-interactions. We studied the appearance and evolution of communities of users, obtained through a probabilistic model that extracts the backbone of the interaction networks. Interestingly, politicians are able to attract more persistent communities over time than non-politicians. Related to the topic of popularity prediction, we proposed (Bertone et al. 2021) a parallel between the OSN world and the stock market: influencers can be viewed as stocks, while users are investors. The study shows how this market-like approach successfully estimates short-term trends in influencers' followers from external variables, such as Google Trends. Finally, our previous study (Trevisan et al. 2021) investigates the changes in OSN habits during the COVID-19 outbreak. It shows how people changed their interaction patterns during the lockdown, shifting more towards the night, due to the restrictions enforced on in-person social activities. We emphasise that a large-scale characterisation of the temporal evolution of post popularity in OSNs is still missing.
In this work, we aim to fill this gap by providing (i) an experimental analysis of the time evolution of interactions with user-generated content, both on a per-post and per-influencer basis, and (ii) an analytical model that can accurately represent user interactions on OSNs. --- Data collection In FB and IG, a profile can be followed by other profiles, i.e., its followers. A profile with a large number of followers is also called an influencer. Influencers post content (i.e., posts), consisting of either a photo, a video, or plain text. The profile's followers, and anyone registered on the platform in the case of public profiles, can view the influencer's posts, like/react to them, comment on them, and share them with their contacts. Notice that, by the term influencer, we refer not only to individuals, but also to groups, football teams, newspapers, and companies. We monitored the activities triggered by top Italian influencers on the two aforementioned social networks. To this end, we built lists of the most popular Italian influencers, including different categories, like politicians, musicians, and athletes. Those marked as Italian are the ones that communicate on the online social platform mainly using the Italian language. To get popular profiles, we exploited the online analytics platform www.hypeauditor.com for IG, and www.socialbakers.com and www.pubblicodelirio.it for FB. The analysis has been restricted to the influencers with at least 10,000 followers on June 1, 2021. The lists of influencers we used are publicly available.1 For each monitored profile, we downloaded the corresponding metadata, i.e., the profile information, and all the generated posts, using the CrowdTangle tool and its API.2 CrowdTangle is a content discovery and social analytics tool owned by Meta,3 which is open to researchers and analysts worldwide to support research, upon subscription of a partnership agreement.
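As a rough sketch of how such a crawl might be driven, the snippet below builds a request URL for the CrowdTangle /posts endpoint. The parameter names (token, listIds, startDate, endDate, count) are assumptions based on our recollection of the public CrowdTangle API documentation, and the token and list id are placeholders; this is illustrative only, not the authors' actual pipeline.

```python
from urllib.parse import urlencode

def crowdtangle_posts_url(token, list_id, start, end, count=100):
    # Build a CrowdTangle /posts request URL. All parameter names are
    # assumptions from the public API docs; the token is a placeholder.
    params = {
        "token": token,      # API token tied to the partnership agreement
        "listIds": list_id,  # id of a saved list of monitored profiles
        "startDate": start,  # e.g. "2016-01-01"
        "endDate": end,      # e.g. "2021-06-01"
        "count": count,      # posts per page (pagination not shown)
    }
    return "https://api.crowdtangle.com/posts?" + urlencode(params)

url = crowdtangle_posts_url("API_TOKEN", 12345, "2016-01-01", "2021-06-01")
print(url)
```

In practice one would page through the responses and persist the per-post interaction samples for later processing.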
Furthermore, for each post, we downloaded the number of associated interactions, along with their timestamps. Monitored posts are sampled by CrowdTangle within the first 20 days (480 h), with a higher sampling rate (down to a few minutes) closer to the publication time of the post. Notice that, on IG, users can like posts, whereas on FB, they can react to posts with a thumbs up or five other predefined emojis. Thus, for each post, we collected the number of likes/reactions the post received, hereinafter referred to as interactions, which CrowdTangle provides in an anonymised manner. Moreover, we also collected statistics on the number of comments per post for FB and IG, and on the number of times posts are shared for FB. Finally, we stored the data, which takes around 110 GB of disk space, on a Hadoop-based cluster, and we used PySpark for scalable processing. For each influencer, we downloaded all the data related to the posts published between January 1, 2016 and June 1, 2021. Table 1 reports the main features of our dataset, separately for each OSN. In total, we monitored 651 public profiles, which published approximately 4 million posts, accounting for more than 13 billion interactions. The numbers of comments and shares of the posts are also reported. Notice that while the influencers' posts are widely shared by their followers (around 1.3 billion times, hence on average 370 times per post), the analysed influencers rather rarely repost other influencers' posts. Indeed, we observed only around 24 thousand posts shared by the influencers on FB, accounting for only about 0.7% of all the posts. Figure 1 depicts the empirical Cumulative Distribution Function (CDF) of the number of posts per influencer. The 651 influencers show a large variability in the number of posts: some influencers published a few tens of posts, while others, such as newspaper pages, up to 10^5 posts. Also, in the period under study, influencers on FB published more than those on IG.
The main reasons are twofold: (i) on FB more influencers are actually pages or organisations, rather than single individuals, and (ii) many popular IG influencers did not exist at the beginning of the considered time period (i.e., in 2016), or became active much later. Figure 2 depicts the CDF of the number of followers per influencer, as recorded on June 1, 2021. The number of followers per influencer varies between 10k and tens of millions. Also, the profiles in the set chosen for IG are usually more popular than those selected for FB. --- Temporal user engagement with posts In this section, we first characterise the patterns of the influencers' and followers' activity (Sect. 4.1), then we study the time evolution of interactions (Sect. 4.2) and their relation with the number of followers (Sect. 4.3). Finally, we investigate the correlation between the interactions a post attracts and the number of newly published posts (Sect. 4.4). --- Activity of influencers and followers We first characterise the daily patterns of influencers' and followers' activity. Figure 3 presents the influencers' hourly activity, obtained considering the time instants at which posts were published. The activity is normalised by its maximum in both social networks to obtain comparable results. The plot accounts for all the analysed 4 million posts, and it is reported using a 24-hour local-time clock (in the Italian time zone), according to the ISO 8601 standard. Similarly, Fig. 4 shows the daily activity distribution of the followers, considering the timestamps of the followers' interactions (all 13 billion likes/reactions). Note that, due to our particular selection of influencers, we can reasonably expect that the vast majority of posts and interactions originate from the same (Italian) time zone. This is also supported by the results in Benevenuto et al.
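The normalised hourly profiles behind Figs. 3 and 4 amount to a histogram of event hours divided by its peak. A minimal sketch on synthetic timestamps (the real input would be post or interaction times in Italian local time):

```python
import numpy as np

def hourly_activity_profile(timestamps_h):
    """Normalised daily activity: histogram of event hours, divided by its peak.

    timestamps_h: event times in hours since an (arbitrary) midnight.
    Returns a length-24 array with values in [0, 1].
    """
    hours = (np.asarray(timestamps_h) % 24).astype(int)
    counts = np.bincount(hours, minlength=24).astype(float)
    return counts / counts.max()

# Synthetic example: activity concentrated around midday and in the evening,
# mimicking the two daily peaks reported in the text.
rng = np.random.default_rng(0)
ts = np.concatenate([rng.normal(13, 2, 5000), rng.normal(21, 1.5, 3000)]) % 24
profile = hourly_activity_profile(ts)
print(profile.argmax())  # hour of peak activity
```

The same function applied to publication timestamps gives the influencers' profile, and applied to interaction timestamps gives the followers' profile.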
(2012), where the authors show that followers/friends interacting in social networks are usually within close geographical proximity of the influencer. We observe that influencers' and followers' activities exhibit similar patterns on FB and IG: they significantly decrease during the night, and exhibit two peaks during the day. However, it is interesting to notice that followers tend to be active later in the evening with respect to influencers. Moreover, looking at the behaviour of specific influencers on FB and IG (the results are omitted for brevity), we observed that their followers' activity over time tends to be similar to that in Fig. 4, although a single influencer's daily activity might deviate significantly from the one shown in Fig. 3. This is confirmed by the results in Fig. 5, showing that the average followers' activity per new post maintains a shape similar to the ones in Fig. 4. Although influencers generate very few posts late at night, such posts are typically fresher and encounter less 'competition'. Nonetheless, they still collect very few interactions during the night. We now investigate the distribution of the inter-arrival time between different posts. In particular, we focus on the tail of the distribution, considering time scales of several tens of hours, i.e., time scales at which the impact of the day-night activity pattern is negligible. Figure 6 depicts the tail of the inter-arrival time of all posts generated by the influencers, including the best-fitting log-normal distribution. The log scales in the plot suggest that the log-normal distribution provides a substantially better fit than what would be obtained by an exponential distribution (i.e., under a Poisson process assumption). This is due to the fact that influencers sometimes remain silent for (quite) long periods. We also analysed single influencers and found that, for the median influencer, the average post inter-arrival time is equal to 19 hours on FB, and 57 hours on IG.
Then, fitting each influencer separately with a log-normal distribution, we obtained, on average, the log-normal parameters μ = 2.0, σ = 1.4 on FB, and μ = 3.1, σ = 1.3 on IG. --- Temporal dynamics of interactions We now analyse the temporal evolution of the interactions with a post, considering up to 20 days (480 h) after the creation of the post itself. We compute, for all the 4 million posts and for every sample time (rounded to the closest integer hour), the fraction of received interactions with respect to the total number of interactions obtained by the post after 20 days. We consider fractions in order to compare different posts and different influencers. Finally, we compute the average over all posts. The results, representing the dynamics of the average fraction of interactions over the first 3 days since the creation of the post, are shown in Fig. 7. One can notice that the majority of the interactions occur within the first few hours. On average, the first hour accounts for 31% of all of the interactions on FB (40% on IG), reaching over 80% after 1 day. Moreover, on average, 50% of user interactions occur within the first 4 h since content creation on FB, and within 2 h on IG. It is thus clear that the freshness of a post has a significant impact on its level of attractiveness. Interestingly, the growth of the number of user interactions is faster on IG than on FB, although both curves converge after around 30 h. Studying the evolution of the rate of new interactions, we found that, at least in the first 24 h after the post creation, this rate is well approximated by a negative exponential decay function (with mean equal to 5.4 for FB, and 8.7 for IG). As expected, individual posts can have widely different patterns in terms of their accumulation of interactions over time. As an example, we show the results related to two specific posts on FB published by a well-known Italian influencer (namely, Giuseppe Conte, former Italian Prime Minister).
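The per-influencer log-normal fits can be reproduced with a standard maximum-likelihood fit. A small sketch on synthetic inter-arrival times drawn with the FB parameters reported above; note that scipy parameterises the log-normal with shape s = σ and scale = e^μ:

```python
import numpy as np
from scipy import stats

# Synthetic inter-arrival times (hours) drawn from the kind of log-normal the
# text reports for FB (mu = 2.0, sigma = 1.4); real data would come from the
# differences between consecutive post timestamps of one influencer.
rng = np.random.default_rng(42)
inter_times = rng.lognormal(mean=2.0, sigma=1.4, size=20000)

# Maximum-likelihood fit with the location fixed at 0.
sigma_hat, loc, scale = stats.lognorm.fit(inter_times, floc=0)
mu_hat = np.log(scale)
print(mu_hat, sigma_hat)
```

With enough samples, the recovered (μ, σ) are close to the generating values, which is the sanity check one would run before fitting real traces.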
The temporal dynamics of interactions over the first 3 days since the post creation are represented by red marks in Fig. 8a and 8b. We notice the presence of periods in which the number of interactions is almost constant, after which it increases again. We verified that this behaviour is essentially due to the non-stationary behaviour of users' activity during the day (see Figs. 4 and 5), i.e., quasi-flat portions of the curves correspond to night hours. Green vertical lines highlight newly published posts (see Sect. 4.4). For the first example trace (left plot), many posts are published within the first three days; for the second trace (right plot), no new post is published before the first 62 h. We now turn to the interesting question of whether the total number of interactions collected by a post can be forecast by observing just the interactions received during an initial interval after publication. A first, strong indication that such a prediction is indeed feasible is illustrated in Fig. 9, showing a scatterplot of roughly 3,000 points, each corresponding to a post published on IG by a given influencer (in this case, the Italian politician Matteo Salvini): the y-axis provides the total number of interactions, while the x-axis corresponds to the number of interactions received after half an hour.4 The left plot corresponds to measurements n(t) collected in physical time, while the right plot corresponds to measurements n(t′) transformed into virtual time to remove daily effects (see Sect. 5.1). Despite the large variability in the number of interactions (notice the log scale), we observe a strong correlation, resulting in a Pearson correlation coefficient of about 0.90 (in physical time) and 0.92 (in virtual time). Similarly strong correlations were observed for other influencers, on both IG and FB, and considering different measurement times (e.g., even just a few minutes after post creation).
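The early-interactions correlation can be checked with a few lines. Here the post traces are synthetic (log-normal totals, with roughly a third of the interactions arriving in the first half hour), standing in for the per-influencer data behind Fig. 9:

```python
import numpy as np

# Synthetic posts: totals are log-normal; the share collected in the first
# half hour fluctuates around ~35% (a stand-in for the measured early decay).
rng = np.random.default_rng(1)
total = rng.lognormal(mean=9.0, sigma=0.8, size=3000)
early_frac = np.clip(rng.normal(0.35, 0.05, size=3000), 0.05, 0.9)
early = total * early_frac

# Pearson correlation on log counts, consistent with the log-scale scatterplot.
r = np.corrcoef(np.log(early), np.log(total))[0, 1]
print(round(r, 2))
```

Because the early count is (noisy) proportional to the total, the correlation on log counts is high, which is exactly the property the prediction model of Sect. 5 exploits.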
This result motivated us to develop the model that will be presented later in Sect. 5. In addition, we computed the mean arrival time, defined as the average time after post creation at which an interaction occurs. The average is computed over 480 h, for a given post of a given influencer, using the empirical distribution of all interaction arrival times. Figure 10 depicts (in log x-scale) the CDF (among different posts of the same influencer) of the mean arrival time of interactions. We consider posts with at least 1,000 interactions and focus on the first 480 h. We can observe that posts on FB are characterised by a higher mean arrival time with respect to IG: 15 h for FB, and 11 h for IG. The faster dynamics on IG confirm what Fig. 7 already suggested. Finally, we investigate the lifetime of posts. To this end, we consider that a post essentially no longer attracts interactions after 20 days, and thus define the total number of interactions received by a post as the number of interactions received within 20 days. Then, for a given post, we compute its lifetime as the time at which the post has received 95% of its total interactions (as defined above). We consider only posts that collect at least 1,000 interactions, to get statistically meaningful results. Figure 11 depicts the distribution of the post lifetime in hours, using a log scale on the x-axis; by construction, the maximum lifetime is 480 h, i.e., 20 days. Interestingly, the difference between the two OSNs is small, even though on average FB attracted a smaller fraction of interactions than IG within the first hours (see Fig. 7). The median lifetime is 33 h for both FB and IG, while the mean lifetime is 50 h for FB and 55 h for IG. --- Followers' dynamics Influencers do not have a constant number of followers over time.
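The lifetime definition above (time at which a post reaches 95% of its 20-day total) is straightforward to compute from a sorted interaction trace; a sketch with a synthetic, exponentially decaying arrival trace:

```python
import numpy as np

def post_lifetime(ts_sorted, quantile=0.95):
    """Time at which a post has received `quantile` of its total interactions.

    ts_sorted: sorted interaction times (hours since post creation),
    truncated at 480 h, so 'total' means the 20-day count.
    """
    ts = np.asarray(ts_sorted)
    k = int(np.ceil(quantile * len(ts)))  # index of the quantile-th arrival
    return ts[k - 1]

# Toy trace: exponentially decaying arrival rate, most arrivals early.
rng = np.random.default_rng(7)
arrivals = np.sort(rng.exponential(scale=8.0, size=2000))
arrivals = arrivals[arrivals <= 480.0]
lifetime = post_lifetime(arrivals)
print(lifetime)
```

For this toy trace the 95% point lands at roughly 8·ln(20) ≈ 24 h, i.e., in the same short-lived range as the measured posts.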
Rather, such a number typically increases monotonically over time, with IG exhibiting a more significant increase over the analysed time period (2016-2021) than FB. This is likely because FB is an older OSN, already largely widespread in 2016 (i.e., the first year we monitored). Figure 12 shows the temporal dynamics of the number of followers for two sample influencers on IG.5 Influencer 1 is Matteo Salvini (an Italian politician), while Influencer 2 is Martina Colombari (an Italian actress). Figure 12 suggests that the change in the number of followers can be very different across influencers. Influencer 1 started using the social network much later (late 2017), and his growth rate varies wildly over time, likely due to reasons unrelated to the operation of the OSN (elections, new laws, etc.). The increase in the number of followers of Influencer 2 is instead smoother over the considered time span. Figure 13 shows the distributions of the total number of interactions per post (represented by vertical boxplots), considering all posts published when the number of followers falls within the bins specified along the x-axis (notice that the extremes of the considered bins increase geometrically). All posts published on IG available in our dataset are considered here. We notice a strong correlation, suggesting a linear dependence of the mean total number of interactions on the number of followers (we will exploit this dependency in our model in Sect. 5). We found that the distribution of the total number of interactions per post is well fit by a log-normal distribution; see the CDFs in Fig. 14. Again, in the figure, we considered the influencers Salvini (Influencer 1) and Colombari (Influencer 2) on IG.
Comparing the empirical distribution with the fit, we obtain a Kolmogorov distance of 0.10 and 0.03 for the log-normal, respectively for Influencer 1 and 2 (with log-normal parameters μ = 10.2, σ = 0.7 for Influencer 1 and μ = 8.7, σ = 0.9 for Influencer 2). Considering the (almost linear) dependency on the number of followers suggested by the results in Fig. 13, we also computed a normalised total number of interactions, dividing it by the number of followers at each post creation timestamp. We call this number interactions per follower. As expected, the log-normal fit is even better when we consider this normalised number, see the CDFs in Fig. 15, especially for influencers whose number of followers varies significantly over the considered period (e.g., Influencer 1). Indeed, the Kolmogorov distance decreases to 0.05 for Influencer 1, with log-normal parameters μ = -3.9, σ = 0.8 (Kolmogorov distance 0.03 for Influencer 2, with parameters μ = -4.6, σ = 0.8). In Appendix 1 we report analogous results for other kinds of interactions, i.e., shares and comments on FB. All in all, considering the followers' dynamics helps to disentangle the impact of the users that can potentially interact with the post (i.e., the followers) from the variability of the post's intrinsic attractiveness. --- Impact of newly published content As observed in Sect. 4.2, the arrival rate of new interactions decays roughly exponentially with time. To better understand the nature of the arrival process of interactions generated by a specific post, we asked ourselves whether it is affected by the fact that, meanwhile, new posts are published by the same influencer, thus reducing the 'novelty' of the post. For example, Fig. 8a shows a case in which many new posts are published within the first three days after post creation, while in the case of Fig. 8b no new post is published within the first 62 h.
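The per-follower normalisation and the quality of the log-normal fit (measured by the Kolmogorov distance) can be sketched as follows; the synthetic per-follower interactions use the μ = -3.9, σ = 0.8 values reported for Influencer 1:

```python
import numpy as np
from scipy import stats

# Per-post interactions normalised by the follower count at publication time;
# synthetic values mimic the reported Influencer 1 fit (mu=-3.9, sigma=0.8).
rng = np.random.default_rng(3)
followers = rng.integers(1_000_000, 5_000_000, size=5000)
per_follower = rng.lognormal(mean=-3.9, sigma=0.8, size=5000)
interactions = followers * per_follower

# Normalise, fit a log-normal, and measure the Kolmogorov distance of the fit.
x = interactions / followers
s, loc, scale = stats.lognorm.fit(x, floc=0)
ks = stats.kstest(x, 'lognorm', args=(s, loc, scale)).statistic
print(np.log(scale), s, ks)
```

A small Kolmogorov distance here plays the role of the 0.03-0.05 values quoted in the text: the closer to zero, the better the log-normal describes the normalised interactions.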
On the other hand, after 12 h the first example post has already collected 91% of its total interactions, while the second post, after the same amount of time, has collected just 62% of its total interactions, as its interaction rate decays more slowly. This suggests that the amount of newly published content might affect the growth rate of the number of interactions received by a post. To verify this, we consider a fixed period of 12 h since the post creation, and compute the number of new posts published within this period. Figure 16 shows the average fraction of interactions collected by a post after 12 h, as a function of the number of newly published posts in the same period, for all posts published by the previously considered influencer Giuseppe Conte. We observe a clear correlation between the two quantities: the higher the number of new posts published within the first 12 h, the faster the post approaches the end of its lifetime. Indeed, in the absence of newly generated posts, a post on average collects 72% of its total interactions within the first 12 h. When 7 newer posts are generated in the same period, the average fraction of collected interactions increases to 82%. This shows that the arrival rate of interactions also depends on how many new posts have been published since the post creation, as newly published content can slow down the interaction arrival rate (this can be attributed to the limited attention budget of users).
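The grouping behind Fig. 16 amounts to a conditional mean; a sketch on toy data that follows the reported trend (72% of interactions within 12 h when no new posts appear, rising with the number of fresh posts):

```python
import numpy as np

def fraction_by_new_posts(frac_12h, n_new_posts):
    """Average fraction of total interactions collected within 12 h,
    grouped by the number of posts published in that same window."""
    frac_12h = np.asarray(frac_12h)
    n_new_posts = np.asarray(n_new_posts)
    return {k: frac_12h[n_new_posts == k].mean()
            for k in np.unique(n_new_posts)}

# Toy data following the reported trend: more fresh posts -> faster saturation.
rng = np.random.default_rng(5)
n_new = rng.integers(0, 8, size=4000)          # 0..7 newer posts in 12 h
frac = np.clip(0.72 + 0.015 * n_new + rng.normal(0, 0.05, 4000), 0, 1)
means = fraction_by_new_posts(frac, n_new)
print(means[0], means[7])
```

On real traces the same conditional mean, plotted against the number of newer posts, reproduces the upward trend of Fig. 16.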
--- Modelling user interactions From our measurements and analysis, we learnt several important lessons that can help us model the temporal evolution of the number of interactions collected by a post:
(i) Posts are characterised by an intrinsic initial 'attractiveness', which varies significantly even across the posts published by the same influencer;
(ii) The growth rate of interactions naturally decays over time, but the decay rate is itself highly diverse from post to post, besides depending on the considered OSN;
(iii) The interaction rate should be modulated by the daily pattern of user activity, which appears to be independent of the particular online platform;
(iv) On average, there is a linear dependency between the total number of interactions received by a post and the current number of followers (which can be considered constant during the short post lifetime);
(v) The distribution of the total number of interactions, normalised by the number of followers, is well fit by a log-normal distribution, whose parameters depend on the specific influencer and OSN;
(vi) The generation of new posts by the same influencer progressively reduces a post's attractiveness level. This can be explained by the fact that users focus their attention on the posts at the top of the timeline.
Despite the intrinsic difficulties in incorporating all of the above features into a simple and tractable model, our preliminary investigation (see Fig. 9) suggests that it is feasible to accurately predict the total number of interactions after observing only the very initial phase of the post lifetime. With this objective in mind, we propose the analytical methodology described in the following sections. --- Removal of daily activity effects Given a trace (i.e., time-evolution data) of user interactions {t_i}, t_i > 0, with a given post published at time 0, we can easily derive a modified trace {t'_i} in which the impact of variable daily activity is removed.
Let Φ(t), t ∈ [0, 24], Φ(t) ∈ R+, be the daily followers' activity averaged across all posts of a given influencer (here t is in hours), similar to what is shown for all influencers in Fig. 4. Let Φ̄ = (1/24) ∫_0^24 Φ(t) dt be the average user activity across the day. Assuming that the post was published at hour T_0 ∈ [0, 24], we define the modulating function g(t), t ≥ 0, as

g(t) = Φ((t + T_0) mod 24),   (1)

which is simply a shifted and replicated version of Φ(t), providing the expected activity of users at an arbitrary time t after the post publication. Then, an interaction which occurred at real time t_i is shifted to virtual time

t'_i = (1/Φ̄) ∫_0^{t_i} g(t) dt.

Note that the above transformation preserves the ordering of interactions, i.e., if t_i > t_j then t'_i > t'_j, while removing the impact of variable daily activity by diluting (densifying) interactions occurring in periods of high (low) activity. We expect the virtual trace {t'_i} to be more regular than the real trace {t_i}, and thus easier to model and predict. Lastly, observe that if g(·) is assumed to be continuous, the previous equation can be rewritten as

t'_i = t'_{i-1} + (1/Φ̄) ∫_{t_{i-1}}^{t_i} g(t) dt = t'_{i-1} + (g(ξ)/Φ̄)(t_i - t_{i-1})

for some ξ ∈ [t_{i-1}, t_i]. In particular, if g(·) is sufficiently slowly varying, we can approximately write

t'_i ≈ t'_{i-1} + (g(t_{i-1})/Φ̄)(t_i - t_{i-1}).

Fig. 16 (caption): Average fraction of interactions vs. no. of published posts, after 12 h since their creation.

Figure 17 shows some examples of traces of the number of interactions accumulated over time by four posts, published roughly at 1am (purple), 8am (black), 4pm (blue), and 12pm (green). Thick curves refer to physical time t, while thin curves refer to virtual time t', and were obtained by applying transformation (1). We observe that the virtual time transformation removes the 'plateau' due to low user activity late at night (purple and green curves).
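The virtual-time transformation of eq. (1) can be implemented with a simple Riemann sum over the 24 h-periodic activity profile; a sketch, where the hourly profiles are synthetic stand-ins for a real influencer's measured activity:

```python
import numpy as np

def to_virtual_time(t_phys, T0, phi, dt=0.01):
    """Map physical times (hours after a post published at daily hour T0) to
    virtual times: t' = (1/phi_bar) * integral_0^t phi((u + T0) mod 24) du."""
    phi = np.asarray(phi, dtype=float)   # length-24 hourly activity profile
    phi_bar = phi.mean()                 # average activity across the day
    out = []
    for t in t_phys:
        grid = np.arange(0.0, t, dt)     # left-endpoint Riemann sum
        g = phi[((grid + T0) % 24).astype(int)]
        out.append(g.sum() * dt / phi_bar)
    return np.array(out)

# Sanity check: a flat profile leaves times (essentially) unchanged.
flat = np.ones(24)
vt_flat = to_virtual_time([1.0, 5.0, 12.0], T0=8, phi=flat)

# A night-quiet profile compresses hours that fall in the night.
night_quiet = np.where(np.arange(24) < 7, 0.2, 1.2)
vt_night = to_virtual_time([4.0], T0=22, phi=night_quiet)
print(vt_flat, vt_night)
```

Because the integrand is non-negative, the mapping is monotone and thus preserves the ordering of interactions, as required by the derivation above.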
Similarly, it allows us to distribute more smoothly over time interactions accumulated over periods of high user activity, like at midday (black curve), or early at night (blue curve). From now on, we will only reason in terms of virtual time, assuming that any measurement N(T 0, t) of the num- ber of interactions collected by a post published at time T 0, within time t, has passed through transformation (1), producing an equal number N(T 0, t ), shifted at virtual time t ′. --- Modelling the generation of interactions Let us assume that each post is characterised by an intrinsic level of attractiveness described by a positive realvalued mark X <unk> R +. Marks associated with posts of a given influencer are assumed to be i.i.d. with PDF f X (). t i = t i-1 + <unk> t i t i-1 g(t)dt <unk> = t i-1 + g(<unk>) <unk> (t i -t i-1 ) t i <unk> t i-1 + g(t i-1 ) <unk> (t i -t i-1 )1,. We can consider some simple law for f X (), incorporating long-tail behaviour, e.g., a log-normal distribution with parameters X, X. We assume that the final number N <unk> of interac- tions received by a post is equal to F(0)X, where F( 0) is the number of followers at the time of the post creation. Note that, if X <unk> Lognormal( X, 2 X ), F(0)X <unk> Lognormal( X + log(F(0)), 2 X ). Let N(t ) be the number of interactions received within (virtual) time t ′ after the post creation. First, we condition on N <unk> = n <unk> : where 1 is the indicator function. We assume that follow- ers access the platform (independently from each other) according to a Poisson process of rate <unk>, which is itself a random variable with probability density function f <unk> ( ). Let F <unk> (s) = [e -s<unk> ] be the Laplace transform of f <unk> (
We can consider some simple law for $f_X(\cdot)$ incorporating long-tail behaviour, e.g., a log-normal distribution with parameters $\mu_X, \sigma_X$. We assume that the final number $N_\infty$ of interactions received by a post is equal to $F(0)X$, where $F(0)$ is the number of followers at the time of the post creation. Note that, if $X \sim \mathrm{Lognormal}(\mu_X, \sigma_X^2)$, then $F(0)X \sim \mathrm{Lognormal}(\mu_X + \log F(0),\, \sigma_X^2)$.

Let $N(t')$ be the number of interactions received within (virtual) time $t'$ after the post creation. First, we condition on $N_\infty = n_\infty$:

$$N(t') \mid (N_\infty = n_\infty) = \sum_{i=1}^{n_\infty} \mathbb{1}\{\text{user } i \text{ interacts before } t'\} \qquad (2)$$

where $\mathbb{1}$ is the indicator function. We assume that followers access the platform (independently from each other) according to a Poisson process of rate $\lambda$, which is itself a random variable with probability density function $f_\Lambda(\lambda)$. Let $F_\Lambda(s) = \mathbb{E}[e^{-s\lambda}]$ be the Laplace transform of $f_\Lambda(\lambda)$. Then $\mathbb{1}\{\text{user } i \text{ interacts before } t'\} \mid \lambda_i = \lambda$ is a Bernoulli random variable with mean $1 - e^{-\lambda t'}$. It follows that

$$\mathbb{E}[N(t') \mid (N_\infty = n_\infty)] = n_\infty\,\mathbb{E}[1 - e^{-\Lambda t'}] = n_\infty\,(1 - F_\Lambda(t')).$$

Moreover, we note that

$$\mathbb{E}\!\left[\frac{N_\infty - N(t')}{N_\infty}\,\Big|\,(N_\infty = n_\infty)\right] = F_\Lambda(t')$$

does not depend on $N_\infty$; hence, we can obtain the Laplace transform of $f_\Lambda(\lambda)$ by averaging out $N_\infty$:

$$F_\Lambda(t') = \mathbb{E}\!\left[\frac{N_\infty - N(t')}{N_\infty}\right]. \qquad (3)$$

We found empirically that a surprisingly accurate model for $f_\Lambda(\lambda)$ is a mixture of a uniform distribution in $[0, a]$ and an exponential distribution of parameter $\theta$:

$$f_\Lambda(\lambda) = m\,\frac{1}{a}\,\mathbb{1}\{\lambda \leq a\} + (1 - m)\,\theta e^{-\theta\lambda}$$

from which

$$F_\Lambda(s) = m\,\frac{1 - e^{-as}}{as} + (1 - m)\,\frac{\theta}{s + \theta}.$$

Parameters $m$, $a$, $\theta$ have to be fitted for each specific influencer, though they do not vary significantly from influencer to influencer, as shown in the following. Figure 18 presents the fitted Laplace transform $F_\Lambda(s)$, through parameters $m$, $a$, $\theta$, using the traces of 9,204 posts published by Italian politician Matteo Salvini on IG. Fitted values are: $m = 0.83$, $a = 0.41$, $\theta = 0.7$. Figure 18 requires a careful explanation. First of all, notice the log x axis, spanning from 0.01 hour (36 s) to 24 hours. Since in the following we will be especially interested in the early stages of post lifetime, this will be the time scale used in all plots hereinafter.
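Both the mixture density and its Laplace transform are easy to evaluate in closed form. The sketch below uses the fitted values $m = 0.83$, $a = 0.41$, $\theta = 0.7$ quoted in the text for one influencer; everything else is illustrative, and the closed-form transform is cross-checked against a direct numerical integration.

```python
import math

# Mixture model for the access-rate density f_Lambda:
# uniform on [0, a] with weight m, exponential(theta) with weight 1 - m.
m, a, theta = 0.83, 0.41, 0.7  # values fitted in the text for one influencer

def f_lambda(lam):
    uniform_part = (1.0 / a) if 0.0 <= lam <= a else 0.0
    exp_part = theta * math.exp(-theta * lam)
    return m * uniform_part + (1.0 - m) * exp_part

def laplace_F(s):
    """Closed-form Laplace transform F_Lambda(s) of the mixture."""
    if s == 0.0:
        return 1.0
    return m * (1.0 - math.exp(-a * s)) / (a * s) + (1.0 - m) * theta / (s + theta)

def mu(t_virt):
    """Mean fraction of interactions already collected by virtual time t'."""
    return 1.0 - laplace_F(t_virt)

# Sanity checks: F(0) = 1 (no interactions collected yet) and mu grows with time.
assert abs(laplace_F(0.0) - 1.0) < 1e-12
assert mu(0.1) < mu(1.0) < mu(24.0) < 1.0
```

Evaluating the transform at the virtual age of a post directly yields the model's mean residual fraction of interactions.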
The vertical axis reports the fraction of residual interactions, $\frac{N_\infty - N(t')}{N_\infty}$. By (3), the mean across all traces of this fraction provides the sought Laplace transform $F_\Lambda(s)$ for influencer Salvini. Small circles show such averaged fraction at various points in time, while the black solid curve is the fitted model, which turns out to be very accurate. The figure also shows the ensemble of 1,000 actual traces (in yellow), which produces a large band around the mean. Lastly, green curves above and below the mean are plotted at a distance equal to the measured standard deviation. Figure 18 reveals that there is a significant variability of traces around the mean, which is, unfortunately, not captured by the model introduced so far. However, we found that the distribution of the fraction of residual interactions (similarly, the distribution of the fraction of already collected interactions) is approximately normal. This fact is shown in Fig. 19, which depicts the empirical distributions of the fraction of received interactions, measured at the times at which the mean fraction of received interactions is equal to 10% (blue), 50% (red), 80% (purple), as denoted by vertical dashed lines in Fig. 18. Note that the mean fraction of collected interactions is given by

$$\mu(t') = 1 - F_\Lambda(t') \qquad (4)$$

for which we already have an accurate model. However, we still lack a model providing the deviation $\sigma(t')$.
We suspect that, beyond the initial level of attractiveness $X$, the post dynamics is characterised by random temporal fluctuations of the rate at which users interact with it. These fluctuations are due to time-varying popularity, the generation of new posts (which tends to decrease the attention of users on the considered post, see Sect. 4.4), and self-reinforcement effects due to users observing the engagement of other users (which can increase the interaction rate after a period of low user interest). In order to incorporate the effects of all such elements in the model, we resorted to a simple fitting of the empirical standard deviation by a 2-parameter curve. Specifically, we found that the function

$$\sigma(t') = c\, t'^{\,b}\, e^{-\sqrt{t'}} \qquad (5)$$

provides a reasonable approximation, where parameters $c$ and $b$ can be computed for each influencer, though they do not vary significantly from influencer to influencer (however, we noticed that traces on FB have larger variability than traces on IG; see Tables 2 and 3). Figure 20 shows the empirical standard deviation of traces of influencer Salvini on IG (red circles), and the best fit by the proposed function (5) (solid red curve). It also repeats the fit for the mean already shown in Fig. 17, but this time in terms of the average fraction of already received interactions (black). Indeed, what is ultimately important is to obtain a good estimate of the coefficient of variation $CV = \sigma/\mu$, which is also shown on the plot (blue). Knowing that the fraction of received interactions within time $t'$ is approximately normal, and having derived parameters $\mu(t')$ and $\sigma(t')$ as functions of (virtual) time $t'$ (for each influencer and social platform), we can now make analytical predictions of post dynamics. For example, in Table 2 we report some predictions obtained for six different influencers on IG (first column).
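Fitting such a 2-parameter deviation curve is a simple least-squares problem. The sketch below assumes a curve of the form $\sigma(t') = c\,t'^b e^{-\sqrt{t'}}$ with influencer-specific parameters $c$ and $b$ (an assumption of ours about the exact functional form); taking logs turns it into a linear model, $\log\sigma + \sqrt{t'} = \log c + b\log t'$, solvable by ordinary least squares. The data here are synthetic.

```python
import math

# Fit sigma(t') = c * t'^b * exp(-sqrt(t')) by OLS on the log-transformed model:
#   log(sigma) + sqrt(t') = log(c) + b * log(t')
def fit_sigma(ts, sigmas):
    xs = [math.log(t) for t in ts]
    ys = [math.log(s) + math.sqrt(t) for t, s in zip(ts, sigmas)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    c = math.exp(my - b * mx)
    return c, b

# Synthetic check: recover known parameters from noiseless samples
# on a log-spaced time grid from ~0.01 h to ~22 h.
c_true, b_true = 0.2, 0.5
ts = [0.01 * 1.5 ** k for k in range(20)]
sigmas = [c_true * t ** b_true * math.exp(-math.sqrt(t)) for t in ts]
c_fit, b_fit = fit_sigma(ts, sigmas)
assert abs(c_fit - c_true) < 1e-6 and abs(b_fit - b_true) < 1e-6
```

With real traces, `sigmas` would be the empirical standard deviations of the collected fraction measured at each sampling time, as in Fig. 20.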
The other columns provide, from left to right: the average fraction of interactions collected during the first 6 min, and the corresponding standard deviation; the average fraction of interactions collected during the first hour, and the associated standard deviation; the time at which we expect to see half of the total interactions, denoted as $t(50\%)$; the time at which we expect to see 80% of the total interactions, denoted as $t(80\%)$; and the maximum standard deviation over all time (denoted by $\sigma_{\max}$). We report the values observed from the collected data and the corresponding values obtained from the analytical model for each influencer. Table 3 reports similar results for six influencers on FB. Besides noticing the good fit of the model in all cases, it is interesting to see that some numbers are surprisingly similar across different influencers and platforms: roughly 4% of all interactions are collected within 6 min of post creation, and roughly 25% after one hour; on IG, 50% (80%) of all interactions are collected after roughly 3.3 (12.4) hours; on FB, these figures are a bit larger: 50% (80%) of all interactions are received after roughly 3.8 (17) hours. The similarity of results for the mean fraction of collected interactions is further illustrated in Fig. 21, showing on the same plot the curves $\mu(t') = 1 - F_\Lambda(t')$ computed analytically for all 12 influencers considered in Tables 2 and 3. Lastly, as anticipated, it is interesting to observe (last column of the tables) that the maximum standard deviation of the traces is larger on FB than on IG, by a factor of about 1.5.

--- Model exploitation: post popularity prediction

One of the most interesting applications of our model is the early prediction of post popularity.
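Quantities such as $t(50\%)$ and $t(80\%)$ follow from the model by inverting $\mu(t') = 1 - F_\Lambda(t')$, which is monotone, so simple bisection suffices. A minimal sketch, reusing the single-influencer fitted values $m = 0.83$, $a = 0.41$, $\theta = 0.7$ from the text (so the resulting times are for that influencer, not the cross-influencer averages in the tables):

```python
import math

m, a, theta = 0.83, 0.41, 0.7  # values fitted in the text for one influencer

def laplace_F(s):
    if s == 0.0:
        return 1.0
    return m * (1.0 - math.exp(-a * s)) / (a * s) + (1.0 - m) * theta / (s + theta)

def time_to_fraction(p, lo=1e-6, hi=1000.0):
    """Smallest virtual time t' with mu(t') = 1 - F_Lambda(t') >= p, by bisection.

    Valid because F_Lambda is a Laplace transform of a density, hence
    monotonically decreasing in its argument.
    """
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if 1.0 - laplace_F(mid) < p:
            lo = mid
        else:
            hi = mid
    return hi

t50 = time_to_fraction(0.50)  # expected time to half of all interactions
t80 = time_to_fraction(0.80)  # expected time to 80% of all interactions
assert t50 < t80  # reaching 80% necessarily takes longer than reaching 50%
```

For these parameter values the half-life comes out close to the ~3.3 h figure reported for IG, which is a useful consistency check on the fitted mixture.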
For example, the social platform can use this prediction to sell advertisement slots to be shown in proximity of the post, and predicting the number of views that the post will receive in the future is crucial to bid a price for the available ad slots. Suppose we measure the number of interactions $n(t)$ received by a post, published at time $T_0$ by a given influencer, after a period of duration $t$. What can we infer about the total number of interactions $n_\infty$ that the post will eventually receive? Our analysis suggests the following approach. First, suppose we know the number of followers $F(0)$ at the post creation time. Moreover, assume that analysis of the history of post popularity of the given influencer has allowed us to estimate parameters $\mu_X, \sigma_X$ of the log-normal distribution of the intrinsic level of attractiveness $X$ (see Sect. 4.3). Then the unconditioned distribution of the random variable $N_\infty$ is $\mathrm{Lognormal}(\mu_X + \log F(0),\, \sigma_X^2)$. A standard maximum a posteriori (MAP) estimation allows us to compute a prediction $\hat{n}_\infty$ of the total number of interactions that the post will receive, given observation $n(t)$. First, we transform the observation $n(t)$ into virtual time, obtaining $n(t')$, to remove the effect of the daily variation of user activity. This is an important step: for example, if a post is published late at night, it might eventually become popular even if it receives just a few interactions during, say, the first hour. We assume that analysis of the history of the post dynamics of the given influencer has allowed us to fit the parameters of functions $\mu(t')$ (4) and $\sigma(t')$ (5). Then the conditioned distribution of the random variable $N(t') \mid n_\infty$ is normal $\mathcal{N}(n_\infty \mu(t'),\, n_\infty^2 \sigma(t')^2)$.
A standard application of Bayes' theorem provides the posterior distribution of $N_\infty \mid n(t')$:

$$\mathbb{P}[N_\infty = n_\infty \mid n(t')] = \frac{\mathbb{P}[N(t') = n(t') \mid n_\infty]\;\mathbb{P}[n_\infty]}{\sum_{n_\infty} \mathbb{P}[N(t') = n(t') \mid n_\infty]\;\mathbb{P}[n_\infty]} \qquad (6)$$

and our MAP prediction will be its mode:

$$\hat{n}_\infty(n(t')) = \arg\max_{n_\infty} \mathbb{P}[N_\infty = n_\infty \mid n(t')]. \qquad (7)$$

Note that the above analysis also provides an estimate of the error incurred by our prediction, since we have the entire posterior distribution of $N_\infty \mid n(t')$. As an example, Fig. 22 shows the MAP prediction (blue circles) of $N_\infty$ for 40 posts published by influencer Salvini on IG, given one observation $n(t')$ for each post, where $t'$ is shown on the horizontal axis. Red squares denote the true values of $N_\infty$, while boxplots provide a graphical representation of the posterior distribution of $N_\infty \mid n(t')$ computed analytically. We can observe from these sampled posts that the larger the time $t'$ at which we observe the number of interactions, the smaller the prediction error on the total number of interactions, as expected. However, Fig. 22 suggests that accurate predictions are already feasible a short time after post creation.

[Fig. 22: Boxplots of a posteriori distributions of $N_\infty \mid n(t')$, predicted values $\hat{n}_\infty$ (circles), and actual values $n_\infty$ (squares), for 40 different posts of influencer Salvini on IG, starting from single observations $n(t')$, where $t'$ is reported on the x axis.]

--- Comparison with baseline model

To better show the goodness of our approach, we compare our predictions with those obtained by a baseline model. In this baseline model, followers of a given influencer independently access the platform according to a Poisson process of rate $\lambda_a$, where $\lambda_a$ is the same for all users. Moreover, suppose that the decision to interact with a given post is made independently of the access time to the platform, and independently from user to user.
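The MAP computation in (6)-(7) reduces to a one-dimensional search over $n_\infty$, since the prior is log-normal and the likelihood is normal. Below is a minimal sketch with a discretised grid; all numeric values (prior parameters, observed count, $\mu(t')$, $\sigma(t')$) are hypothetical placeholders, not fitted values from the paper.

```python
import math

# Hypothetical inputs: in practice mu_X, sigma_X come from the influencer's history,
# F0 is the follower count at post creation, and mu_t, sigma_t are the model values
# mu(t'), sigma(t') at the (virtual) observation time.
mu_X, sigma_X, F0 = -6.0, 0.8, 1_000_000  # prior: N_inf ~ Lognormal(mu_X + log F0, sigma_X^2)
mu_t, sigma_t = 0.25, 0.08                # model mean/std of the collected fraction at t'
n_obs = 700                               # interactions observed at virtual time t'

def log_prior(n):
    # Log-normal prior density of N_inf (up to additive constants).
    z = (math.log(n) - (mu_X + math.log(F0))) / sigma_X
    return -0.5 * z * z - math.log(n)

def log_lik(n):
    # Normal likelihood N(n * mu_t, (n * sigma_t)^2) evaluated at n_obs (up to constants).
    z = (n_obs - n * mu_t) / (n * sigma_t)
    return -0.5 * z * z - math.log(n)

# MAP over a discrete grid of candidate final counts (mode of the posterior).
grid = range(100, 20000, 10)
n_hat = max(grid, key=lambda n: log_prior(n) + log_lik(n))
```

Since the full posterior is available on the grid, the same loop can also produce credible intervals, as in the boxplots of Fig. 22.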
Consequently, followers who decide to interact with a given post will do so after an amount of time distributed according to an exponential distribution of parameter $\lambda^*$, where $\lambda^*$ is the same for all users interacting with a given post. For a fair comparison with our model, we will assume that the baseline model shares the same information about the history of posts of a given influencer. In particular, the distribution of the final number of interactions received by a post is known, modelled by a fitted $\mathrm{Lognormal}(\mu_X + \log F(0),\, \sigma_X^2)$, where $F(0)$ is the number of followers at the time of the post creation (see Sect. 5.2). Moreover, we assume that the detailed temporal history of interactions allows the baseline model to fit its single parameter $\lambda^*$ against the trace of all posts generated by a given influencer (i.e., $\lambda^*$ is adapted to each specific influencer). Finally, again for the sake of a fair comparison, the baseline model is applied to the temporal evolution of interactions transformed into virtual time to remove daily effects. One can easily see that our model subsumes the above baseline model when $f_\Lambda(\lambda) = \delta(\lambda - \lambda^*)$, where $\delta(\cdot)$ is Dirac's delta function. Its Laplace transform $F_\Lambda(s) = e^{-s\lambda^*}$ can then be fitted against the normalised traces of the residual number of interactions, as illustrated in Fig. 18. Following the same MAP framework introduced before, let $n_\infty$ be an instance of the final number of interactions received by a post, and $n(t')$ be the number of interactions observed after virtual time $t'$ since post creation. Notice that, according to the baseline model, the conditioned distribution of the random variable $N(t') \mid n_\infty$, being the sum of $n_\infty$ independent Bernoulli random variables of mean $q(t')$, can be approximated by a normal $\mathcal{N}(n_\infty q(t'),\, n_\infty q(t')(1 - q(t')))$, where $q(t') = 1 - e^{-\lambda^* t'}$. Therefore, we can apply (6) to the baseline model as well and compute a MAP prediction for the final number of interactions according to (7).
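The gap between the two models shows up already at the level of the mean fraction curve. A small sketch, calibrating the baseline's single rate $\lambda^*$ so that both models agree on the fraction collected after one hour (the mixture parameters are the fitted values quoted earlier for one influencer):

```python
import math

m, a, theta = 0.83, 0.41, 0.7  # mixture parameters fitted in the text

def mu_mixture(t):
    """Mean collected fraction 1 - F_Lambda(t') under the heterogeneous mixture model."""
    if t == 0.0:
        return 0.0
    F = m * (1.0 - math.exp(-a * t)) / (a * t) + (1.0 - m) * theta / (t + theta)
    return 1.0 - F

# Calibrate the baseline so both models agree on the fraction collected after 1 hour.
lam_star = -math.log(1.0 - mu_mixture(1.0))

def mu_baseline(t):
    """Mean collected fraction q(t') = 1 - exp(-lambda* t') under the baseline model."""
    return 1.0 - math.exp(-lam_star * t)

# With a single homogeneous rate, the baseline overshoots at later times:
# the mixture's mass of slow users gives it a much heavier tail.
assert abs(mu_baseline(1.0) - mu_mixture(1.0)) < 1e-12
assert mu_baseline(12.0) > mu_mixture(12.0)
```

A single exponential cannot match both the fast early rise and the slow late tail at once, which is the intuition behind the baseline's larger early-prediction errors.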
Figure 23 shows the average relative error of the baseline model in the prediction of $N_\infty \mid n(t')$, for influencer Salvini on IG, considering 1,000 posts for each bin; i.e., for each bin in Fig. 23 we have averaged the relative error $\left|\frac{\hat{n}_\infty - n_\infty}{n_\infty}\right|$ of 1,000 different posts (for which an observation $n(t')$ is available in the dataset such that $t'$ falls in the bin). In contrast, Fig. 24 shows the corresponding results obtained with our approach. We observe a significant reduction in the prediction error obtained by our model with respect to the baseline model, especially for smaller values of the measurement time $t'$, suggesting that our approach is significantly better at performing early prediction of the final number of interactions. As expected, the relative error of the prediction diminishes over time. It is remarkable that a relative error of only 48% is incurred if an observation is available just between 0.01 h (36 s) and 0.02 h (72 s) after post creation, i.e., a very early prediction. After 6 min (0.1 h), the error reduces to about 28%, and after about 1 h it reduces to about 16%. Similar results, not shown here for the sake of brevity, have been obtained for the other considered influencers. The superiority of our approach is essentially due to the fact that the baseline model describes a homogeneous population of followers through a single parameter ($\lambda^*$). In contrast, our model employs multiple parameters to describe heterogeneous followers, accounting for the fact that different users interact with posts more or less promptly, depending on the frequency with which they access the platform, which is highly diverse from user to user.

--- Discussion and conclusion

In this work, we studied the temporal dynamics of Facebook and Instagram over five years, focusing on top Italian influencers.
After a thorough analysis of real-world data, we characterised several interesting features of the above OSNs, including: (i) the influencers' and followers' activity over time, (ii) the posts' inter-arrival time and the post lifetime, and (iii) the arrival process of user interactions with a given post. The insights gained from our dataset analysis allowed us to develop a mathematically tractable, yet accurate, model describing the temporal evolution of the number of interactions collected by a post. We validated our model against real traces for both Facebook and Instagram. In particular, we demonstrated our model's ability to perform early prediction of post popularity and the large improvements with respect to a simpler baseline. The existence of many interesting possible applications that may profit from early popularity predictions, such as anomaly detection and price bidding of ad slots, encourages further analytical efforts in this direction to incorporate effects not yet captured by the proposed methodology.

--- Appendix 1. Shares, comments, and reactions on FB

The interactions with an influencer's post can be measured in different ways: in this paper, we focus on the total number of likes/reactions, but this metric can be complemented, or substituted, by the number of shares of the post and the number of comments (see Table 1). On FB, a user can share an influencer's post, and this action will appear as a new post from the user. In Fig. 14 we presented the CDFs of the total number of likes for two influencers on IG, and we showed the goodness of fit with a log-normal distribution. Here, we repeat the analysis by also considering the number of shares and comments as different metrics for interactions on Facebook. Figures 25 and 26 depict these different metrics, along with their fittings, for two of the studied influencers on FB, namely Jackal, a comedian group, and Laura Pausini, a singer (see Table 3).
As can be seen, most interactions consist of reactions, while the number of comments and shares is at least an order of magnitude smaller. However, the behaviour of these metrics is again well approximated by a log-normal distribution. Regarding the first influencer (Fig. 25), the Kolmogorov distance of the log-normal fit is 0.04 for reactions, 0.06 for shares, and 0.07 for comments. For the second example influencer (Fig. 26), the Kolmogorov distance is 0.05 for reactions, 0.06 for shares, and 0.06 for comments. Similarly to Fig. 15, we also report the normalised total number of interactions, dividing them by the number of followers at each post creation timestamp. Again, we analysed reactions, shares, and comments. The results are presented in Figs. 27 and 28, respectively for influencer Jackal and influencer Laura Pausini. As for the examples on IG, the log-normal fit is as good (or even better) when considering this normalised number.

[Fig. 25: CDF of the total number of interactions, measured as reactions, shares, and comments, for influencer Jackal (comedians) on FB at the end of the posts' lifetime, along with their log-normal fit. Fig. 26: the same for influencer Laura Pausini (singer). Fig. 27: CDF of the total number of interactions per follower for influencer Jackal on FB, along with the log-normal fit. Fig. 28: the same for influencer Laura Pausini.]
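The Kolmogorov distances reported above can be computed without any special tooling: fit the log-normal by moment-matching on the logs and take the sup distance between the empirical and fitted CDFs. A sketch on synthetic data (the counts below are simulated for illustration, not the real traces):

```python
import math
import random

# Synthetic "interaction counts" drawn from a log-normal, standing in for real traces.
random.seed(42)
counts = [random.lognormvariate(8.0, 1.2) for _ in range(2000)]

# Fit the log-normal by moment-matching on the log-transformed sample.
logs = [math.log(x) for x in counts]
mu = sum(logs) / len(logs)
sigma = math.sqrt(sum((v - mu) ** 2 for v in logs) / len(logs))

def lognorm_cdf(x):
    """CDF of the fitted log-normal, via the error function."""
    return 0.5 * (1.0 + math.erf((math.log(x) - mu) / (sigma * math.sqrt(2.0))))

xs = sorted(counts)
n = len(xs)
# Kolmogorov (sup) distance: check both sides of each empirical CDF step.
ks = max(max(abs((i + 1) / n - lognorm_cdf(x)), abs(i / n - lognorm_cdf(x)))
         for i, x in enumerate(xs))
# For a well-specified model the distance is small, comparable to the ~0.04-0.07
# values quoted above for real FB traces.
assert ks < 0.05
```

The same computation applied to the per-follower normalised counts would reproduce the checks behind Figs. 27 and 28.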
In conclusion, shares and comments follow a distribution shape similar to that of reactions, and they are well fitted by a log-normal distribution, even though their absolute number is smaller.

--- Declarations

Conflict of interest The authors have no competing interests to declare that are relevant to the content of this article.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
A relevant fraction of human interactions occurs on online social networks. In this context, the freshness of content plays an important role, with content popularity rapidly vanishing over time. We therefore investigate how influencers' generated content (i.e., posts) attracts interactions, measured by the number of likes or reactions. We analyse the activity of influencers and followers over more than 5 years, focusing on two popular social networks: Facebook and Instagram, including more than 13 billion interactions and about 4 million posts. We investigate the influencers' and followers' behaviour over time, characterising the arrival process of interactions during the lifetime of posts, which are typically short-lived. After finding the factors playing a crucial role in the post popularity dynamics, we propose an analytical model for the user interactions. We tune the parameters of the model based on the past behaviour observed for each given influencer, discovering that fitted parameters are pretty similar across different influencers and social networks. We validate our model using experimental data and effectively apply the model to perform early prediction of post popularity, showing considerable improvements over a simpler baseline.
Introduction

The adverse effects of high population growth are numerous. Pressure on land and natural resources, accelerating climate change, persistent food insecurity, insufficient availability and quality of public infrastructure, and increased infant and maternal mortality are amongst the major sustainable development challenges aggravated by large population numbers and high fertility rates (Drechsel et al., 2001; Shi, 2003; Alexandratos, 2005; Sullivan, 2020). In Asia and Latin America the demographic transition - the process in which countries evolve from high mortality and fertility rates to low mortality and fertility rates - started in the 1950s, with a drop in total fertility rates from 6 to 2 births per woman between 1950 and 2015 (United Nations, 2019; World Bank, 2020). Sub-Saharan Africa's (SSA) demographic transition, on the other hand, started significantly later and is proceeding at a slower pace, leading to a total fertility rate still as high as 5 births per woman in 2015 (Bongaarts, 2008, 2017; Schoumaker, 2019). As a consequence, SSA's population numbers are projected to continue to grow for the next several decades. This will result in a doubling of the population between 2010 and 2050 - in some countries possibly even a tripling - and ultimately lead to a more than quadrupling of its population over the short period of one century (Ezeh, Bongaarts and Mberu, 2012; United Nations, 2019). In the last decades, scholars have intensively studied the drivers behind reductions in total fertility rates (TFR) [1]. Household income, parents' education and the changing position of women within the household and society have been put forward as major drivers, but the mechanisms are not well understood. The quantity-quality (Q-Q) trade-off theory assumes that a lower TFR will improve the quality of child-raising (e.g. in terms of education, nutrition and health care).
However, empirical evidence does not always support this, as it is difficult to disentangle the factors that jointly determine the quantity and quality aspects of child-raising. Investigating preferences for child-raising might shed more light on the mechanisms. This holds especially for rural SSA, where actual fertility might diverge from desired fertility because of insufficient access to family planning. Yet, few studies investigate parents' preferences for child-raising, mainly due to the difficulty of accurately measuring preferences and the trade-offs that parents are willing to make. In this study, we use a discrete choice experiment (DCE) to assess men's and women's preferences for child-raising in rural Senegal and Uganda. To our knowledge, we are the first to empirically investigate Q-Q trade-offs in fertility decisions from a preferences perspective and to apply a DCE method within the framework of the Q-Q trade-off theory. Respondents are asked to choose between different hypothetical scenarios, which describe the number of children and how they would be raised in terms of education, health care and nutrition. Using mixed logit models and interaction terms between the attributes, we quantitatively evaluate the trade-offs that people make in their choices. In addition, we test whether preferences for fertility rates and the Q-Q trade-off differ according to household poverty status, parental education and gender, thereby contributing to the literature on the drivers behind the demographic transition. Understanding the child-raising preferences of households in SSA is important from both a theoretical perspective, to understand the factors influencing the demographic transition, and from a policy perspective, to develop more effective family-planning programs and policies.
The case-study regions - rural Senegal and rural Uganda - are relevant, as both countries suffer from high TFR (4.7 and 5.1 for Senegal and Uganda respectively in 2017), therefore remaining in the pre-demographic dividend [2] phase. Moreover, both countries experience low secondary completion rates, a high prevalence of undernourishment, child mortality, and other problems connected to high fertility rates (World Bank, 2020).

[1] The number of childbirths a woman would have if she were to live to the end of her reproductive life and bear children in accordance with the age-specific fertility rates of the specified year.

[2] The demographic dividend corresponds to a period in a country's demographic transition when the ratio of working-age population to young dependents increases rapidly due to a fast decline in the country's fertility. This leads to a window of opportunity for rapid economic growth, as there are more people in the labour force and fewer people to support.

--- Literature Review

The demographic transition theory describes the evolution in society from high mortality and fertility rates to low mortality and fertility rates. The driving forces behind the demographic transition, and especially those that reduce fertility, continue to be a source of discussion among demographers and economists. Three main models are put forward: economic and investment models, risk and mortality models, and cultural transmission models (Shenk et al., 2013). Economic and investment models mainly focus on the inverse correlation between income and population growth. Risk and mortality models consider fertility reduction to be the logical consequence of the reduction in infant mortality. When the chances of children surviving the first five dangerous years are higher, parents will reduce their fertility accordingly (Bousmah, 2017).
Cultural transmission models explain the demographic transition through societies' perception of children and preferences for ideal family size, the transformation of social structures and networks, and prestige-seeking behaviour (Shenk et al., 2013). In this paper, we combine insights from economic and investment models - and more specifically the quantity-quality trade-off theory - with insights from cultural transmission models, which emphasise the importance of the preferences of individuals, households and societies with regard to fertility. The Q-Q trade-off theory, founded by Becker and colleagues (Becker, 1960; Becker and Lewis, 1973; Becker and Tomes, 1976), relies on the premise that the quantity and quality of children are related, because the shadow price of quantity (the cost of having a certain number of children) depends on the quality of child-raising of these children (sending children to school, providing them nutritious food, etc. is costly), and vice versa, the shadow price of quality of child-raising depends on the quantity of children a household has to raise (Becker and Lewis, 1973). The theory explains why fertility falls when incomes rise, and why a negative correlation might exist between the quantity and quality of child-raising (Rosenzweig and Wolpin, 1980). Some models, however, assume that children do not only imply a cost to the household, because of mutual help between siblings in extended families (Baland et al., 2016), children helping with productive and reproductive tasks in the household (Mueller, 1984; Marteleto and de Souza, 2013), or economies of scale in child-raising (Guang and Van Wey, 1999; Steelman et al., 2002; Qian, 2009; Rosenzweig and Zhang, 2009). The empirical literature demonstrating the existence of the Q-Q trade-off remains inconclusive (Alidou and Verpoorten, 2019).
The main bottleneck in studying the Q-Q trade-off empirically lies in removing confounding factors that jointly determine the number of children and the quality of their upbringing - parents' and society's characteristics such as education and income level, labour market opportunities, social norms, pension schemes or education policies. Many studies rely on semi-natural experiments such as unanticipated twin births (Black, Beegle and Christiaensen, 2019), or mothers' employment (Jensen, 2012; Van den Broeck and Maertens, 2015; Tiruneh et al., 2016) and autonomy (Abadian, 1996). The literature studying fertility preferences and their socioeconomic drivers is much less comprehensive, however, especially for SSA. This gap could to some extent reflect data limitations, as preferences are difficult to measure, and the usual method - asking for a respondent's desired number of children - has been proven to overestimate respondents' actual wanted fertility (Bongaarts, 1990). But it is also linked to the general idea of unreliability and variability of stated fertility preferences (Trinitapoli and Yeatman, 2018; Bhrolcháin and Beaujouan, 2019). Still, several studies show that desired fertility rates can be highly predictive of later fertility outcomes (Kodzi, Johnson and Casterline, 2010; Günther and Harttgen, 2016; Cleland, Machiyama and Casterline, 2020; Yeatman, Trinitapoli and Garver, 2020).
The few studies that analyse fertility preferences show that, on average, fertility desires in SSA are markedly higher than in other parts of the world (Casterline and Agyei-Mensah, 2017), but that desired family size is lower on average for women (Bankole and Singh, 1998; Westoff, 2010; Matovu et al., 2017; Bashir and Guzzo, 2021), for better educated individuals (Westoff, 2010; Bongaarts, 2011; Muhoza, 2019), for households with higher economic welfare (Muhoza, 2019), and in contexts of lower population density and lower child mortality (Headey and Jayne, 2014; Muhoza, 2019).

--- Background And Data

We focus on two regions: the Mount Elgon region in Eastern Uganda and the Saint-Louis region in North-Western Senegal. Both regions are rural areas with a strong connection to a secondary urban centre (Saint-Louis town in Senegal and Mbale in Uganda), and are characterised by their high dependence on agriculture, high poverty rates and weak infrastructure development. The Mount Elgon region is located in the humid tropics and is a densely populated mountainous area dominated by smallholder farmers and a commercially-oriented coffee-banana farming system. The Saint-Louis region is located in the semi-arid tropics and is a diverse area including agricultural communities focussing on small-scale commercial horticulture and rice production, livestock communities relying on rather extensive and semi-nomadic grazing systems, and large-scale horticultural companies hiring labourers from nearby communities. Population density is higher in the Mount Elgon region (838 inhabitants/km²) than in the Saint-Louis region (49 inhabitants/km²), but both regions are characterised by high population growth rates, respectively 3.9% and 3.4% in the Mount Elgon and Saint-Louis regions (ANSD, 2015; Uganda Bureau of Statistics, 2017).
The Wolof (41.7% of the rural population in Senegal) and Peulh (30.4% of the rural population in Senegal) are the two most prevalent ethnicities in the Saint-Louis region (ANSD, 2015). Both ethnicities have traditional hierarchical customs (Creevey, 1996) that attach major importance to having a large family, and polygamy is common. As Islam is the dominant religion in the research area (96.5% of the rural population in Senegal is Muslim (ANSD, 2015)), everyday life is ruled by Islamic and customary patrilineal laws [3] (Lambert and Rossi, 2016). Both the introduction of Islam and the imposition of the French colonial system [4], which incentivized the growing of cash-crops (mainly peanuts) on large monocrop fields using animal-drawn ploughs, lowered the position of women in rural Senegalese society and further increased the value of having a large household (Creevey, 1996; Alesina, Giuliano and Nunn, 2013). The Mount Elgon region in Uganda also has a patrilineal tradition, and is mainly populated by the Bagisu ethnicity (84.4% of sampled households), with a minority of people from the Sabiny ethnicity (14.4% of sampled households). The Bagisu and Sabiny attach a high value to children, and a woman's worth is strongly connected to the number of children she bears (Makwa, 2012; Kwagala, 2013). There is religious diversity in the research area (unrelated to ethnicity), composed of Protestants (46.8% of sampled households), Catholics (29.4% of sampled households) and born-again Christians (16.5% of sampled households). Polygamous marriages are common, but wives do not necessarily live under the same roof. In theory, Bagisu women are under the control of their father or husband. However, a man's status depends strongly on his marital status (more so than a woman's), livelihoods in the area are highly dependent on women's labour, and divorce and remarriage are fairly easy for Bagisu women.
Women's independence might therefore be larger than expected (Jackson, 2013). British colonial rule [5] (forcibly) introduced the growing of cash-crops, mainly coffee, in the region (Petursson and Vedeld, 2015). However, as the growing of coffee is highly labour-intensive and animal-drawn ploughs are of little use on these steep cultivations, the shift in cropping systems did not significantly alter the position of women in Bagisu society (Alesina, Giuliano and Nunn, 2013). While in both countries family planning programs have been high on the political agenda since 2010, contraceptive uptake remains low, especially in rural areas. In 2011, Senegal signed the 'Ouagadougou Call to Action' and thereby committed to doubling the budget for family planning programs (Sidibe, Kadetz and Hesketh, 2020). This resulted in the National Action Plan for Family Planning and a government commitment to increase contraceptive uptake to 45% by 2020 (FP2020, 2020). Still, recent figures show that the uptake of modern contraceptives among married women remains as low as 28% (Track20 Project, 2020a). In 2012, Uganda committed to providing universal access to family planning, to reducing the unmet need for family planning from 40% to 10% by 2022, and to increasing the modern contraceptive prevalence rate to 50% by 2020, resulting in the launch of the Family Planning Costed Implementation Plan (FP2020, 2021). Modern contraceptive uptake among married women is currently 39% in Uganda (Track20 Project, 2020b). We use data from a quantitative household survey and a discrete choice experiment. Survey data were collected in 2016, from 464 and 758 households in Senegal and Uganda respectively. Households were sampled using a multistage [6] stratified random sampling method, with stratification based on distance to an asphalt road and wage employment status in Senegal, and on urbanization level and altitude in Uganda.
A structured quantitative questionnaire was used to collect data on household demographics, farm production, land and non-land assets, living conditions and employment. The discrete choice experiment (DCE) was implemented in 2017 with 250 households in each region, randomly selected from the household survey sample. Within a household, both husband and wife completed the DCE separately, resulting in a total sample of 1000 respondents [7]. The DCE was followed by questions on attribute non-attendance and attribute ranking, and on the respondent's current and preferred number of children. Table 1 describes the socio-economic characteristics of the sampled households and respondents. The average household size, as well as the variation in household size, is larger in the Senegal sample than in the Uganda sample, which relates to multiple generations living together in extended families in Senegal versus a single couple plus children living in nuclear households in Uganda. Households in the Senegal sample live closer to the main surfaced road than households in the Uganda sample. Polygamy is common in both regions but is slightly more prevalent in the Senegal sample. Only 7% and 16% of the respondents have completed lower secondary education in the Senegal and Uganda samples respectively - lower than the national averages of 18% for Senegal (2017 data) and 24% for Uganda (2012 data) (World Bank, 2020). Female and poor respondents are on average less likely to have completed lower secondary education than male and non-poor respondents. The majority of the sampled households are poor [8], with a higher incidence of poverty in the Uganda sample than in the Senegal sample. Male respondents prefer, and actually have, on average a higher number of children than female respondents.
There is a gap between respondents' preferred number of children (9.5 on average) and their actual number of children (4.6 on average) in the Senegal sample, which could relate to uncompleted fertility or undesired infertility. In Uganda, respondents' preferred number of children (5.8 on average) is below their actual number of children (6.9 on average), which could relate to an unmet need for family planning and contraceptives. Secondary educated respondents actually have on average the same number of children as non-secondary educated respondents. However, only secondary educated respondents are able to realize their preferred household size, as they do not experience a significant gap between their preferred number of children and the number of children they actually have. Respondents who do not hold a secondary education diploma would prefer to have a higher number of children than the number they actually have [9].

[Table 1 near here]

[3] The Wolof's inheritance and family relationships were partially determined by matrilineal descent patterns, while the Peulh's traditions were highly patrilineal, already before the arrival of Islam (Creevey, 1996). [4] The first French settlement in (nowadays) Senegal dates back to the 17th century with the establishment of Saint-Louis town. The French colonial period in Senegal began in 1884 and ended in 1960 with the independence of Senegal (Bawa, 2013). [5] British colonial rule in the study region began in 1904 and ended in 1962 with the independence of Uganda (Wanyonyi, 2018). [6] Two-stage sampling design in Senegal and three-stage sampling design in Uganda.
[7] The original Ugandan sample consisted of 265 couples, of which we dropped two because of outlying data and 13 because the couples' mean age was older than 80 years. [8] A household is defined as poor when per adult equivalent total household income falls below the international moderate poverty line of $3.1 per person per day. [9] Difference significant at the 1% significance level.

--- Methods

--- Design of DCE

Discrete choice experiments (DCEs) provide a link between observed behaviour and economic theory. They are based on Lancaster's consumer theory, which posits that consumer preferences are shaped by the individual characteristics a good or service possesses and not by the good as a whole (Lancaster, 1966). During a DCE, a respondent is asked to choose between hypothetical scenarios that consist of attributes with varying attribute levels. In our study, the alternatives describe different situations of child-raising, with, in line with the Q-Q trade-off theory, specific emphasis on the quantity of children and the quality of child-raising. As the validity of a DCE largely depends on the choice of the attributes (Mangham, Hanson and McPake, 2009), a detailed literature review and multiple key informant interviews and focus group discussions in Senegal and Uganda were implemented prior to the design of the experiment to identify and select attributes that represent the quality of child-raising. In total we specify four attributes, the first referring to the quantity aspect of child-raising, and the next three to the quality aspect, covering education, nutrition and health care (Table 2).

[Table 2 near here]

The first attribute specifies the number of children and consists of six levels. The range of 1 to 12 children is based on the observed number of children in the study regions, as derived from the quantitative surveys. The second attribute is defined as the share of children from the household that can complete lower secondary education.
The attribute contains three levels: none, half and all of the children. In both countries lower secondary education comprises 4 years (children aged 12 to 15 in Senegal, and 13 to 16 in Uganda), after which a certificate of completion is awarded. In Uganda, we consider private schools that charge high tuition fees, as focus group discussions pointed towards a very low quality of public schooling. In Senegal, we consider public schooling, as private schools are not available in rural areas. While no official tuition fees are charged in Senegalese public schools, costs remain high as children have to commute or stay overnight, books and school supplies are expensive, and informal "maintenance fees" are charged. The third attribute is defined as the number of days a week a household can eat a complete meal, including carbohydrates, proteins and vegetables, for the main meal of the day. On the other days an incomplete meal is consumed (e.g. only carbohydrates). The attribute consists of three levels: zero, four and seven days a week. In Senegal, a well-known local dish, "thieboudienne", which comprises rice, fish and vegetables, is taken as an example. In Uganda, a complete meal is defined as containing posho (maize meal) or matoke (green bananas), beans and green leafy vegetables. The fourth attribute is defined as the quality of the health care institute the household can consult when a child has a severe fever. The attribute contains three levels: low-quality, medium-quality and high-quality. A low-quality health care institute is defined as a small health care facility, located in almost every rural community, with only a health worker or nurse present. Basic treatments can be performed using basic equipment (such as disinfecting wounds, applying bandages, and treating common pathologies such as acute diarrhoea or malaria), but diseases cannot be diagnosed.
A medium-quality health care institute is operated by a larger staff of at least one nurse, one midwife and one assistant nurse. Medical equipment is more advanced and basic disease diagnosis is possible. In a high-quality health care facility, at least one doctor is stationed, often complemented by different specialists. Diagnosis and treatment of more complex or uncommon diseases are possible. The three quality attributes are similar in both countries but are adapted to the local context (e.g. in Senegal low-quality health care is a "case de santé", while in Uganda it is a "health center II facility"). We use choice sets with three alternatives and visual illustrations to assist illiterate people with making their choice (Figure A1 in appendix). We present 12 choice cards to each respondent and randomize the order to avoid learning bias. Hence, we obtain 36,000 observations in total. As information is missing on one choice card for two respondents, we retain 35,994 observations for the DCE analysis. We use a Bayesian design, which accounts for uncertainties around the true values of the a priori information on the parameters (Kessels et al., 2008). Using the D-efficiency criterion, the determinant of the variance-covariance matrix of the parameter estimators is minimized and the efficiency of the design is increased (Kessels, Goos and Vandebroek, 2006). As we hypothesize that respondents have positive preferences for the quality attributes, we use small positive priors for these levels. We hypothesize a positive but diminishing preference for the number of children, so priors are higher for 4 and 6 children, and lower for 1, 2, 9 and 12 children. While the inclusion of an opt-out or status quo option is common practice in DCEs as it increases the realism of the choice task (Carson et al., 1994; Louviere, Hensher and Swait, 2000; Vanermen et al., 2021), we did not include an opt-out or status quo option in our choice experiment.
This means that respondents are forced to make a choice between the three proposed scenarios. The decision not to include an opt-out option was guided by the fact that an opt-out option reduces the efficiency of the experiment because of a higher number of no-choices (Brazell et al., 2006; Veldwijk et al., 2014), and because the respondents are confronted with such forced choices in daily life, without the possibility to opt out if characteristics do not match their preferences. The application of a forced choice structure entails some consequences for the data analysis and interpretation of the results. As research shows that individual willingness to pay differs between forced and unforced choice sets (Penn, Hu and Cox, 2019), we do not calculate welfare estimates based on the estimated coefficients. Moreover, we need to remain cautious when interpreting the results. The experiment was introduced to both spouses together. The attributes were explained in detail and the hypothetical character of the experiment was emphasized (i.e. that the proposed household should be taken as given, not taking into account possible economic, social or physical limitations, or the actual composition of the household). After the introduction, the spouses were interviewed separately by an enumerator of the same gender. Each experiment started with a test card, to check whether the respondent understood the rationale of the study [10].

--- Analysis of DCE

Discrete choice modelling is embedded in the random utility theory framework and assumes that respondents choose the alternative that yields the highest utility level from each choice set (Louviere, Hensher and Swait, 2000).
Following this framework, we separate the utility respondent i derives from alternative j (U_ij) into an observable deterministic component (V_ij) - linearly depending on the attributes of the alternative (X_j) and individual-specific socioeconomic characteristics (Z_i) - and a stochastic component (ε_ij) capturing unobserved heterogeneity across alternatives and individuals (equation 1) (Hensher, J. M. Rose and Greene, 2005):

U_ij = V_ij + ε_ij = β'X_j + γ'Z_i + ε_ij (1)

As utility is a latent variable, we cannot directly observe the utility the respondent derives from a specific alternative. Therefore, we use the probability that respondent i chooses alternative j as an approximation, expressed in terms of a logistic distribution (Louviere, Hensher and Swait, 2000; Hensher, J. M. Rose and Greene, 2005). We use a mixed logit model (MXL), which is most common and allows preferences to be heterogeneous across respondents, and maximum likelihood estimation (Train, 2003). We estimate different specifications of the MXL, treating all attributes as random and using 1500 Halton draws, to analyse respondents' fertility preferences as well as the Q-Q trade-off. A first model specification is a basic MXL, including only main effects (1). In a second specification, we add a quadratic term of the quantity attribute to control for non-linear preferences (2). In models three to six, we test whether respectively gender, education level, poverty status and country are correlated with fertility preferences by adding interaction terms with four binary variables in separate models: female respondent (3), holder of a lower secondary education diploma (4), household poverty status (5) and country (6). In a fourth model specification, we assess the Q-Q trade-off by adding interaction terms between the quantity attribute and the three quality attributes (7).
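The simulated-probability logic behind the mixed logit can be sketched in a few lines. All numbers below (attribute coding, coefficient means and standard deviations) are hypothetical placeholders, not the paper's estimates, and plain pseudo-random normal draws stand in for the 1500 Halton draws used in the actual estimation:

```python
import math
import random

random.seed(0)

# One hypothetical choice set: 3 alternatives described by
# (number of children, share completing lower secondary, complete-meal days, health-care level).
# The coding and all coefficient values below are illustrative only.
ALTERNATIVES = [
    (4, 0.5, 4, 1),
    (6, 1.0, 0, 0),
    (2, 0.0, 7, 2),
]

MU = (0.10, 0.80, 0.15, 0.40)     # assumed means of the random coefficients
SIGMA = (0.05, 0.30, 0.05, 0.20)  # assumed standard deviations

def simulated_choice_probs(alternatives, mu, sigma, n_draws=1500):
    """Approximate mixed-logit choice probabilities by averaging conditional
    logit probabilities over random draws of the coefficient vector beta."""
    totals = [0.0] * len(alternatives)
    for _ in range(n_draws):
        beta = [random.gauss(m, s) for m, s in zip(mu, sigma)]
        v = [sum(b * x for b, x in zip(beta, alt)) for alt in alternatives]
        vmax = max(v)                       # subtract max for numerical stability
        expv = [math.exp(u - vmax) for u in v]
        denom = sum(expv)
        for j, e in enumerate(expv):
            totals[j] += e / denom          # conditional logit probability
    return [t / n_draws for t in totals]

probs = simulated_choice_probs(ALTERNATIVES, MU, SIGMA)
print([round(p, 3) for p in probs])
```

In the real estimation the draws enter a simulated log-likelihood that is maximized over (mu, sigma); the sketch only shows the inner probability computation that the estimator repeats for every respondent and choice card.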
In models (8) to (11), we test whether gender, education level, poverty status and country are correlated with the Q-Q trade-off by adding double- and triple-interaction terms with these four variables.

--- Limiting Potential Bias

A DCE is a stated preference method and is inherently subject to hypothetical and social desirability bias (Menapace and Raffaelli, 2020). On the one hand, we expect these biases to be low in our DCE. The choices that the respondents are forced to make are similar to the decisions they face in daily life (i.e. do we invest in sending more children to school, or do we invest in more nutritious food). In addition, enumerators were carefully selected such that they share the same gender and ethnicity as the people they interview, with the survey taking place in the private setting of the respondents' own household and with husband and wife interviewed separately. On the other hand, DCEs are typically used to infer preferences on consumer goods or in an environmental valuation setting. Such choices can be considered more trivial or independent from the choices one previously made in life, and are less likely influenced by hypothetical or social desirability bias. For example, the hypothetical character of choosing between a conventional and an organic apple might be less influenced by the fact that you actually bought conventional apples a week ago, than the choice between a hypothetical household of two or eight children might be influenced by the fact that you actually have seven children. Moreover, buying apples in the supermarket is likely to be less of a taboo topic than a household's fertility. Enumerators were trained to emphasize the hypothetical character of the experiment, such that respondents focus on the characteristics of the household presented on the choice cards and do not take into account characteristics of their actual living conditions, or possible social, monetary or physical limitations.
Even though we tried to reduce the hypothetical bias as much as possible, complete mitigation is impossible. The remaining hypothetical bias could inflate the preference for the number of children in cases where the respondents' actual number of children is larger than the number of children they would have preferred. On the other hand, the hypothetical bias could deflate the preference for the number of children in cases where the respondents' actual number of children is lower than the number of children they would have preferred. Other studies have used a DCE to infer preferences on more difficult, unconventional or "once in a lifetime" choices, or taboo topics. We control for attribute non-attendance, as previous studies show that ignoring one or more attributes could lead to biased estimates (Hensher, J. Rose and Greene, 2005; Hensher and Greene, 2010; Hole, Kolstad and Gyrd-Hansen, 2013), by using the information from respondents' attribute non-attendance statements. We test for scale heterogeneity using the generalized multinomial logit model developed by Fiebig, Keane, Louviere and Wasi (2010). We cluster standard errors at the respondent level. The results of the stated attribute non-attendance and scale heterogeneity checks are presented in Table A1 in the appendix. The estimates of both the stated attribute non-attendance and the scale heterogeneity models are similar in magnitude, sign and significance level to the basic MXL model estimates. We therefore conclude that the MXL model estimations are not sensitive to attribute non-attendance and scale heterogeneity, and base our results and interpretation on these estimates.

[10] The test card contained three scenarios, of which one dominated the other two. If the respondent did not choose the dominant scenario, every aspect of the experiment was explained again.

--- Results

Estimation results for the first six MXL specifications are presented in Table 3.
In general, we find that respondents derive positive utility from more children as well as from the three quality attributes. The model including a quadratic term (2) shows that respondents' marginal utility derived from an additional child increases with the number of children at a diminishing rate. The number of children at which utility is maximized is 7.5. Of all the attributes related to the quality of child-raising, children's education contributes most to respondents' utility, followed by nutrition and health care. Models (3) to (6) show that interaction terms with gender, education, poverty and country dummies are significant, partially explaining preference heterogeneity, while main effects are robust to the inclusion of interaction terms. We find that women prefer fewer children and have stronger preferences for education and health care than men; respondents who completed lower secondary education have stronger preferences for education and nutrition than respondents who did not; non-poor respondents prefer fewer children and have stronger preferences for health care and nutrition than poor respondents; and Ugandan respondents have stronger preferences for education but weaker preferences for health care and nutrition than Senegalese respondents.

[Table 3 near here]

Table 4 presents the results of the Q-Q trade-off analysis. The basic model (7) shows that main effects are robust to the inclusion of interaction terms between the Children attribute and the other attributes. All interaction terms (except for the interaction with all children completing lower secondary education) have significantly negative coefficients, implying that parents are willing to have fewer children if they can raise them under better circumstances. Hence, our results point to the existence of a Q-Q trade-off in fertility preferences. The estimates indicate strong effects.
When half of the children can complete lower secondary education, compared to none of the children, respondents' marginal utility of an additional child drops from 0.13 to 0.03, a decrease of 79%. With access to high-quality health care, compared to low-quality health care, the marginal utility of an additional child drops to 0.01, a decrease of 91%. However, the interaction term between the number of children and all children completing lower secondary education is significantly positive. When all children can complete lower secondary education, compared to none of the children, respondents' marginal utility of an additional child increases from 0.13 to 0.19, an increase of 35%. The latter effect implies that respondents want to have more children if they can be assured all of them will obtain a lower secondary degree. We do not find evidence for differences in the Q-Q trade-off between men and women (8), or between non- or primary- and secondary-educated respondents (9). The only difference we find between poor and non-poor respondents (10) is that the non-poor make a trade-off between the number of children and medium-quality health care while the poor do not. While Senegalese respondents experience a Q-Q trade-off for medium- and high-quality health care, for Ugandan respondents the coefficient becomes insignificant for high-quality health care and even negative for medium-quality health care (11).

[Table 4 near here]

--- Discussion And Conclusion

The results of the choice experiment on fertility preferences and the quantity-quality trade-off point to four main findings. First, our results confirm that Sub-Saharan African households prefer to have many children - which is reflected in a utility-maximizing point around 7.5 children. This number is similar to the survey response on the preferred number of children, indicating that the choice experiment reflects people's preferences accurately.
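The reported utility-maximizing point follows mechanically from the quadratic specification: with deterministic utility V(n) = b1·n + b2·n², marginal utility b1 + 2·b2·n reaches zero at n* = -b1/(2·b2). A minimal sketch, using hypothetical coefficients chosen only so the maximizer lands at 7.5 (these are not the paper's estimates, which appear in Table 3):

```python
# Hypothetical coefficients for V(n) = b1*n + b2*n^2, chosen so the
# maximizer matches the reported 7.5 children; NOT the paper's estimates.
b1, b2 = 0.30, -0.02

# First-order condition: dV/dn = b1 + 2*b2*n = 0  =>  n* = -b1 / (2*b2)
n_star = -b1 / (2 * b2)
print(n_star)  # utility-maximizing number of children (approximately 7.5)

# A quantity-quality interaction shifts the linear term to (b1 + b_int): a
# negative interaction (a Q-Q trade-off) lowers the preferred family size,
# while a positive one (the education complementarity) raises it.
b_int = -0.10  # hypothetical negative interaction with one quality attribute
n_star_quality = -(b1 + b_int) / (2 * b2)
print(n_star_quality)  # preferred family size drops (approximately 5.0)
```

The same first-order condition explains why a quality level that makes the quantity coefficient effectively negative, as reported for medium-quality health care in Uganda, implies a corner solution rather than an interior optimum.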
This utility-maximizing point resulting from the choice experiment does not correspond to an economic optimum, as the costs of child-raising are not taken into account, so we cannot directly compare it with the prevailing TFR of 4.7 in Senegal or 5.1 in Uganda. Yet, it is worrying that desired fertility rates remain very high, despite many family planning campaigns in both countries. Second, we find that women prefer fewer children than men, which has been validated by other studies in many different settings, including SSA (Bankole and Singh, 1998; Westoff, 2010; Matovu et al., 2017). In addition, we find that women have stronger preferences for education and health care. This finding supports a recent experimental study in Uganda that shows that health care training for mothers has a greater impact on children's health than health care training for fathers (Nyqvist and Jayachandran, 2017).
To attenuate the adverse effects of high population growth in low-income countries and to achieve the Sustainable Development Goals, knowledge on rural fertility preferences and the existence of a quantity-quality trade-off between the number of children and child-raising quality is key. To tackle this, we implement a choice experiment in Senegal and Uganda. We include three quality and one quantity aspect of child-raising, and three socio-economic drivers of fertility, resulting in a comprehensive assessment. We show that rural households prefer to have many children, but women and non-poor respondents demonstrate a lower preference for many children than men and poor respondents. We find that the quantity-quality trade-off is a two-sided story. On the one hand, for most of the quality attributes, we confirm the existence of a trade-off. On the other hand, quantity and quality are complementary when all children in the household can attain a lower secondary school diploma. Our results imply that broadening the currently narrow focus on contraceptive uptake in family planning programs, and more specific targeting of such programs to people with low fertility preferences, could improve their effectiveness.
While Nyqvist and Jayachandran (2017) note that their results could be driven by norms about whose domain child health is, and do not necessarily reflect a stronger preference of the mother concerning her children's health, our results put their findings in another perspective. Third, we do not find proof of a negative correlation between parents' education and preferences for more children. This contradicts various existing studies (e.g. Bongaarts, 2010; Muhoza, 2019; Masuda and Yamauchi, 2020). Yet, our results are in line with a study that finds lower fertility rates among more educated women in Egypt, and attributes this to an older age at marriage, and not to changes in fertility preferences (Ali and Gurmu, 2018).
The results show that poorer households prefer to have more children, which supports other studies that find proof of higher fertility rates among the poor (Schoumaker, 2004; Gupta and Dubey, 2006; Gillespie et al., 2007). We need to note that, in our sample, socio-demographic characteristics such as education and poverty status are correlated with ethnicity. In the Senegalese sample, Wolof respondents are more likely to be secondary educated than respondents from other ethnicities (9% versus 4% respectively) [11]. In the Ugandan sample, Bagisu respondents are less likely to be secondary educated (14% versus 26% respectively) and more likely to be poor (78% versus 36% respectively) than Sabiny respondents [12]. In addition, there is a possible link between reproductive characteristics (including fertility and fertility preferences) and ethnicity (or religion), especially in rural SSA, where ethnic or religious identity continues to have major implications for social mechanisms. The empirical literature finds ambiguous evidence for the role of ethnicity in reproductive characteristics. While Kollehlon (2003) shows significant fertility differentials across different ethnicities in Nigeria, even when controlling for other socio-economic characteristics, other scholars find that the onset of parenthood (which is highly linked with fertility) and contraceptive use in Ghana can mainly be explained by socio-economic characteristics such as education and age, while ethnicity (controlling for socio-economic characteristics) does not have significant explanatory power (Addai, 1999; Takyi and Addai, 2002). Given the correlation between ethnicity and other socio-demographic characteristics in our sample and the possible link between ethnicity and fertility, it would be interesting to further unravel fertility preferences and the Q-Q trade-off by jointly controlling for multiple socio-demographic characteristics such as ethnicity, education and poverty status.
This was however not possible in our analysis due to modelling restrictions and the large number of interaction terms this requires. Fourth, our results imply that the Q-Q trade-off is a two-sided story. On the one hand, for most of the quality attributes, we find evidence of the existence of a trade-off with the quantity of children. The preference to have many children is found to decrease with access to better education, nutrition and health care. These results add a fertility preferences perspective to the many empirical studies that prove the existence of the Q-Q trade-off (Lee, 2008; Rosenzweig and Zhang, 2009; Kang, 2011; Mogstad and Wiswall, 2016; Liang and Gibson, 2018; Argys and Averett, 2019; Dumas and Lefranc, 2019). On the other hand, quantity and quality are found to be complementary to some extent. When all children in the household can attain a lower secondary school diploma, parents in our sample prefer more children. These findings could be explained by the type of human capital investments associated with the quality of child-raising. Short-run investments for more basic quality aspects, like better nutrition and health care, seem to induce parents to prefer fewer children, while investments with a higher return in the long run, like education, seem to increase parents' desired number of children. This dual finding can to some extent explain why previous studies find ambiguous effects concerning the existence of the Q-Q trade-off (Alidou and Verpoorten, 2019), and shows why a focussed analysis considering preferences and multiple aspects of child-raising can bring important nuances to the study of the Q-Q trade-off in fertility. We can deduce three important policy implications. First, our results support the rationale that resources put in the hands of women will be used more for the benefit of their children (Duflo, 2003; Schady and Rosero, 2008; Armand et al., 2020).
Second, our results imply that policies aiming at increasing access to education, free of charge or based on a general taxation system, can unintentionally result in increased household fertility, as parents no longer have to make the Q-Q trade-off. Such unintended fertility effects have been modelled by Azarnert (2010), de la Croix and Doepke (2004), Palivos and Scotese (1996) and Rosenzweig (1982), and validated empirically for Vietnam (Keng and Sheu, 2011), Mexico (Todd and Wolpin, 2006) and India (Rosenzweig, 1982). Third, family planning programs are considered an important instrument for population reduction and control, and are often centred around the availability of and awareness of contraceptives (May, 2017; Singh, Bankole and Darroch, 2017). Our results show that fertility preferences in SSA continue to be biased towards large families, but that important differences exist in fertility preferences related to gender and poverty status. A narrow focus on contraception in family planning programs might reduce the effectiveness of such programs in regions with high fertility preferences. A broader focus of family planning programs, such that changes in fertility preferences are also triggered, and more specific targeting of these programs might strengthen their effectiveness. Care is needed in generalizing our findings, as they are limited by the choice experimental method used and the case-study specificity at country and temporal level. We will discuss these limitations consecutively. First, we use a choice experiment, which is an attractive economic tool to elicit the stated preferences of respondents. It makes use, however, of hypothetical scenarios which reduce complex real-life situations such as fertility and child-raising to very basic situations described by a few attributes and attribute levels, and is therefore inherently subject to hypothetical and social desirability bias.
While the enumerators were trained to emphasize the hypothetical character of the choice experiment, focussing on the importance of considering the hypothetical household as presented without taking into account possible monetary, social or physical limitations of the actual household, we cannot be 100% sure that respondents did not consider their actual reality when choosing between scenarios. As children remain an important asset in SSA, both from a labour perspective and from an old-age support and insurance perspective (Hoddinott, 1992; Garenne, 2015; Lambert and Rossi, 2016), preferences with respect to the quantity of children could have been inflated because of these considerations, especially for (poor) households who do not have savings in other forms (monetary, land, livestock, etc.). This hypothetical bias could explain the disparity between revealed and stated preferences. Second, our findings on fertility preferences and the Q-Q trade-off are derived from specific regions in rural Uganda and Senegal. While there are some differences between preferences among Ugandan and Senegalese respondents, most preferences remain robust across the different regions. Still, care is needed in generalising results, as preferences could differ in diverging cultural, ethnic, spatial, and temporal contexts. Studies have shown that fertility preferences can be highly variable over the course of a lifetime, but we did not specifically target young people who have not yet started their child-bearing years (Kodzi, Casterline and Aglobitse, 2010; Yeatman, Sennott and Culpepper, 2013; Trinitapoli and Yeatman, 2018). Moreover, as described by Libois and Somville (2018) in Nepal, when nucleus households are strongly rooted in the extended family, as is often the case in SSA, social norms can counter the Q-Q trade-off, as households with a lower number of children may be morally required to host kin.
The analysis of fertility preferences and the Q-Q trade-off in light of these life-time dynamics and extended kinship networks could be an interesting avenue for further research. [11] Tested with a two-sided t-test. Difference significant at the 10% significance level. [12] Tested with a two-sided t-test. All differences significant at the 1% significance level. --- Declarations --- Competing Interests: The authors have no relevant financial or non-financial interests to disclose. Author Contributions: All authors contributed to the study conception and design. Material preparation, data collection and analysis were performed by Kaat Van Hoyweghen, Goedele Van den Broeck, Janne Bemelmans, and Hendrik Feyaerts. The first draft of the manuscript was written by Kaat Van Hoyweghen and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript. --- Supplementary Files The following supplementary files are associated with this preprint: RevEconHouseholdSuppl.docx, Appendix.docx
To attenuate the adverse effects of high population growth in low-income countries and to achieve the Sustainable Development Goals, knowledge on rural fertility preferences and the existence of a quantity-quality trade-off between the number of children and child-raising quality is key. To tackle this, we implement a choice experiment in Senegal and Uganda. We include three quality and one quantity aspect of child-raising, and three socio-economic drivers of fertility, resulting in a comprehensive assessment. We show that rural households prefer to have many children, but women and non-poor respondents demonstrate a lower preference for many children than men and poor respondents. We find that the quantity-quality trade-off is a two-sided story. On the one hand, for most of the quality attributes, we confirm the existence of a trade-off. On the other hand, quantity and quality are complementary when all children in the household can attain a lower secondary school diploma. Our results imply that broadening the currently narrow focus on contraceptive uptake in family planning programs, and more specific targeting of such programs to people with low fertility preferences, could improve their effectiveness.
INTRODUCTION The issue of mistrust between medical patients, on the one hand, and medical providers and professionals on the other, remains a worldwide phenomenon that is arguably growing in recent decades. This issue has taken on an extremely pernicious dimension in the form of violent retaliative acts against doctors and nurses, as well as declining levels of public trust in healthcare institutions more generally. On the international scene, the former problem is especially pronounced in China (The Lancet, 2012, 2014), whereas the latter is especially pronounced in the United States (Wolfensberger and Wrigley, 2019). With the disastrous global impact of the COVID-19 pandemic, the issue of people's attitudes toward the healthcare system and healthcare workers has become more widely important than ever. Healthcare workers have been subjected to extreme and in many cases unprecedented stressors while dealing with the pandemic (Kröger, 2020), and trust that they will be protected is a key predictor of healthcare worker motivation and well-being during a pandemic (Imai, 2020). It is therefore critical to understand and interrogate how COVID-19 has influenced or failed to influence people's prior trust in and attributions about the healthcare system and healthcare workers. The pandemic also underscores the importance of addressing this imperative from a cross-cultural perspective. Of particular importance for the present project is the fact that, despite the apparent origination of the COVID-19 outbreak in China, the spread and consequences of the virus have been more severe to date in the United States relative to China (Hua, 2020;Lo and Shi, 2020). In the current project, we hope to shed light on how the pandemic may have exacerbated cross-cultural variation in attitudes toward healthcare as a function of medical uncertainty. 
We present the first systematic evidence to date concerning differences in how people in China and the United States respond to the anxiety of medical uncertainty with compensatory psychological defense mechanisms. We adopt a cultural perspective on scapegoating (Sullivan et al., 2014), which suggests that, universally, people may react to the anxious uncertainty of loss of personal control by scapegoating-disproportionately blaming and/or aggressing against-particular viable targets. However, the viability of a target is in large part determined by cultural factors. Specifically, we expected that whereas targeted aggression against specific healthcare workers may be a culturally afforded scapegoating mechanism in China, people in the United States may be comparatively more likely to blame the healthcare system as a whole in the face of medical uncertainty. We further expected these differences in culturally afforded scapegoating to be mediated by different patterns of control-seeking in the different cultural contexts. We tested these ideas in an initial exploratory study conducted prior to the outbreak of the COVID-19 pandemic (Study 1), and then performed a confirmatory study investigating the robustness of these relationships during the pandemic (Study 2). --- SCAPEGOATING IN THE FACE OF MEDICAL UNCERTAINTY The current research examines a specific psychological mechanism that we propose contributes to violence against doctors and nurses in China, and to healthcare system distrust in the United States: namely, lack of perceived personal control on the part of patients and their relatives in situations of heightened medical uncertainty. Our present model draws on current research and theory regarding the psychological process of scapegoating as a control maintenance mechanism (Sullivan et al., 2014). 
Studies show that when people are threatened by perceptions of uncontrollability in their lives, they evince an increased tendency to attribute blame and power to enemy individuals, groups, and organizations who may be scapegoated (Rothschild et al., 2012). Cognitively and motivationally, it is reassuring to see evil in the world not as due to random, unpredictable forces, but rather as stemming from focal individuals who can be controlled and on whom one can exact retribution, or from organizations and institutions that can be politically or economically held accountable. Undergoing experiences of illness, whether one's own or that of loved ones, can be a major threat to perceived personal control. Thus, it stands to reason that in situations of medical uncertainty (e.g., a chaotic disease course, or contracting COVID-19 in the midst of a global pandemic), people will be motivated to scapegoat particular targets to which blame for the illness and its effects may be attributed.[1] However, we crucially propose that the cultural context in which individuals are immersed will influence both (a) the exact nature of the control-seeking motive they are seeking to satisfy in the uncertain situation, and (b) the nature of the target that will be afforded as most viable for blame and attendant aggression or distrust. --- CULTURAL PATHWAYS: CONTROL-SEEKING AND TRUST IN CHINA AND THE UNITED STATES Our research can be understood in terms of a cultural pathways approach, which suggests that relatively universal psychological processes, such as the motive to maintain perceived control over one's health, and to make attributions when that control is threatened, are shaped by particular cultural imperatives and affordances (Kitayama et al., 2010). We assert that different cultural patterns of control-seeking and trust in the United States and China are important in this regard. First, we emphasize the distinction between primary and secondary control-seeking.
As originally defined by Rothbaum et al. (1982), primary control-seeking refers to attempts to influence one's environment to suit the desires of the self, and is a predominant cultural imperative in more historically independent settings such as the United States. On the other hand, secondary control-seeking refers to a set of strategies for adapting the self to fit environmental requirements, and is a more common imperative in historically interdependent settings such as China (Rothbaum et al., 1982). In particular, in the healthcare context, a form of secondary control-seeking labeled vicarious control, putting trust in powerful others and authority figures to control the self's outcomes (Rothbaum et al., 1982), is of special relevance, given the fact that patients are placing their well-being in the hands of healthcare professionals. It is also critical to take into account divergent cultural patterns of trust when it comes to understanding how lay people relate to the healthcare system and workers, particularly in the context of medical uncertainty. [1] Of course, we do not argue that scapegoating is the only, or even the most prominent, defensive psychological response to medical uncertainty. But given our interest in addressing the important applied phenomena of aggression and distrust against healthcare workers and the healthcare system, it is probably one of the most important responses to understand, and hence the focus of our empirical efforts. It is also important to acknowledge that scapegoating can have many important motivations and consequences (e.g., Rothschild et al., 2012), but we focused in the present context on its control maintenance function.
In this regard, we must distinguish between different levels and types of trust, given that people's interactions with healthcare workers are of a local and interpersonal (albeit professional) nature, whereas their beliefs about the broader healthcare system represent a form of institutional or governmental trust. Generally, recent research on the cultural psychology of trust (Liu et al., 2018; Zhang et al., 2019) suggests that people in the United States and in China have relatively different patterns of trust at the interpersonal and institutional/political levels. To summarize this research cursorily, people in the United States have relatively high levels of interpersonal, but relatively low levels of institutional trust; whereas people in China tend to have more comparable levels of trust across persons and institutions. Indeed, Chinese people evidence a relatively unique, "top-down" structure of trust reflecting the centralized nature of the Chinese government, such that people tend to have high levels of trust in the overall governmental system, but lower levels of trust in local representatives of institutions (Zhang et al., 2019). In China, research suggests that traditionally people are oriented toward more passive forms of coping with stressors (such as illness) by adjusting the self to better fit the environment, or to restore a kind of imbalance between the person/body and the environment (Cheng et al., 2010; Unschuld, 2018). Thus, people in contemporary China may be oriented toward seeking secondary control when it comes to their health, and in particular toward vicarious control-for instance, they may wish to place their trust in physicians. By contrast, we expect people in the United States (particularly from higher SES backgrounds) to have more of a primary control-seeking orientation toward the health domain.
People in the United States may be especially likely to view themselves as "consumers" of healthcare services, and expect that their needs for autonomy and full information will be honored when they consult with healthcare experts. For example, Alden et al. (2015) found that among U.S. (but not Japanese) participants, independence values were related to the desire for shared decision-making in medical situations. Surprisingly, cultural psychological research on trust has generally not assessed people's level of trust specifically in the healthcare domain (Liu et al., 2018; Zhang et al., 2019). But given the broader patterns of trust described above, it is reasonable to assume that in China, people may have relative trust in the national healthcare system overall, but less trust in local representatives of that system (healthcare workers); whereas in the United States, this relationship may take the opposite form. We now consider more applied research on developments in doctor-patient relationships and healthcare system trust in these two countries, applying the theoretical constructs of culturally patterned scapegoating, control-seeking, and trust to illuminate these developments. --- AGGRESSION AGAINST HEALTHCARE WORKERS IN CHINA Prior to the COVID-19 pandemic, levels of aggression and violence against healthcare professionals in China had in recent years nearly reached the state of a public emergency. These acts have a clear negative impact on the mental well-being of professionals in China. In a sample of nearly 2,500 medical providers from the Fujian and Henan Provinces, 50% reported at least one incident of patient-inflicted violence over the previous year, and experience of violence was a significant negative predictor of quality of life even controlling for other relevant factors (Wu et al., 2014). Indeed, many medical professionals in China now report regretting their choice of career, leading some to anticipate an impending crisis in the health services.
Explanations for this phenomenon in China typically focus on social structural and economic causes. The troubled transition to the commodification of medical services in China since 1980 has led to widespread issues of mismatched expectations and insufficient funds and insurance for healthcare on the part of the public (Hesketh et al., 2012; The Lancet, 2014). From the side of medical providers, overwork and underpayment combine with a problematic incentive structure to generate over-prescription and a lack of face-time with patients (He, 2014). While such explanations and corresponding intervention recommendations are clearly important, we propose that it is also crucial to understand the psychological mechanism(s) underlying the rise in violence against medical professionals. Two assumptions from the preceding section may explain the cultural pathway to scapegoating of these professionals in the Chinese context. First, people in China are motivated to seek secondary, and particularly vicarious, forms of control in the healthcare context; and second, people in China have relatively high trust in central institutions but relatively low trust in local institutional representatives. This combination of factors suggests that, in the face of medical uncertainty or frustration, Chinese individuals will be relatively likely to aggress against the healthcare workers in whom they had hoped to place their trust, but who appear to have failed them. Beyond testing this empirical account, it is also important to understand if these same factors persist under the recent conditions of the COVID-19 pandemic. --- DISTRUST OF THE HEALTHCARE SYSTEM IN THE UNITED STATES Attitudes toward healthcare on the part of the public have also become increasingly negative in the United States in recent decades.
This shift has happened less on the terrain of attitudes toward and aggression against individual healthcare workers, and more on the level of institutional trust toward the healthcare system, which has declined in the United States over the past half-century (Wolfensberger and Wrigley, 2019). For example, a variety of studies have documented variation in healthcare system trust as an important determinant of use of medical care and health-relevant outcomes in the United States (Shea et al., 2008). It is important to acknowledge that at least some data suggest these general declines in institutional trust are independent of people's interpersonal trust in their own physicians (Hall, 2005). A number of sociological explanations have been proposed for this decline in healthcare system trust. Prominent among these is the general commercialization and privatization of healthcare in the United States, which prompts individuals to suspect the healthcare system and "Big Pharma" of exploiting people's health problems for profit (Wolfensberger and Wrigley, 2019). Healthcare issues have also become heavily politicized in the United States in recent years, with global trends toward political polarization finding one lightning rod in debates around the Affordable Care Act (Béland et al., 2016). The issue of public trust in the healthcare system, professionals, and epidemiologists clearly played a role in the U.S. national response to the COVID-19 pandemic. To be specific, high public levels of distrust in medical professionals, which could be strategically stoked by the Trump Administration, almost certainly contributed to this nation's relatively costly and ineffective public health response (Lo and Shi, 2020). As in the case of the rise of aggression against healthcare workers in China, we believe it is important to understand patterns in healthcare system (dis-)trust in the United States from a psychological vantage. 
The cultural pathway to scapegoating of the healthcare system in the United States may be explained by our assumptions about control-seeking and the cultural psychology of trust. Many people in the United States may find their motives for primary control-seeking frustrated in the health domain, particularly in light of rising costs of medical care, lack of insurance for many residents, and the current seriousness of the COVID-19 outbreak (Shi and Stevens, 2010;Burton et al., 2020). But given that U.S. residents typically show a combination of low governmental/institutional and high interpersonal trust, they would likely respond to these threats not primarily by aggressing against their local healthcare providers, but rather with increasing distrust of the healthcare system. This novel account has not yet been tested due to a lack of attention to healthcare trust in the cultural psychology literature. In sum, our framework makes the following predictions: Hypothesis 1: People in China (vs. the United States) will have a greater tendency to aggress against specific healthcare workers in situations of medical uncertainty; whereas people in the United States (vs. China) will show greater tendencies to distrust the healthcare system as a whole. Hypothesis 2: These culture-level differences in scapegoating mechanisms will be partially mediated by different patterns of control-seeking, such that primary control-seeking will partially explain U.S. individuals' greater health system distrust, and secondary control-seeking will partially explain Chinese individuals' greater aggression against doctors. --- PRIOR RESEARCH SUPPORTING THE FRAMEWORK IN CHINA Some prior evidence supports the first half of our framework, namely, that threats to control in the medical context are associated with greater aggression against doctors among Chinese participants. Yang et al. 
(under review) demonstrated that Chinese people tend to blame doctors for the outcomes of uncertain medical scenarios to a greater extent when they dispositionally lack control. An additional study examined whether a situational threat to control would make participants more likely to blame doctors. Yang et al. (under review) asked participants to read scenarios about a patient's experience in the hospital. They manipulated whether the disease course was chaotic (and thus control-threatening) or not, and whether the patient's condition improves or worsens at the end of the narrative. They predicted that participants would attribute more responsibility to doctors when the patient's condition turned worse and the disease course was chaotic; i.e., doctor blaming would serve the psychological need to make sense of uncontrollable suffering by scapegoating a focal human agent. Importantly, this study recruited participants from both China and the United States. Consistent with the current model, among Chinese participants, there was a strong interaction effect such that, when a patient's condition worsened in a scenario, attribution of blame to doctors was especially high when the disease course was chaotic. While a similar effect was observed among U.S. participants, it was much less pronounced, and overall U.S. participants tended to attribute more responsibility to doctors when the hypothetical course of a patient's illness was positive (a main effect not observed in Chinese participants). These suggestive prior studies leave questions unanswered when it comes to our theoretical framework. Specifically, they failed to distinguish between motives for primary and secondary control, they did not assess healthcare system distrust, and, most important in the present context, they were conducted prior to the COVID-19 pandemic, and so did not examine these important processes in light of this historical event.
To address these issues, we conducted two surveys comparing Chinese and U.S. samples. Study 1 was conducted prior to the COVID-19 pandemic, and represented an exploratory first attempt to test Hypothesis 1 of our framework, as well as the suitability of different measures of our variables for testing the model. After the pandemic began, we carried out Study 2 as a confirmatory test of Hypotheses 1 and 2. We did not have a strong a priori rationale to expect that the experience of COVID-19 would change the processes specified by our theoretical account; if anything, we expected the strong threat to control posed by the pandemic to exacerbate these culturally specific processes. --- STUDY 1 Method Participants first responded to a series of measures localized to the healthcare context, including health-specific LOC (Wallston et al., 1978), health system distrust (Shea et al., 2008), and fatalism in personal health (Shen et al., 2009).[2] Participants next responded to general measures of perceived control, specifically the personal mastery and perceived constraint subscales developed by Michinov (2005). Finally, participants responded to a series of vignettes that described uncertainty-inducing healthcare experiences (e.g., waiting for days in a hospital for an operation, being prescribed an expensive medication, and being sent home with a different diagnosis the day before a scheduled surgery). They were asked about their level of frustration, and their desire to aggress against the healthcare provider in each scenario. --- Participants To assess culturally shaped responses to healthcare, Study 1 administered measures to Chinese and U.S. participants. In both the U.S. and China, data were collected from online participant recruitment platforms (Amazon Mechanical Turk and Zhubajie, respectively).
Data collection initially resulted in a total of 692 responses (363 U.S., 329 Chinese), but the elimination of participants who failed to correctly respond to attention checks resulted in final samples of 317 American and 329 Chinese respondents. Participants were compensated with $1.50 in the U.S. and 10 RMB in China for their time and effort. Though the samples are roughly comparable in terms of being drawn from online participant populations, there were demographic differences in terms of age [M U.S. = 35.72, SD U.S. = 11.73; M China = 31.46, SD China = 7.47; t(644) = 5.53, p < 0.001] and gender (for U.S., 59% male and 40% female; for China, 41% male and 59% female; χ²(2) = 23.74, p < 0.001).[3] --- Materials When possible, existing and validated translations of measures were used for the Chinese participants. When this was not possible, a back translation process was utilized, in which a native Chinese speaker not involved with the research process translated into English the items that had been translated by the researchers, and any discrepancies with respect to the original English-language items were resolved. --- Healthcare-Specific Control Measures Participants first completed measures assessing perceptions of control and control-seeking tendencies specifically in the context of healthcare and personal health. The first of these was the health-specific LOC measure (Form A; Wallston et al., 1978), to which participants responded on a 6-point scale (higher scores indicating greater agreement with a target statement). [2] … associated with aggression against doctors. However, because this is a culture-specific effect independent of our broader theoretical, cross-cultural model, we did not include this measure in Study 2, and do not focus on the results from this measure in our reporting of Study 1.
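The demographic comparisons above rest on two standard tests: a two-sample t-test for the age difference and a chi-square test of independence for the gender split. A minimal sketch follows, using simulated age samples matched to the reported means and SDs (the raw data are not reproduced here) and gender counts reconstructed approximately from the reported percentages; note that the sketch collapses gender to two categories (df = 1), whereas the paper reports χ²(2):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated age samples matched to the reported summary statistics (illustrative only)
us_age = rng.normal(35.72, 11.73, size=317)
cn_age = rng.normal(31.46, 7.47, size=329)
t_stat, p_age = stats.ttest_ind(us_age, cn_age)  # Student's t, df = 317 + 329 - 2 = 644

# Approximate gender counts reconstructed from the reported percentages
#                   male  female
counts = np.array([[187, 127],    # U.S.
                   [135, 194]])   # China
chi2, p_gender, dof, expected = stats.chi2_contingency(counts)

print(f"t({len(us_age) + len(cn_age) - 2}) = {t_stat:.2f}, p = {p_age:.4g}")
print(f"chi2({dof}) = {chi2:.2f}, p = {p_gender:.4g}")
```

With samples of this size, both tests easily reach significance, matching the pattern reported in the text.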
This 18-item measure breaks into 3 subscales. Internal Health LOC (HLOC; α = 0.65) consists of items such as "I am in control of my health." Powerful Others HLOC (α = 0.59) consists of items such as "Health professionals control my health." Chance HLOC (α = 0.66) consists of items such as "Most things that affect my health happen to me by accident." Participants also completed a measure of health-specific fatalism, the "Predetermination" subscale from the Shen et al. (2009) measure, to which participants responded on a 5-point scale (higher scores indicating greater agreement with a target statement). This 10-item scale (α = 0.88) consists of items such as "My health is determined by fate." --- Global Control Measures Participants also completed Michinov's (2005) personal mastery and perceived constraint subscales. --- Outcome Measures Participants also completed measures of our primary theorized outcomes of interest (note that this is an initial cross-sectional and exploratory investigation). The first was Health System Distrust, assessed with the scale developed by Shea et al. (2008), to which participants responded on a 5-point scale (higher scores indicating greater agreement with a target statement, and thus greater distrust of the health system). [3] While gender was not a focus of this investigation, we conducted additional analyses in which we controlled for gender in order to rule out the possibility that the uneven gender representation could be driving nation-level differences. Controlling for gender did not affect any of the nation-level differences reported below. Further, the only variables that displayed main effects for gender were Internal Health LOC [t(642) = 3.55, p < 0.001, d = 0.79] and frustration at the healthcare scenarios [t(642) = -3.26, p < 0.001, d = 0.96], such that males scored higher than females on Internal Health LOC and females reported more frustration than males.
This 9-item measure (α = 0.80) consists of items such as "The Health Care System lies to make money." The second outcome measure was aggression against doctors. This measure was validated in prior research in China (Yang et al., under review). Participants responded to 3 vignettes that described uncertainty-inducing healthcare experiences (e.g., waiting for days in a hospital for an operation, being prescribed an expensive medication). For each scenario, participants responded to 2 items. The first indexed frustration with the scenario and the healthcare provider: "To what extent are you frustrated with the doctor's behavior?" (1 = no frustration at all; 5 = a lot of frustration). The second indexed the primary theorized outcome of aggression against doctors: "To what extent do you have the urge to hit the doctor?" (1 = have no intention at all; 5 = a very strong intention). We created composite indices by averaging responses to each item type across the 3 scenarios (for frustration, α = 0.57; for aggression against doctors, α = 0.75). --- Results --- Culture Mean-Level Differences The current study was conducted in an exploratory fashion. Nevertheless, we hypothesized that there would be certain mean-level differences between the two cultural groups. Specifically, we expected that U.S. participants would score relatively higher on measures of primary control-seeking and Chinese participants would score relatively higher on measures of secondary control-seeking. We also expected that whereas Chinese participants would score relatively higher on aggression against doctors, U.S. participants would score relatively higher on health system distrust. All descriptive statistics are presented in Table 1. --- Primary Control-Seeking We had one health-specific measure (Internal HLOC) and one global measure (Personal Mastery) of primary control-seeking. As expected, U.S. participants scored higher on Internal HLOC, t(644) = 5.65, p < 0.001, d = 0.45.
However, contrary to expectations, Chinese participants scored higher on Personal Mastery, t(644) = -2.33, p = 0.02, d = 0.19.

--- Secondary Control-Seeking

We had three health-specific measures (Powerful Others and Chance HLOC; Fatalism) and one global measure (Perceived Constraint) of secondary control-seeking. As expected, Chinese participants scored higher on Powerful Others HLOC, t(644) = -11.01, p < 0.001, d = 0.87, Fatalism, t(644) = -3.74, p < 0.001, d = 0.30, and Perceived Constraint, t(644) = -5.38, p < 0.001, d = 0.42. However, contrary to expectations, there was no observed culture difference on Chance HLOC, t(644) = 0.71, p = 0.48.

--- Outcome Measures

As expected, U.S. participants scored higher overall in health system distrust, t(644) = 8.86, p < 0.001, d = 0.70, while Chinese participants scored higher in aggression against doctors, t(644) = -7.41, p < 0.001, d = 0.58. Interestingly, participants from the two cultures did not differ in their expressed level of frustration at the medical uncertainty scenarios, t(644) = -1.21, p = 0.23.

--- Patterns of Association

This exploratory study had two primary purposes. The first was to test our expectations concerning culture mean-level differences. The second was to examine patterns of association among the variables, in order to determine which operationalizations of primary and secondary control-seeking might be most effective to use in a subsequent confirmatory study testing our multiple mediator path model. To reiterate, our guiding model suggests that relative tendencies toward health system distrust in the United States should be driven by primary control-seeking, whereas relative tendencies toward aggression against doctors in China should be driven by secondary control-seeking.
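Comparisons like those above (independent-samples t-tests with Cohen's d as the pooled-SD effect size) can be sketched as follows. The group means, SDs, and sample sizes here are invented for illustration, chosen only so that the degrees of freedom match the reported t(644):

```python
import numpy as np
from scipy import stats

def cohens_d(a, b):
    """Standardized mean difference using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * np.var(a, ddof=1) + (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2)
    return (np.mean(a) - np.mean(b)) / np.sqrt(pooled_var)

rng = np.random.default_rng(0)
us = rng.normal(3.8, 1.0, 300)  # hypothetical Internal HLOC scores (U.S.)
cn = rng.normal(3.3, 1.0, 346)  # hypothetical Internal HLOC scores (China)

t, p = stats.ttest_ind(us, cn)  # df = 300 + 346 - 2 = 644
print(f"t(644) = {t:.2f}, p = {p:.3g}, d = {cohens_d(us, cn):.2f}")
```

Note that d is independent of sample size, whereas t (and hence p) grows with n; this is why the paper reports both.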
Within-country correlations are presented in Table 1; however, we examined associations across the entire dataset in order to determine which variables would be most important to include in a subsequent confirmatory study (Table 2). We eliminated Chance HLOC from our deliberations, because there was no culture mean-level difference on this variable, suggesting it would be unlikely to be a useful indicator for our model in a subsequent study. We noted that our measure of health system distrust was related to our measures of primary control-seeking. However, in both cases these relationships were negative, rather than positive as our theoretical model would suggest. In other words, participants who scored higher in Internal HLOC or Personal Mastery reported less health system distrust. We noted that our measure of aggression against doctors was not related to our primary control-seeking measures, and instead was consistently positively related to our secondary control-seeking measures, as our model would suggest. However, we additionally noted that among the secondary control-seeking measures, Powerful Others HLOC was best able to discriminate between the outcome measures, because it was negatively related to health system distrust, but positively related to aggression against doctors. On the other hand, the other secondary control-seeking measures (Fatalism and Perceived Constraint) seemed to be associated with general negativity toward healthcare (i.e., higher health system distrust and aggression against doctors).

--- Discussion

Our initial exploratory study yielded several preliminary conclusions that helped shape our subsequent confirmatory study designed to test our multiple mediator path model. First, mean-level comparisons generally supported our expectations for cross-cultural differences: U.S. participants scored higher on health system distrust, whereas Chinese participants scored higher on aggression against doctors.
In addition, Chinese participants scored higher on our secondary control-seeking measures. Given that participants from the two countries scored similarly in the level of frustration they expressed at the medical uncertainty scenarios, this provides initial support for our guiding framework, which suggests that people in China and the United States have relative tendencies to resolve tensions in the healthcare domain using different culturally afforded defenses. Given cross-cultural differences in these important applied phenomena (aggression against doctors and healthcare system distrust), a critical task is to determine the cultural pathways that afford these divergent responses across national settings. Examination of the mean-level differences and overall patterns of association yielded additional useful information. We were particularly interested in distinguishing between our different measures of primary and secondary control-seeking to prepare our subsequent confirmatory study. When it came to primary control-seeking, the measures did not perform in expected ways for two apparent reasons. First, contrary to expectations and the prior literature, Chinese (relative to U.S.) participants scored higher on the Personal Mastery measure. Second, these measures were associated with health system distrust, but in a negative direction. In hindsight, these patterns were not surprising given the important distinction between presence of control and desire for control, which has been noted in prior literature, but to which we paid insufficient attention in designing Study 1 (Burger and Cooper, 1979). The Study 1 results suggest that if a patient already has their needs for primary control satisfied, they do not need to invoke culturally afforded defenses in connection with the healthcare system. And indeed, our theoretical account only suggests that desire for, rather than presence of, primary control should be associated with scapegoating defenses. 
This indicated to us that we should select a new measure of primary control-seeking for Study 2, specifically a measure that indicated not presence of but desire for primary control in the medical domain. If we could operationalize participants' desire for a primary control that they currently lack, this might be positively associated with use of health system distrust as a defense mechanism, at least among U.S. participants. When it came to secondary control-seeking, the measure of Powerful Others HLOC seemed most promising for a subsequent study. Of the secondary control-seeking measures, this was the only one to show a culture mean-level difference with a large effect size (in the expected direction). In addition, this measure distinguished well between our two outcomes, in that it was negatively associated with health system distrust, but positively associated with aggression against doctors. This suggests that specifically seeking secondary control in the health domain by yielding power to others may be associated with the culturally afforded defense of violence against healthcare workers, at least among Chinese participants. These findings fit with our theoretical account given the importance of vicarious control as a specific form of secondary control-seeking (Rothbaum et al., 1982) in the medical domain (e.g., Goodyear-Smith and Buetow, 2001).

--- STUDY 2

We had two primary goals for Study 2. First, we planned to replicate and extend our exploratory Study 1 findings in light of our guiding hypotheses. Hypothesis 1 was supported in Study 1, but we wanted to confirm this pattern in a second sample. In addition, we wanted to test Hypothesis 2 using a confirmatory approach and applying multiple-mediator path models. We planned to use the information from Study 1 regarding which operationalizations were most effective and consistent with our theoretical framework to update the materials for Study 2.
Specifically, we observed that Powerful Others HLOC was a promising operationalization of vicarious control as a relevant form of secondary control-seeking in the healthcare context; and we also felt the need to develop a new measure of primary control-seeking that would indicate desire for, rather than presence of, primary personal control in the healthcare context. Second, the COVID-19 pandemic occurred before we were able to follow up on our Study 1 results. Due to the obvious importance of the pandemic for people's experiences of medical uncertainty, we additionally modified the Study 1 materials to include vignettes pertaining to the COVID-19 situation. Given the historic moment, an additional goal of Study 2 became determining whether the Study 1 findings, and our original hypothesized relationships, would be observable during the pandemic. We had no strong reason to believe a priori that the basic pattern of results would change, and therefore retained our original hypotheses.

--- Method

Data were collected at the beginning of May, 2020. Similar to the procedure of Study 1, participants first responded to a series of vignettes that described uncertainty-inducing healthcare experiences. However, Study 2 also included scenarios related to the COVID-19 pandemic. Following the healthcare vignettes, participants responded to measures of primary control-seeking (shared decision-making), health system distrust, secondary control-seeking, and positive cognitive reframing. For descriptive statistics and zero-order correlations for all the variables reported below, see Table 3. (In Table 3, results for the Chinese sample are reported below the diagonal and those for the U.S. sample above the diagonal; *indicates significant mean-level differences between countries at p < 0.001; for correlations, **p < 0.01.) Finally, because the threat of COVID-19 may have been experienced by participants as more distal or proximal depending on whether they lived in an area that was heavily impacted by the virus, a single item was included to assess whether participants had lived or stayed in a region impacted by COVID-19.

--- Participants

To assess culturally shaped responses to healthcare in the era of the COVID-19 pandemic, Study 2 administered several measures to Chinese and American participants. In both the U.S. and China, data were collected from online participant recruitment platforms (Amazon Mechanical Turk and Zhubajie, respectively). Post-hoc power analyses of the primary dependent variables from Study 1 suggest that that study's sample size provided sufficient power (power = 1.00). Based on the Cohen's ds from Study 1 for health system distrust (0.70) and aggression toward doctors (0.58), a priori power analyses suggest that a sample size between 68 and 96 is necessary to achieve power of 0.80 for detecting these differences again. However, in order to examine the mediational pathways underlying nation-level differences, we sought to maximize the sample size within the constraints of available resources. The gender distribution differed between the two samples (χ²(1) = 13.61, p < 0.001). (Because the gender distribution between the U.S. and China was not even, we again examined whether all the nation-level differences reported below persist when controlling for gender. Controlling for gender did not eliminate any of the effects reported below. Further, main effects of gender were observed only for general aggression toward doctors [t(914) = 2.70, p = 0.007, d = 0.78] and COVID aggression toward doctors [t(914) = 2.52, p = 0.012, d = 1.05], such that males reported greater desires to aggress in both sets of scenarios. Because gender differences were not a focal point of this research, we do not report further analyses of gender.) In addition, an examination of the item probing whether participants lived in an area impacted by
the virus revealed that significantly more American (compared to Chinese) participants reported living in a virus-affected area (for U.S., 62.8% lived in unaffected areas and 37.2% lived in affected areas; for China, 85.7% lived in unaffected areas and 14.3% lived in affected areas; χ²(1) = 64.30, p < 0.001).

--- Materials

--- Healthcare Uncertainty Vignettes

Participants first reported their frustration and desire to aggress in response to the series of scenarios reported in Study 1. Then, participants read and responded to scenarios that related to potential healthcare situations involving the COVID-19 pandemic. For example, in one vignette, participants read about the following scenario: "Imagine your grandfather has had a high fever for 5 days at this time. After going to the hospital for
a blood test and CT test, he was highly suspected of having new coronavirus pneumonia. Since there were no vacant ward beds in the hospital, the doctor prescribed medicine and let the patient go home for isolation." Similar to the general vignettes, participants reported their predicted frustration and desire to aggress against the doctor based on each scenario. Responses were provided on 5-point Likert scales.

--- Primary Control-Seeking

To assess participants' desire for personal control in their healthcare, participants responded to a modified version of the Desirability for Control scale (Gebhardt and Brosschot, 2002).
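The a priori power analysis described under Participants (a total sample between 68 and 96 to detect ds of 0.58 to 0.70 at power 0.80) can be approximated with the standard normal-approximation formula for a two-group comparison; this is a hedged sketch, not the authors' actual calculation, and exact t-based tools (e.g., G*Power) give slightly larger values:

```python
from scipy.stats import norm

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sided independent-samples t-test.
    Normal approximation; exact t-based values run slightly higher."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_power = norm.ppf(power)
    return 2 * (z_alpha + z_power) ** 2 / d ** 2

for d in (0.70, 0.58):
    n = n_per_group(d)
    print(f"d = {d}: ~{n:.0f} per group, ~{2 * n:.0f} total")
```

The totals (roughly 64 to 93 under the normal approximation) land just under the 68 to 96 range reported in the paper, consistent with the small upward correction an exact t-based calculation adds.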
This scale includes three subscales, all of which were modified to reflect decision-making in healthcare contexts, including desire for leadership (e.g., "I enjoy participating in medical decisions, because I want to have as much of a say in treatment options as possible"), willingness to relinquish control (reverse coded, e.g., "I wish I could push the medical decisions off on my doctor"), and desire for determining one's own life (e.g., "I enjoy making my own decisions"; across all subscales, α = 0.82).

--- Secondary Control-Seeking

The full health-specific locus of control scale (Wallston et al., 1978) was again included, but based on the exploratory Study 1 results and our theoretical framework the subscale measuring trust in powerful others (vicarious control-seeking) was the focus for the present study (α = 0.77).

--- Health System Distrust

Health system distrust was assessed with the same measure used in Study 1 (α = 0.89).

--- Positive Cognitive Reframing

As an exploratory measure, a measure of positive cognitive reframing was included to assess the degree to which individuals positively reinterpret their healthcare experience. We included this measure because recent evidence suggests that people in China have shown more positive forms of coping with the COVID-19 pandemic compared to U.S. residents (Ji et al., 2020). Accordingly, while we did not formulate new hypotheses for Study 2, we wanted to explore the possibility that Chinese residents might show more positive coping in the COVID-19 context, rather than aggression against doctors. The 4-item measure was taken from the COPE inventory (α = 0.84; Carver et al., 1989).

--- Results

--- Invariance Analyses of Primary Outcomes

In order to determine the degree of factor structure similarity between the U.S. and China for the primary dependent variables, invariance analyses of health system distrust and aggression toward doctors (both the general and COVID-specific scenarios) were conducted.
A confirmatory factor analysis (CFA) model was specified in which health system distrust and aggression toward doctors were treated as latent factors with their respective items serving as the indicators. By adding constraints to these models, we can determine whether the items are capturing the same underlying construct (configural invariance, established through a multigroup CFA), whether participants in both nations are similarly responding to the items (metric invariance, established by constraining factor loadings to be equivalent between groups), and whether the means are comparable (scalar invariance, established by constraining intercepts to be equivalent between groups). These analyses were conducted in the R software package and utilized weighted least squares estimators and robust fit indices. The acceptability of different levels of invariance can be determined by examining changes in fit statistics. While chi-square changes can be overly sensitive, CFI and Gamma-hat can be examined for changes to determine whether each consecutive model should be rejected, with changes of <0.01 indicating that the more constrained model is acceptable (Milfont and Fischer, 2010). Fit statistics for these CFAs are presented in Table 4. In the case of both sets of models (one examining health system distrust and aggression in the general healthcare scenarios, and the other examining health system distrust and aggression in the COVID-19-specific scenarios), the configural and metric models had acceptable fit and all factor loadings were significant (p < 0.001). Further, the constraints added to the metric models did not lead to a substantial decrease in model fit (i.e., changes in CFI and Gamma-hat <0.01). In both cases, the implementation of additional constraints in the scalar models resulted in worse model fit (though still acceptable with more liberal fit cutoffs; e.g., RMSEA < 0.10).
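The decision rule described here (reject the more constrained model if CFI or Gamma-hat worsens by 0.01 or more; Milfont and Fischer, 2010) is simple to apply programmatically. A sketch with illustrative fit values, not the study's actual statistics:

```python
def invariance_acceptable(fit_less, fit_more, criterion=0.01):
    """Accept the more constrained model if neither CFI nor Gamma-hat
    worsens by `criterion` or more relative to the less constrained model."""
    return all(fit_less[index] - fit_more[index] < criterion
               for index in ("cfi", "gamma_hat"))

# Illustrative fit values only: configural -> metric -> scalar
configural = {"cfi": 0.950, "gamma_hat": 0.960}
metric = {"cfi": 0.945, "gamma_hat": 0.955}   # small drop: metric invariance holds
scalar = {"cfi": 0.920, "gamma_hat": 0.930}   # large drop: scalar invariance fails

print(invariance_acceptable(configural, metric))  # True
print(invariance_acceptable(metric, scalar))      # False
```

This mirrors the paper's outcome: metric constraints were acceptable, while scalar constraints degraded fit beyond the criterion.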
This is not surprising as scalar invariance is a high psychometric standard for between-country comparisons (e.g., Davidov et al., 2018). Yet, the lack of support for scalar invariance demands a degree of caution in interpreting the findings reported below. We think that the present research addresses an applied issue of significance and, given the relative absence of violence against doctors as a social issue in the U.S., these differences are unlikely to be entirely the result of response biases or other sources of error.

--- General Healthcare Uncertainty Scenarios

To assess the hypothesized mediation model, the data were fit to a structural equation model in which personal and external control were specified as mediators of national differences in the tendency to blame the health system vs. aggress against medical providers. In addition, given the likely relationship between the mediating (primary and secondary control-seeking) and outcome (health system distrust and aggression against doctors) variables, these pairs of factors were allowed to covary. Because the purpose of these analyses is to understand the relationships between the underlying latent factors, rather than relationships at the item level, we applied a parceling method to increase model parsimony and improve the participant-to-parameter-estimate ratio (Little et al., 2002). Thus, three parcels were calculated for shared decision-making, external locus of control, and health system distrust by randomly sorting and averaging items into three indicators per latent factor. The resultant model, along with factor loadings and standardized path weight estimates, is depicted in Figure 1. Though the chi-square fit index was significant (χ²(56) = 458.95, p < 0.001), other fit indices that are less impacted by sample size suggest that the model's fit is within acceptable limits (CFI = 0.922; SRMR = 0.058; RMSEA = 0.088 [90% CI: 0.081, 0.096]).
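The parceling step (randomly sorting a scale's items into three groups and averaging each group to form three indicators) can be sketched as follows; the item matrix here is random stand-in data, not the study's responses:

```python
import numpy as np

def make_parcels(item_scores, n_parcels=3, seed=0):
    """Randomly assign items (columns) to parcels and average within each parcel.
    Returns an (n_respondents, n_parcels) indicator matrix for the latent factor."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(item_scores.shape[1])
    groups = np.array_split(order, n_parcels)
    return np.column_stack([item_scores[:, g].mean(axis=1) for g in groups])

# Stand-in data: 100 respondents x 9 items on a 5-point scale
rng = np.random.default_rng(1)
items = rng.integers(1, 6, size=(100, 9))
parcels = make_parcels(items)
print(parcels.shape)  # (100, 3)
```

Each latent factor then has 3 parcel indicators instead of all raw items, which is the parsimony gain the paper cites from Little et al. (2002).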
In addition to having acceptable fit, all of the latent factor loadings and path weights in the model depicted in Figure 1 were significant (p < 0.001). Generally, this model offers support for the present predictions, as Chinese participants (relative to Americans) reported greater levels of secondary control-seeking and aggression against doctors. In contrast, Americans (relative to Chinese participants) reported greater primary control-seeking and health system distrust. Further, the relationships between primary control-seeking and health system distrust on the one hand, and secondary control-seeking and aggression against doctors on the other hand, were both positive and significant. To more precisely test whether national differences in responses to medical uncertainty were mediated by the proposed constructs, a second model was examined in which cross-mediating pathway loadings (i.e., paths between primary control-seeking and aggression against doctors, and between secondary control-seeking and health system distrust) were eliminated (see Figure 2). This model configuration allows for the examination of indirect effects through the hypothesized mediators by themselves. The mediation model also displayed acceptable, though less ideal, fit (χ²(58) = 530.44, p < 0.001; CFI = 0.908; SRMR = 0.078; RMSEA = 0.094 [90% CI: 0.087, 0.102]). To examine the hypothesized mediating role of control preferences and to calculate bootstrap-based confidence intervals, the model was run with a bootstrapping approach utilizing 5,000 resamples. See Table 5 for indirect effects and confidence intervals. As indicated by the results reported in Table 5, the effects of country on both outcomes were partially mediated by the hypothesized constructs.
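The bootstrap logic behind those confidence intervals can be illustrated in miniature. This sketch reduces the latent-variable model to simple observed-variable regressions on simulated data (all values invented): resample participants with replacement, re-estimate the a path (country to mediator) and b path (mediator to outcome, controlling for country), and take percentiles of the a*b products:

```python
import numpy as np

def indirect_effect(x, m, y):
    """a*b indirect effect from two least-squares fits: m ~ x and y ~ m + x."""
    a = np.polyfit(x, m, 1)[0]                   # slope of mediator on predictor
    design = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(design, y, rcond=None)[0][1]  # slope of y on m, controlling x
    return a * b

rng = np.random.default_rng(0)
n = 400
country = rng.integers(0, 2, n).astype(float)    # 0 = U.S., 1 = China (simulated)
control = 0.5 * country + rng.normal(0, 1, n)    # simulated secondary control-seeking
aggress = 0.6 * control + 0.2 * country + rng.normal(0, 1, n)

boot = []
for _ in range(5000):
    idx = rng.integers(0, n, n)                  # resample participants with replacement
    boot.append(indirect_effect(country[idx], control[idx], aggress[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")  # zero outside the CI indicates mediation
```

The paper's Table 5 applies the same interval-excludes-zero criterion, but with latent factors and parceled indicators rather than single observed scores.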
In other words, while both of the direct relationships between country and health system distrust (p < 0.001) and aggression against doctors (p = 0.011) were significant, part of the national differences in these outcomes was accounted for by the proposed control-seeking preferences.

--- COVID-19 Specific Healthcare Scenarios

Importantly for the present purposes, we also sought to determine whether the models could be replicated when considering the COVID-19 scenarios. Specifically, we examined the same models as above, but substituted the COVID-19-specific scenarios for the general uncertainty scenarios. The exact same analysis sequence was conducted, with a full path model being tested first (Figure 3), followed by a test that focused on the hypothesized mediating pathways (Figure 4). Analyses of the full model suggest an adequate fit to the data (χ²(56) = 485.97, p < 0.001; CFI = 0.916; SRMR = 0.058; RMSEA = 0.091 [90% CI: 0.084, 0.099]), with all factor loadings and predicted paths yielding significant relationships (ps < 0.001). Again, to explore the predicted mediational pathways more directly, we analyzed models in which the cross-mediating pathways were eliminated (Figure 4). This model again yielded adequate fit indices (χ²(58) = 547.86, p < 0.001; CFI = 0.905; SRMR = 0.077; RMSEA = 0.096 [90% CI: 0.089, 0.103]). To assess the indirect relation between country and outcomes, through the hypothesized control-seeking mechanisms, we assessed those indirect effects with a bootstrapping method utilizing 5,000 resamples. The results of these analyses are depicted in Table 6. Once again, the confidence intervals for both indirect effects did not contain zero, suggesting that the national differences in health system distrust and violence against doctors (this time in COVID-19 scenarios) were partially mediated by the proposed control-seeking tendencies.

--- COVID-Affected vs.
Unaffected Areas and Positive Cognitive Reframing

To explore whether individuals' control-seeking and scapegoating tendencies were moderated by living in COVID-affected (vs. unaffected) areas, between-subjects ANOVAs were conducted in which the effects of nation, COVID-affected (vs. unaffected) area, and the interaction of these two factors were assessed on all measures included in the study. These analyses yielded non-significant main effects of COVID-affected area and country by area interactions (all ps > 0.05) for frustration and aggression in the general healthcare scenarios, frustration in the COVID-specific scenarios, primary control-seeking, and health system distrust. There were, however, significant effects of living in a COVID-affected area for secondary control-seeking, positive cognitive reframing, and aggression toward doctors, though the latter main effect was qualified by a country by COVID-affected area interaction. See Table 7 for the full statistical results of ANOVAs that yielded significant results. The analyses depicted in Table 7 suggest that, in addition to national differences in most of the variables in Study 2 (see Table 3), living in an area affected by COVID-19 was related to greater secondary control-seeking, positive cognitive reframing, and aggression toward doctors in the scenarios specific to COVID-19. This latter finding was qualified by a country by COVID-19-affected area interaction, such that the tendency for Chinese participants to want to aggress toward doctors (relative to American participants) was more extreme among Chinese living in COVID-19-affected areas (see Figure 5). In terms of our exploratory variable of positive cognitive reframing, it was in fact the case that people in China engaged in this form of coping to a relatively greater extent.
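The nation by COVID-affected-area ANOVAs described above follow the standard balanced two-way between-subjects decomposition. A self-contained sketch of the interaction test, using simulated aggression scores whose pattern merely mimics the reported Figure 5 interaction (all values invented):

```python
import numpy as np
from scipy import stats

def interaction_test_2x2(cells):
    """A x B interaction F-test for a balanced 2x2 between-subjects design.
    cells[i][j] is a 1-D array of scores (level i of factor A, level j of B)."""
    n = len(cells[0][0])
    grand = np.concatenate([cells[i][j] for i in (0, 1) for j in (0, 1)]).mean()
    a_means = [np.concatenate([cells[i][0], cells[i][1]]).mean() for i in (0, 1)]
    b_means = [np.concatenate([cells[0][j], cells[1][j]]).mean() for j in (0, 1)]
    cell_means = [[cells[i][j].mean() for j in (0, 1)] for i in (0, 1)]
    ss_a = 2 * n * sum((m - grand) ** 2 for m in a_means)
    ss_b = 2 * n * sum((m - grand) ** 2 for m in b_means)
    ss_cells = n * sum((cell_means[i][j] - grand) ** 2 for i in (0, 1) for j in (0, 1))
    ss_inter = ss_cells - ss_a - ss_b
    ss_within = sum(((cells[i][j] - cell_means[i][j]) ** 2).sum()
                    for i in (0, 1) for j in (0, 1))
    df_err = 4 * (n - 1)
    f = ss_inter / (ss_within / df_err)  # df_interaction = 1
    return f, stats.f.sf(f, 1, df_err)

# Simulated scores: factor A = country, factor B = area (unaffected, affected)
rng = np.random.default_rng(0)
cells = [
    [rng.normal(2.0, 0.5, 50), rng.normal(2.0, 0.5, 50)],  # U.S.
    [rng.normal(2.5, 0.5, 50), rng.normal(3.3, 0.5, 50)],  # China: larger area effect
]
f, p = interaction_test_2x2(cells)
print(f"interaction F(1, {4 * (50 - 1)}) = {f:.1f}, p = {p:.2g}")
```

A significant interaction here, as in Table 7, reflects that the area effect differs by country rather than any single main effect.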
However, examination of mean levels of aggression against doctors in China between Studies 1 and 2 suggests that use of this coping

--- Discussion

A high-powered confirmatory study, Study 2 added several important pieces of information to the initial exploratory results obtained in Study 1. First, cross-cultural mean differences and cross-sectional patterns of association offered confirmatory support for our theoretical model. Replicating Study 1, Chinese (compared to U.S.) participants showed a relatively greater tendency to aggress against doctors in hypothetical scenarios involving both general medical uncertainty and COVID-19. Also replicating Study 1, U.S. (compared to Chinese) participants showed higher levels of distrust in the health system. Importantly, extending Study 1's initial findings, we also found support for our multiple mediation model, such that the cross-cultural differences in outcomes were partly mediated by variation in control-seeking. U.S. (compared to Chinese) participants seek primary control to a greater extent, which is related to their relative tendency toward health system distrust; and Chinese (compared to U.S.) participants seek secondary control to a greater extent, which is related to their relative tendency toward aggression against doctors. Importantly, this model replicated (for aggression against doctors) in the context of both general medical uncertainty and COVID-19-specific scenarios. Relevant to the current necessity for understanding how people respond to global pandemics, there were interesting patterns related to COVID-19 in the data, some of which appeared culturally generalizable, and one that was culture-specific. In particular, in both countries, reporting living in an area that was severely impacted by COVID-19 was associated with secondary control strategies, in particular more secondary control-seeking in the medical context (Powerful Others HLOC) as well as positive cognitive reframing.
Finally, and attesting to the importance of our scapegoating conceptualization, we found that the cross-cultural difference in tendencies to aggress against doctors (in the COVID-19 scenarios) was moderated by living in a COVID-impacted environment, such that, among Chinese participants, greater tendencies to aggress were observed among participants living in more impacted areas.

--- GENERAL DISCUSSION

Distrust and discord between patients, physicians, and the healthcare system are a major and growing international problem. The present paper applies a novel explanation for this phenomenon drawing on a conceptualization of cultural pathways to scapegoating in the face of medical uncertainty. It draws on prior work addressing the specific issue of violence against doctors in China from a scapegoating perspective (Yang et al., under review) to propose and test a theory of how Chinese and U.S. cultures afford different viable scapegoating targets in the health domain, in order to satisfy varying needs for primary and secondary control. This work therefore importantly extends our understanding of the psychology of control and trust to a prominent applied context, one that has more relevance than ever before in light of the massive health-related uncertainty caused by the COVID-19 pandemic. From one vantage point, our findings speak to processes that generalize across cultures, even if they manifest in slightly different ways (Kitayama et al., 2010). People living in both China and the United States tend to scapegoat certain viable targets when encountering medical uncertainty for themselves or their relatives. It is significant that our confirmatory Study 2, conducted under conditions of a global pandemic, yielded essentially similar support for these general tendencies as was observed in Study 1 (pre-pandemic), suggesting a degree of both cross-cultural and historical stability.
On the other hand, we observe consistent cultural variation in the specific manifestation of scapegoating tendencies in the face of medical uncertainty, as well as the processes driving these tendencies. Replicating prior research on scapegoating (Yang et al., under review) as well as the cultural psychology of trust (Zhang et al., 2019), people in China (vs. the United States) had a greater tendency to aggress against local healthcare workers in situations of medical uncertainty. By contrast, people in the United States (vs. China) showed relative tendencies to distrust the healthcare system as a whole. Further, these culture-level differences in scapegoating mechanisms were partially mediated by different patterns of control-seeking. The observed cultural differences in primary and secondary control-seeking are consistent with previous findings. Historic conditions favorable to individualism have given rise to strong motives for primary personal control in the United States, but people in China and other Asian cultures have historically favored patterns of acceptance and adjustment to the status quo (Kay and Sullivan, 2013). At the same time, the state of illness itself forces upon the patient a strong sense of uncertainty and lack of control. The COVID-19 pandemic in particular has posed a strong threat to people's sense of control in many settings around the world; but just as socio-political, public health, and economic responses to the crisis have varied as a function of cultural context, so too will the psychological defenses people employ against the threat to control posed by this tidal wave of medical uncertainty. --- Limitations Given that this research stemmed from prior applied work on the phenomenon of violence against doctors in China (Yang et al., under review), and additionally sought to examine a second important applied phenomenon-healthcare system distrust in the COVID-19 context-we approached study design from a more applied perspective. 
In other words, we prioritized operationalizing our theoretical constructs in ways that were highly germane to the context of healthcare and the doctor-patient relationship, and we avoided including additional, more abstract measures in order to prevent participant fatigue. This was particularly the case for our confirmatory Study 2 design. These decisions came at a cost to the theoretical clarity of our data. For example, although we used a scapegoating framework to develop our hypotheses, we did not directly measure attributions of blame in the current studies, an important component of scapegoating that we have in fact measured in earlier studies of aggression against doctors (Yang et al., under review). And although there are more direct measures of primary and secondary control available (e.g., Heckhausen et al., 1998), we elected instead to use measures specifically intended for the way these processes manifest in the healthcare domain, e.g., in terms of vicarious control-seeking in the doctor-patient relationship. Ultimately, these decisions limited our ability to definitively test our theoretical framework in this applied context. Nevertheless, given that the patterns of data support our hypotheses, and that we developed these hypotheses from an underlying framework, the findings are at least consistent with a theory of cultural pathways to scapegoating. Some researchers might also consider the fact that we selected measures for inclusion in our confirmatory Study 2 based partly on their performance in our exploratory Study 1 to be another limitation of the present research. From this perspective, it could be argued that we selected the measures that were most likely to support our theoretical account, while ignoring relevant measures that might have cast doubt on the framework.
While we concede that some researchers may view our approach in this light, we feel that this represents a confusion between exploratory data analysis and what are referred to as "questionable research practices" (Jebb et al., 2017). Because we have openly acknowledged that Study 1 was conducted in an exploratory spirit, any conclusions from that study need to be interpreted with due caution. However, the aim of exploratory data analysis is often to develop theory and methods for future confirmatory study (Jebb et al., 2017), which is exactly the approach we adopted here. We did not select measures of primary and secondary control-seeking for Study 2 simply because they "performed" in Study 1, but also because the observed patterns were consistent with prior research and our theoretical account. For instance, in hindsight, the choice to operationalize primary and secondary control in Study 1 using measures of presence rather than desire for control was a poor design choice given our theoretical framework. Accordingly, we selected different measures for inclusion in Study 2, and these data provided confirmatory evidence for our account. Nevertheless, it is important for future research to attempt to further replicate the pattern of results seen in these studies, which remain applied and somewhat preliminary in nature. Beyond the outcome variables, our studies also attest to the ongoing need for further examination of the relationship between need for and presence of primary and secondary control. Ideally, future work would investigate these phenomena from a more purely theory-driven perspective; as stated, the applied nature of our work in the healthcare context limited our ability for theory refinement. --- Practical Implications The concept of "uncertainty in illness" (Mishel, 1988) explains the patient's treatment of disease-related stimuli.
Patients often (1) do not know the precise symptoms of the disease; (2) do not understand the generally complicated methods of treatment and care; (3) lack information related to the diagnosis and severity of the disease; and (4) recognize that the course and prognosis of the disease cannot be predicted with certainty (Mishel, 1988; Maikranz et al., 2007). The COVID-19 pandemic has exacerbated these processes of uncertainty in illness for many people, given the highly contagious nature of the disease, its disproportionate impact on certain vulnerable individuals, and a general lack of certainty about the disease among health professionals, particularly in the early days of the pandemic (Rettie and Daniels, 2020). Within this general context of uncertainty in illness, it is important to consider the nature of the doctor-patient relationship. The patient is at a disadvantage when it comes to information and resources (Goodyear-Smith and Buetow, 2001). Being ill results in a sense of uncontrollability focused on possible future threats, dangers, or other upcoming, potentially harmful events (Beisecker, 1990). According to our framework and the present pattern of results, Chinese individuals are motivated to adopt secondary control strategies to compensate for the lack of personal control attendant on the experience of illness. Perhaps unsurprisingly, because Chinese individuals wish to place their faith in powerful others (healthcare workers) to control and resolve their illness experience, they resolve continued frustrations and uncertainties by blaming, and even aggressing against, these local representatives of the healthcare system. In comparison, U.S. residents seem motivated to maintain a sense of primary control despite the inherent uncertainties of the illness experience.
However, in this cultural context of trust, aggression against doctors is not an afforded response; rather, those seeking greater primary control blame the broader healthcare system for their negative illness experiences. This attributional style may allow these individuals to maintain the perception that they can locally control their health (e.g., through lifestyle choices or asserting agency in the doctor-patient relationship), at the same time that they trace their health problems to broader systemic factors. While the current research has focused on investigating problematic tendencies (i.e., scapegoating motivations) within the two cultural settings, this comparative research also highlights the fact that national leaders and healthcare professionals stand to learn from each other by recognizing divergent cultural strengths. For instance, the Chinese government has continued political support for its healthcare reform from 2009 until now, enabling conditions to achieve national universal health coverage (Tao et al., 2020). The health insurance system has been reformed and different kinds of medical insurance have combined to promote health equity (Meng et al., 2015). It is possible that these recent efforts on the part of the Chinese government contribute to laypeople's relative trust in the healthcare system as a whole. Given the calamity posed by COVID-19 in the United States, and the role that was likely played by distrust in the healthcare system, it is important to recognize the potentially pernicious consequences of this distrust. At the same time, in the United States people seem to maintain a general respect for the healthcare professions, and tend to respect and trust their individual doctors even if they devalue the healthcare system as a whole (Hall, 2005). 
Given the ongoing dilemma of violence against doctors in China, social leaders and public health professionals might look to the structure of doctor-patient relationships in the United States for insight into how to restore a sense of trust between individual patients and their local providers. Generally speaking, our data underscore the importance of considering unique cultural pathways to trust and scapegoating in the context of medical uncertainty, especially when it comes to the important questions of what local practitioners and state/federal policymakers can do to improve trust and decrease scapegoating. For instance, in the United States, relative levels of trust in and aggression against local practitioners are not the most pressing issue; instead, trust in the healthcare system as a whole needs to be addressed. This suggests the importance of policy, regulation, transparency, and clear communication regarding issues such as insurance, pharmaceuticals, and vaccines at the broader federal level in the United States. The opposite pattern may prevail in China, which suggests that local healthcare workers may be well-advised to pursue individual-level solutions to establish and maintain patient trust (see Wolfensberger and Wrigley, 2019). In both cultures, however, our data also point to the importance of meeting patient needs for control in this context, in whatever manner those needs may be culturally shaped. --- DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. --- ETHICS STATEMENT The studies involving human participants were reviewed and approved by the University of Arizona IRB. The patients/participants provided their written informed consent to participate in this study. --- AUTHOR CONTRIBUTIONS QY and DS designed the studies and facilitated data collection. IY performed primary data analysis. JW facilitated data collection and completion of the studies.
QY, DS, and IY drafted the manuscript. All authors approved the revisions and the final version of the manuscript. --- Conflict of Interest: The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
--- ABSTRACT For years, violence against doctors and healthcare workers has been a growing social issue in China. In a recent series of studies, we provided evidence for a motivated scapegoating account of this violence. Specifically, individuals who feel that the course of their (or their family member's) illness is a threat to their sense of control are more likely to express motivation to aggress against healthcare providers. Drawing on existential theory, we propose that blaming and aggressing against a single individual represents a culturally afforded scapegoating mechanism in China. However, in an era of healthcare crisis (i.e., the global COVID-19 pandemic), it is essential to understand cultural variation in scapegoating in the context of healthcare. We therefore undertook two cross-cultural studies examining how people in the United States and China use different scapegoating responses to re-assert a sense of control during medical uncertainty. One study was conducted prior to the pandemic and allowed us to make an initial validating and exploratory investigation of the constructs of interest. The second study, conducted during the pandemic, was confirmatory and investigated mediation path models. Across the two studies, consistent evidence emerged that, both in response to COVID-related and non-COVID-related illness scenarios, Chinese (relative to U.S.) individuals are more likely to respond by aggressing against an individual doctor, while U.S. (relative to Chinese) individuals are more likely to respond by scapegoating the medical industry/system. Further, Study 2 suggests these culture effects are mediated by differential patterns of primary and secondary control-seeking.
Introduction Adolescents face many sexual and reproductive health problems worldwide, including unplanned pregnancy, sexually transmitted infections, and human immunodeficiency virus (HIV) infections (1). Adolescents account for 42% of new HIV infections globally, and four in five young people with HIV live in sub-Saharan Africa (2). Lesotho has the second-highest HIV prevalence in the world, at 22.7%, and one of the highest HIV incidence rates among adolescent girls and young women, at 0.33% (3). The maternal mortality rate in Lesotho, 544/100,000 live births, is the second highest among Southern African Development Community countries (4). In Lesotho, adolescent girls and young women frequently report limited knowledge of sexual and reproductive health issues and engage in risky sexual behaviors. They are at increased risk of early sexual debut (14 years), unprotected sex, and multiple sexual partners, putting them at risk of acquiring sexually transmitted infections, including HIV, and of unintended pregnancy (5). Providing sexual and reproductive health education to young women is challenging in this rural, mountainous country (6,7), where health professionals face constraints related to time and resources. New tools to deliver health education are needed. In Lesotho, 94% of people aged 18-29 years use smartphones, and 3G data coverage is available in almost 90% of the country (8,9). The high penetration of mobile technologies provides an opportunity to explore the use of new tools delivered on mobile phones as an alternative to the traditional face-to-face provision of health education (10). Embodied conversational agents (ECAs) are computer-based animated characters designed to simulate face-to-face human interactions.
They are an effective medium for educating patients with limited health or computer literacy, as the human-computer interface relies only minimally on text comprehension and prioritizes conversation, making it more accessible to patients with limited literacy skills (11,12). Non-verbal conversational behaviors, such as hand gestures that convey specific information through pointing, shape, or motion, are channels for conveying semantic content that enhances message comprehension (11). An ECA called Gabby, designed to deliver sexual and reproductive health information to African American women of reproductive age in the United States, demonstrated significant improvement in addressing reproductive health risks (see Figure 1) (13,14). This paper describes the process used to adapt Gabby for use in Lesotho, and provides qualitative data regarding the success of the adaptation, collected from potential end users and Ministry of Health (MOH) leadership. --- Methods --- Conceptual model The PEN-3 cultural utility model, which provides valuable guidelines for ensuring a culture-specific intervention by identifying and organizing a community's culture in the planning processes, guided the adaptations (15-17). The model includes the cultural identity domain (person, extended family, and neighborhood), the relationship and expectation domain (perceptions, enablers, and nurturers), and the cultural empowerment domain (positive, existential, and negative) (18-20). Adaptations were also guided by the heuristic framework for cultural adaptations of Barrera et al. (21), which includes information gathering, preliminary adaptation design, preliminary adaptation tests, adaptation refinement, and cultural adaptation trial. --- Information gathering for adaptation To understand the adaptations needed, the Northeastern University team leader visited Lesotho in January 2020.
Meetings were held with MOH managers for adolescent health, HIV, family planning, and nursing to elicit recommendations with regard to potential adaptations. Meetings were also conducted with district nurses and with young women seeking healthcare services. The meetings elicited details with regard to how Basotho use their hands, facial gestures, and body language in conversation, and ideas regarding the use of the Sesotho language and cultural references, such as idioms, that could be incorporated into the new system. The persona of the new character was explored to identify characteristics that would lead to greater trust in the health information being delivered. Interviews highlighted the importance of promoting engagement with the new system. Images of Basotho women were used to create the character. Decisions were made regarding the character's appearance (e.g., hair and clothing), behavior (speech pattern), nationality, sex, age, occupation, and name. The MOH officials were asked to recommend topics that they believed to be critical for improving sexual and reproductive health for adolescents and young women. --- FIGURE 1 Gabby was developed in the United States to deliver health education that women could access on desktop computers (Nkabane-Nkholongo et al., 10.3389/fdgth.2023.1224429). The topics agreed upon for inclusion in the system were family planning, HIV and AIDS, tuberculosis, healthy eating, and folic acid supplementation. Boston University and the Lesotho team then used the Lesotho national guidelines on these topics to prepare the dialogs to be delivered by Nthabi. Technical adaptations were required to deploy the prototype Gabby interface (designed to be displayed on a computer screen) so that it could be displayed on smartphone screens. The new Nthabi system interface was designed to display only the face and response options of the character. The development team sourced Lesotho mobile phones to gain information regarding the specifications and penetration of devices.
To increase the accessibility and use of the system, a decision was made to ensure that the app could be completely downloaded onto the user's mobile phone, thereby enabling use even outside of WiFi-enabled environments. Data pertaining to usage and the content discussed would be downloaded when the user was next in a WiFi-enabled environment. --- Recruitment and enrollment of participants The participants were recruited to use Nthabi in accordance with predetermined eligibility criteria: aged 18-28, owned a smartphone, spoke English, and lived in the Leribe or Berea districts of Lesotho. A purposive sampling technique was employed to recruit participants when they accessed the adolescent and maternal and child services at district hospitals. The Nthabi app was downloaded onto the participants' mobile phones after they provided informed consent. The participants who were unable to download the app were provided with an internet-accessible tablet device. All participants were asked to use the system daily for 4 weeks. --- Focus groups of participants using Nthabi Young women who used the Nthabi system were contacted to arrange for their participation in focus groups to elicit their perceptions of the system. Four focus groups were conducted between July and August 2022. The groups were facilitated by the first author using an interview guide with open-ended questions designed to explore the cultural and clinical adaptation, ease of use, problems encountered, willingness to continue use, and possible future use. Focus group participants were provided a stipend of 50 Maloti (about US $3). --- Key informant interviews with MOH leaders Purposive sampling was employed to recruit MOH program managers and adolescent health nurses to participate in key informant interviews. They were asked to review the content of a video, shared on WhatsApp, of the key parts of the Nthabi interactions over 3-5 days.
In June and July 2022, interviews to elicit their perceptions were conducted using an interview guide with open-ended questions to explore cultural and clinical adaptation and perceptions of using the system to deliver health education in the country. These participants did not receive a stipend. --- Data analysis Data analysis was headed by the Sefako Makgatho University team. All interviews and focus groups lasted approximately 1 h and were audio recorded. The study team members transcribed all audio recordings semi-verbatim into Microsoft Word and checked them for accuracy; extraneous sounds, remarks, and repetitions were omitted in the transcription. In addition, words were added, as appropriate, for clarity. Transcripts were not returned to participants for correction. Before analysis, all identifying data were removed, and Sesotho words were translated into English. All qualitative data were imported into QSR International's NVivo v12 software for coding and analysis. We conducted a thematic analysis using a combined inductive and deductive approach to coding, starting with broad codes from the interview guide and allowing room for new codes to emerge. Given the heterogeneity of respondent demographics, we coded all interviews instead of stopping when saturation was reached within particular thematic areas. A coding tree was produced that contained emergent categories of barriers and facilitators, and the data were re-coded. The core major and minor themes were determined through iterative inputs from the authors on the resultant thematic map (22). Demographic data were collected from the participants at enrollment and are presented as counts, frequencies, and means. All information was stored on encrypted tablets. --- Results --- Initial consultative meetings Initial consultative meetings were held with eight MOH directors or program managers, four district nurses aged 25-50 years, and nine women aged 18-28 years.
Based on these discussions, it was recommended that the Lesotho version of Gabby would be a young female nurse named "Nthabi", wearing a Lesotho nurse's uniform and using Sesotho words and idioms. Her hairstyle (braids), complexion (medium, similar to the local population), use of gestures (calm and gentle), and mannerisms (a humble professional with a sense of humor) would be relatable to young women in Lesotho (see Figure 2). English was chosen as the preferred language for Nthabi, since a Sesotho speech synthesizer was unavailable. To promote engagement, a professional Mosotho woman artist and storyteller was engaged to write 60 daily installments of a serial story, each ending with a cliff-hanger that could motivate users to use the app daily. Such serialized stories are popular in Lesotho. --- Description of participants who used Nthabi A total of 41 participants who met the eligibility criteria were recruited to use the new app. After giving consent, the participants were assisted to download the Nthabi app onto their mobile phones. Of the 41 eligible participants, 16 (39%) were able to download the app onto their phones. If the app could not be downloaded, the participants were loaned a Lenovo Android 11 OS platform tablet to use. Overall, eight of the 16 (50%) participants who were able to download the Nthabi app on their mobile phones used the app consistently. The other eight encountered challenges related to phone memory and freezing of the phone, which led them to uninstall the app. In total, 25 participants used the Lenovo tablets provided by the research team and eight used the app on their own phones (33 in total). --- Focus group discussions and key informant interviews All 33 participants who used Nthabi participated in focus groups. The mean age of the participants was 23 years, and 27 (82%) of them were single.
All had completed high school or above, and 22 (76%) were unemployed (Table 1). All 10 health leaders who participated in the key informant interviews (mean age of 37.5 years) had received tertiary education and were employed (Table 1). Table 2 summarizes the five themes and subthemes that emerged from the key informant interviews and focus group discussions. --- FIGURE 2 Lesotho version of Gabby (Nthabi) for use on mobile phones. --- Theme 1: appearance and mannerisms The participants described Nthabi as a Mosotho (the singular of Basotho) nurse who is friendly, wears her uniform neatly, and provides relevant health education to young women who do not have access to sexual and reproductive health content due to cultural and health provider barriers, such as judgmental attitudes and lack of confidentiality. The participants described Nthabi as relatable to Basotho young women. --- Dress code Both end users and MOH interview participants approved of the Nthabi character being dressed as a nurse. Wearing a nurse's uniform implies that the information shared by Nthabi is from a reliable source. I think she looks good, [I] like the fact that she is a nurse, as this gives young people an assurance that the information that she is providing is credible (Key Informant 7, 43 years). --- Complexion and skin tone Most participants agreed that Nthabi's skin tone and complexion were relatable to Basotho young women: The skin tone indeed is relatable to Basotho. Immediately you see her you definitely can say that is a Mosotho woman (Key Informant 10, 49 years). --- Hairstyle Most participants liked Nthabi's hairstyle and said that it resembled the hairstyles of young Basotho women, although there were suggestions to use other styles. I would prefer that the hair be short African hair... plaited in a way that is common in the country, it can be an essence, just a simple thing just to show that she is an African (Key Informant 8, aged 35 years).
--- Gestures The gestures she used in conversation, such as her hand movements, facial expressions, and the humility gesture, were viewed positively. End users and key informants agreed that Nthabi used her hands properly, and that her facial expressions and the humble character she portrayed in conversation were culturally appropriate and relatable to young Basotho women. I was also surprised with the way the application used gestures, this is a commendable innovation. Truly, she is relatable to Basotho young women (Key Informant 9, 47 years). --- Theme 2: acceptability of language used Both end users and key informants appreciated that Nthabi presented information in a tone that is not offensive to the Basotho culture. Yes, we can listen to Nthabi with our parents because she chooses her words well and that shows she is culturally sensitive. As Basotho girls, there are some words that cannot be used publicly, but Nthabi seems to know that as well (Focus group 3, participant 6, 24 years). Nthabi is able to provide information about everything even those that a parent or elderly person isn't comfortable talking about because they are embarrassing (Focus group 4, participant 4, 21 years). The participants also believed that they would be comfortable sharing their healthcare needs freely with Nthabi, rather than with healthcare providers, because Nthabi is non-judgmental, and they often feel judged by nurses. I found Nthabi to be non-judgemental. For instance, say I am 18 years old and I am pregnant, there are certain things I will be told [about] how much I have been dating, etc. This results [in] feeling discriminated [against] and being uncomfortable to visit the facility again... Nthabi, on the other hand, is open (Focus group 3, participant 5, 26 years). --- Theme 3: accessibility, relevance, and engagement Users applauded the Nthabi app since they could access it at times convenient to them rather than relying on having to go to the health center or the hospital.
End users said the information provided was important, and they did not feel rushed when talking to Nthabi. It was interesting to have a nurse in one's pocket, who is accessible anytime you want to reach out, rather than having to go through long queues at the health facilities to get health information (Focus group 1, participant 7, 22 years). Nthabi is able to ask whether you would like to continue talking to her or if it is enough for the day, meaning she has time for us, not rushing like Nurses (Focus group 3, participant 5, 26 years). The participants also reported that the serialized local stories motivated them to use the system every day. The story about Thabo and Mpho made [me] more excited. I will always want to know what happens next with Mpho (Focus group 3, participant 5, 26 years). --- Theme 4: relevance of health content End users and key informants reported that the five health topics covered by Nthabi are relevant in Lesotho. I learned so much about family planning. I did not know about additional methods that I learned from Nthabi (Focus group 1, participant 3, 21 years). --- Theme 5: suggested modifications End users and key informants recommended several areas of additional content, including HIV pre-exposure prophylaxis (PrEP), sexual reproduction, teen pregnancy, cervical cancer, and sexually transmitted infections. It is very important to actually mention something about cervical cancer, because we know our adolescents actually start to have their sexual debut even before they reach mature age (Key informant 8, 35 years). The majority of participants suggested that Nthabi should use the local language, Sesotho, so that information can reach the non-English-speaking population. There were also suggestions with regard to her accent. Please consider translating all this information to Sesotho, so that young people can have language options (Key informant 9, 47 years).
Nthabi's accent should also be rectified because she rolls her tongue a lot and one needs headsets in order to hear other words properly. She isn't audible enough when you play her on speakerphone (Focus group 3, participant 3, 23 years). Finally, the participants suggested that Nthabi could be adapted further for use by boys and young men. I also thought the information can be appropriate for boys, please consider an application for boys as well (Key informant 9, 47 years). --- Discussion This study found that young women and MOH key informants in the lower-middle-income country of Lesotho considered the cultural and clinical adaptation of an evidence-based embodied conversational agent system to have been successful. The clinically tailored, culturally sensitive, and trustworthy content of the Nthabi system has the potential to improve the accessibility of sexual and reproductive health information in the rural, mountainous country of Lesotho in southern Africa. Lesotho faces significant challenges in terms of its healthcare workforce capacity and the effective dissemination of health education (23). There is an urgent need to develop ways to provide trustworthy information regarding sexual and reproductive health, including family planning and condom use, to reduce sexually transmitted infections, HIV, unplanned pregnancy, and unsafe abortion in Lesotho. New technologies are now available to provide evidence-based health education to remote settings with fidelity. Conversational agent interactions have been shown to be an effective medium for delivering health education in a variety of topic areas, including to users with limited health literacy (24,25). Nthabi provides a new opportunity to deliver health education, possibly as an alternative to the traditional face-to-face provision of health education (26,27).
The study highlights the importance of adapting new technologies to represent the unique way of life, behaviors, beliefs, values, and symbols of the Basotho context. Once interactions are defined in relation to appropriate cultural cues, such as language, appearance, gestures, and humility, there is a higher probability of acceptability and usability (28). Substantial testing regarding Nthabi's physical appearance, age, and name was undertaken by the research teams (29). The study demonstrates that involving stakeholders in the adaptation process can increase the acceptability of systems such as Nthabi. Community participation allowed the system to take on characteristics of the local environment in which it was developed (15,30). The adaptations considered language, cultural appropriateness, and context in a way that is compatible with the cultural patterns, meanings, and values of young women (21). This involvement led to the incorporation of culturally persuasive features (e.g., physical characteristics, profession, use of Sesotho idioms, storytelling) and addressed issues of potential misfit among technological, human, and contextual factors. In this way, our findings align with those of other studies that utilized the PEN-3 cultural utility model to promote acceptability (31). Nthabi's persona as a nurse, as well as the incorporation of storytelling as a persuasive feature to promote engagement, was a new innovation in the Nthabi adaptation. These characteristics were not features of the American Gabby system; however, they were recommended in Lesotho as a way to provide assurance that the health education she delivers is reliable and to promote its utilization. Such engagement strategies appear to be important in enhancing user experience and encouraging long-term usage, as indicated by the majority of research findings (32).
The MOH leaders who were involved recognized the potential of this technology to provide widely scalable health education, including in population health efforts at the district or national level. Increased awareness among government officials could lead to further research, development, and facilitation of implementation assistance in the country. Focus group and interview participants expressed that expanding sexual and reproductive health education for boys should be considered in the future. In an Australian adaptation, Gabby's ability to be sensitive to different cultures and languages seemed to be more important than her physical appearance and accent (33). In Lesotho, the participants had differing views on Nthabi's accent, as some felt it needed to be modified to sound more like the local accent. Furthermore, it was recommended that she use the local language, which shows the necessity of harnessing local meanings and contextual factors in adapting evidence-based interventions (34). The participants in Lesotho emphasized the importance of physical characteristics, character, hand gestures, a Sesotho name, and the use of Sesotho words and idioms. This finding complements the study by Fendt-Newlin et al. (30), which emphasized that straightforward extrapolation of the existing evidence base is not always appropriate. Despite the high penetration of smartphones among young women in Lesotho, only eight of the 33 participants succeeded in downloading the app onto their mobile phones, primarily due to limited memory on many phones and the unavailability of the app in the Google Play Store, especially for the latest Huawei phones, such as the P50 Pro. The research team opted to offer full downloadability of the app onto mobile phones, thereby allowing its use in non-WiFi environments in order to increase the accessibility and use of the system. 
The results showed that the potential for improved accessibility is offset by the size of the app (particularly the speech synthesizer), which limits downloading of the app to phones. The app could be accessed more easily if it were hosted in the cloud and participants were able to access the Internet; however, this would limit offline accessibility. Balancing offline accessibility and wide usage of mobile phones needs to be addressed and should be an important consideration in future developments. This study is limited in that it was an exploratory qualitative study of users' perceptions. The findings should be confirmed using quantitative methods, such as a survey instrument and measures of knowledge acquisition. While the results reflect successful adaptation, it is not known whether the adapted Nthabi system will improve users' health knowledge or change attitudes and behaviors. Ultimately, systems such as Nthabi will need to be tested to measure impact on important clinical outcomes. --- Conclusion The culturally adapted Nthabi character and trustworthy, relevant content have the potential to enhance the accessibility of sexual and reproductive health information for young women in Lesotho. Furthermore, this approach has the potential to serve as an alternative to traditional face-to-face health education methods. Suggested modifications include adopting the local language and accent and adapting Nthabi for use by boys and young men. Balancing the size of the app and accessibility in non-WiFi-enabled environments is needed in future deployments. Health Alliance teams involved in this study. The authors are grateful for their participation. --- Data availability statement The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. 
--- Ethics statement Ethical clearance was obtained from the Boston University Institutional Review Board (H-40268), the Sefako Makgatho University Health Sciences Ethics Review Committee (H/343/2021), and the Lesotho Ministry of Health Research Ethics Committee (145-2021). --- Author contributions EN-N conceptualized the research, led cultural adaptation, conducted interviews, analyzed data, and wrote the first draft of the manuscript. MM supervised the adaptation, data collection, and analysis and contributed to manuscript writing. TB conceptualized the research, assisted with funding acquisition, led adaptation efforts, assisted with evaluation and analysis, and contributed to manuscript writing. CJ assisted with the adaptation and evaluation methodology and contributed to manuscript writing. BJ led the funding acquisition, assisted with the adaptation and analysis, and contributed to manuscript writing. All authors contributed to the article and approved the submitted version. --- Conflict of interest The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. --- Publisher's note All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors, and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Introduction: Young women from the low-middle-income country of Lesotho in southern Africa frequently report limited knowledge regarding sexual and reproductive health issues and engage in risky sexual behaviors. The purpose of this study is to describe the adaptation of an evidence-based conversational agent system for implementation in Lesotho and to provide qualitative data on the success of this adaptation. Methods: An embodied conversational agent system used to provide preconception health advice in the United States was clinically and culturally adapted for use in the rural country of Lesotho in southern Africa. Input from potential end users, health leaders, and district nurses guided the adaptations. Focus group discussions with young women aged 18-28 years who had used the newly adapted system, renamed "Nthabi," for 3-4 weeks, and key informant interviews with Ministry of Health leadership, were conducted to explore their views of the acceptability of the adaptation. Data were analyzed using NVivo software, and a thematic content analysis approach was employed. Results: A total of 33 women aged 18-28 years used Nthabi for 3-4 weeks; eight (24.2%) of them were able to download and use the app on their mobile phones, and 25 (75.8%) used the app on a tablet provided to them. Focus group participants (n = 33) reported that the adaptations were culturally appropriate and provided relevant clinical information. The participants emphasized that the physical characteristics, personal and non-verbal behaviors, utilization of Sesotho words and idioms, and sensitively delivered clinical content were culturally appropriate for Lesotho. The key informants from the Ministry leadership (n = 10) agreed that the adaptation was successful, and that the system holds great potential to improve the delivery of health education in Lesotho. 
Both groups suggested modifications, such as using the local language and adapting Nthabi for use by boys and young men. Conclusions: Clinically tailored, culturally sensitive, and trustworthy content provided by Nthabi has the potential to improve accessibility of sexual and reproductive health information to young women in the low-middle-income country of Lesotho.
Introduction Eastern Europe and Central Asia (EECA) remains the only region globally where HIV incidence and mortality continue to increase [1]. There are numerous economic, political, programmatic, and social reasons for the ongoing volatile epidemic in the region, including suboptimal HIV prevention and treatment in prisons [2]. Although diverse, EECA countries have commonalities in drug policy and addiction treatment practices rooted in shared post-Soviet value systems that prioritize collective needs over individual autonomy [3]. Harsh policies criminalizing drug use [4] result in the concentration of people with or at high risk for HIV in prisons [2,5], where high-risk behavior such as drug injection often continues [6,7]. Nationally representative surveys of prison populations show that HIV prevalence is 12- [8], 51- [9], and 37-times [10] greater in prisons than in the community in Ukraine, Kyrgyzstan, and Azerbaijan, respectively [2]. As drug use continues to remain largely criminalized, implementation and scale-up in prisons of evidence-based strategies for HIV prevention and treatment will be crucial tools for curbing the epidemic [2,7,11]. Nearly all prisoners, including those with substance use disorders, return to their communities, mostly in urban settings. Thus, transitioning prisoners contribute greatly to urban health and health service delivery. As incarcerated individuals (including recidivists) contemplate release, many experience a heightened sense of optimism about "renewing" their life, known as penal optimism [12], which extends to feeling optimistic about recovery and community reintegration [13]. Penal optimism is considered a psychological phenomenon of planning fallacy, when individuals have excessive optimism bias towards the future, exaggerating their abilities, underestimating their challenges, and avoiding difficult realities [14,15]. 
Similarly, during imprisonment, prisoners often minimize challenges that may impede their plans to make positive changes in their lives after release [16]. Qualitative research in the USA and elsewhere suggests that former prisoners often find themselves "in a world of chaos" characterized by competing demands on their limited resources, with basic needs like food and shelter taking priority [17]. For people who inject drugs (PWID), allowing one's health needs to fall off this list of key priorities has grave implications for transitional care, particularly for addiction treatment and continued recovery [18,19]. Discontinuity of care [20][21][22], relapse to drug use, overdose, and resultant death are common immediately after release [23]. Risk of death from opioid overdose increases more than sevenfold in the first 2 weeks after release [24], and 1 in 200 prisoners with a history of injecting opioids dies from overdose in the month following release [23,25,26]. Indeed, the only evidence-based therapy for opioid use disorder in prisoners is pharmacological treatment with methadone or buprenorphine initiated within prison and continued post-release. Aside from overdose risk, post-incarceration relapse increases exposure to HIV infection [27]. Studies [28] show that rates of engagement in HIV care and receipt of ART decline more than twofold after release [21,22,28]. A comprehensive review that included EECA countries reported that ART adherence drops after release, especially for women, due to relapse to substance use, unstable housing and unemployment, reduced access to health care, and inability to access ART in the community [29]. To develop effective transitional programs from prison to community care, there is a need to better prioritize health tasks in prison that may shape planning for continued healthcare after release. 
Despite evidence that individuals in prison underestimate the difficulty of meeting post-release health challenges, there are no data on whether individuals incorporate their health status into how they prioritize their post-release needs. This gap persists despite overwhelming evidence that treatment for substance use disorders initiated within prison and continued after release is associated with the best possible health, psychological, legal, and social integration outcomes. In this study, we analyze a cross-sectional survey of prisoners within 6 months of release who met criteria for substance use disorders. Across three EECA countries, we examine the relative importance of health-related tasks compared to tasks of everyday life, explore the correlates of prioritizing health-related tasks, and consider whether there are meaningful differences in findings by country. --- Methods --- Study Design The design for the parent study has been described previously in each of the three countries: Ukraine [8], Kyrgyzstan [9], and Azerbaijan [10]. Briefly, using a random sampling scheme [30], prisoners scheduled for release within 6 months were recruited to participate from all prisons, excluding juvenile and hospital prisons. Both first-time and recidivist prisoners were included. The target sample size was based on estimates of the number of inmates in non-specialized facilities in each country meeting eligibility criteria, proportional to the number of prisoners within 6 months of release in each facility [8][9][10]. 
Following informed consent, respondents answered survey questions (~45 min) using computer-assisted structured interviews (CASI) that included demographic characteristics; criminal justice history; social circumstances prior to incarceration; pre-incarceration substance use; self-perceived health status; sexual and drug risk behaviors prior to incarceration; validated measures of alcohol use disorder, depression, and social support; and reentry challenges and likelihood of recidivism. All instruments were translated and back-translated into Russian and into Ukrainian, Kyrgyz, and Azerbaijani, respectively [31]. All participants were then tested for HIV (followed by confirmatory HIV and CD4 testing), hepatitis C virus (HCV), hepatitis B virus (HBV), and syphilis, counseled, and referred for treatment. Among the combined sample of 1280 prisoners, 577 (45%) self-reported previous drug use aside from cannabis or alcohol, which was defined as having a substance use disorder, and were included in the current analysis; drug use in EECA is often under-reported unless it is non-recreational and regular. Drug use is the major risk factor for HIV in EECA and also for increased morbidity, mortality, and social harm after release in prisoner populations [27,32]. --- Study Settings Azerbaijan is an upper middle-income country [33] of 9.8 million people with ~40,000 prisoners. HIV prevalence is 37-fold higher in prisoners (3.7%) than in the community (0.1%). The predominant religion is Islam, and there are an estimated 71,283 PWID [34] with an HIV prevalence of 19 to 24% [35]. Coverage of OAT using methadone in Azerbaijan was 0.2%, or 155 PWID, in 2014, falling far below the WHO-recommended coverage of at least 20% [34]. Kyrgyzstan is a lower income country [33] of 6.1 million people with 10,195 prisoners [36]. About half of the population is Muslim. 
HIV prevalence in prisoners (10.3%) is 51-fold higher than in the community (0.2%) [9], and OAT is provided in prisons. The OAT program in Kyrgyzstan has about 1200 clients [37], with coverage at 18%. Ukraine is a lower middle-income country [38] and the most secular of the three included countries. Ukraine has a population of about 42 million and a prisoner population of about 60,000 [39]. HIV prevalence among prisoners (19.4%) is 12 times higher than in the community (1.63%) [8]. There are an estimated 340,000 PWID [40], mostly injecting opioids, with a high prevalence of substance use disorders among incarcerated individuals [8]. While in all three countries opioid agonist therapy (OAT) was introduced as part of HIV prevention and harm reduction efforts [41,42], the addiction treatment community has been slow to adopt it as evidence-based drug treatment. Azerbaijan has a small pilot OAT program in the community, and Ukraine offers OAT using buprenorphine and methadone only in the community with relatively low coverage, while Kyrgyzstan offers OAT both in the community and in prisons. --- Data Analysis Basic characteristics of study participants include: demographic characteristics; pre-incarceration income; recidivism; history of drug use and OAT; HIV, HCV, HBV, and syphilis test results; medical screening variables; and a set of validated screening instruments for alcohol use disorders using the AUDIT [43], depression using the CES-D 10 [44], health-related quality of life (HRQoL) using the MOS short form 36 (SF-36) [45], and social support [46]. Criteria for an alcohol use disorder were met if AUDIT scores were 8 or higher for males and 4 or higher for females [47], and for depression if the CES-D score was 10 or greater [48]. The composite social support scale is an integer-valued measure ranging from 1 (no support) to 5 (high support) [46]. 
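The screening cut-offs above can be expressed as simple classification rules. A minimal sketch in Python (function and parameter names are ours, for illustration only; the study applied these thresholds within its statistical software):

```python
def has_alcohol_use_disorder(audit_score: int, sex: str) -> bool:
    """AUDIT cut-off: 8 or higher for males, 4 or higher for females [47]."""
    threshold = 8 if sex == "male" else 4
    return audit_score >= threshold


def has_depression(cesd10_score: int) -> bool:
    """CES-D 10 cut-off: a score of 10 or greater screens positive [48]."""
    return cesd10_score >= 10
```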
To measure the general health status of the study participants, we constructed a multi-comorbidity index (MCI) as a weighted sum of the following conditions: asthma, skin ulcers, abscesses, arthritis/joint pain, gonorrhea and other STIs (except syphilis), seizures, high blood pressure, liver problems, pneumonia, cancer, heart disease, and tuberculosis. Weights were based on whether the condition was acute (1) or chronic (3) and symptomatic (1) versus asymptomatic (0). For example, an acute asymptomatic condition contributed a value of 1 to the multi-comorbidity index, and a chronic symptomatic condition added a value of 4 to the total. The total value of the MCI ranged from 0 to 26. HIV, HCV, HBV, and syphilis infections were analyzed separately and are not included in the index. The WHO tuberculosis (TB) screening questionnaire captures the presence of TB symptoms based on self-report [49]. As recommended by the WHO for high-prevalence settings, a positive screen was defined as having a cough for at least 2 weeks or the presence of both sputum and unexplained weight loss (in the last 3 months) [49]. Because the sensitivity of this symptom survey is high, those screening positive should undergo confirmatory testing to determine the need for treatment. Its specificity, however, is low, so a positive screen is not a true indicator of TB disease. The outcomes of our analysis included: (1) assessment of each post-release task individually as very easy/easy/hard/very hard or not applicable and (2) identification of the single most important post-release task. For the first outcome, the categories "easy" and "very easy" were collapsed into "easy", and "hard" and "very hard" into "hard". The list of potentially challenging post-release tasks was compiled based on previous research in this area [19,50] and included a total of 18 items. 
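The MCI weighting and the WHO TB symptom screen described above can be sketched in Python as follows (an illustration only; the data structures are our assumptions, not the study's actual variables):

```python
def condition_weight(chronic: bool, symptomatic: bool) -> int:
    """Base weight: 3 if chronic, 1 if acute; add 1 if symptomatic."""
    return (3 if chronic else 1) + (1 if symptomatic else 0)


def multi_comorbidity_index(conditions):
    """Weighted sum over (chronic, symptomatic) pairs, one per condition present."""
    return sum(condition_weight(chronic, sympt) for chronic, sympt in conditions)


def tb_screen_positive(cough_weeks, sputum, weight_loss):
    """WHO rule: cough for >= 2 weeks, or both sputum and unexplained weight loss."""
    return cough_weeks >= 2 or (sputum and weight_loss)
```

Consistent with the worked example in the text, an acute asymptomatic condition scores 1 and a chronic symptomatic condition scores 4.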
Of these, health-related tasks included: getting access to HIV care, getting treatment for illnesses other than HIV, getting help staying off drugs, and getting OAT. Since the task of getting access to HIV care applied only to a small sub-sample, our analysis focused on the latter three potential challenges. To provide a comparison of how incarcerated individuals perceived the relative importance of their post-release tasks, we selected three "comparison" tasks of everyday life, namely: finding a job or a stable source of income, reuniting with family and/or friends, and staying out of prison following release. We used a proxy measure for whether a task was perceived as important. This proxy measure identified a task as applicable (easy or hard) versus not applicable. Correlates of identifying a health-related task as applicable were analyzed using logistic regression, and this analysis was performed separately for the three health-related tasks of interest. A parsimonious model was derived using the Bayesian lasso method [51]. This method provides a more conservative way to perform variable selection and estimation of regression coefficients than traditional stepwise methods [52]. Statistical analyses were performed in SPSS (version 22.0, Chicago, IL) and R (Foundation for Statistical Computing, Vienna, Austria). Significance of between-country differences was assessed using ANOVA and the chi-squared test for continuous and categorical variables, respectively. The R package "EBglmnet" [53] was used to implement the Bayesian lasso, and we used three-level hierarchical priors with normal/exponential/gamma distributions to perform variable selection and estimation of regression coefficients and their 95% credible intervals. --- Ethics Statement This study was approved by the Institutional Review Boards at the Yale University School of Medicine and the Institutional Review Boards in Ukraine, Azerbaijan, and Kyrgyzstan. 
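The two outcome codings described above, collapsing the four-point difficulty scale and deriving the binary "applicable" proxy, can be sketched as follows (the response labels are our illustrative assumptions about the survey wording):

```python
# Map raw difficulty responses onto the two collapsed categories.
DIFFICULTY = {"very easy": "easy", "easy": "easy",
              "hard": "hard", "very hard": "hard"}


def collapse_difficulty(response: str) -> str:
    """Collapse very easy/easy into "easy" and hard/very hard into "hard"."""
    return DIFFICULTY.get(response, "not applicable")


def is_applicable(response: str) -> bool:
    """A task counts as applicable if it was rated at all, whether easy or hard."""
    return response in DIFFICULTY
```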
Further safety assurances were provided by the Office for Human Research Protections. --- Results --- Characteristics of the Participant Population As seen in Table 1, there was substantial diversity between participants in Azerbaijan, Kyrgyzstan, and Ukraine for several characteristics: religion, education, rate of recidivism, history of OAT, physical and mental wellness scores, and results of screening for diseases such as tuberculosis and HIV. In Ukraine, almost 80% of participants completed high school or received higher education, in comparison to just over half of participants in Kyrgyzstan and just over a third in Azerbaijan. The rate of recidivism was generally high in our sample but was highest in Kyrgyzstan (80%), while in Ukraine and Azerbaijan it was lower. HRQoL for physical health was similar to the general population level, but for mental health it was markedly lower. HIV prevalence was highest in Ukraine at 23.8% (with 42% of HIV-positive participants unaware of their status) and lowest in Azerbaijan at 6.8%. The three countries were similar on other health indicators such as prevalence of depression, hepatitis B and C, and the multi-comorbidity index (Table 1). --- Participants' Perceptions of Reentry Challenges Figure 1 illustrates participants' perceptions of reentry challenges. In terms of health-related challenges, about half of participants considered that getting treatment for illnesses other than HIV and getting help staying off drugs would be hard, and about two-fifths of participants considered that getting methadone treatment would be hard. Yet a sizeable proportion of participants, between one-fifth and over one-third, considered that health-related challenges (especially initiating methadone treatment) were not applicable to them at all. This is particularly striking because all participants in our sample had a substance use disorder, with 80% having an opioid use disorder. 
In terms of competing everyday life challenges, 61% of participants thought finding a job would be hard, and 54% indicated staying out of prison would be hard. Interestingly, while many participants considered health-related tasks not applicable, very few (< 5%) participants considered everyday life challenges irrelevant (Fig. 1). Table 2 illustrates country differences in participants' perceptions of health-related reentry challenges. In Kyrgyzstan, where over two-thirds of participants reported a history of OAT involvement, almost half of participants reported that getting help staying off drugs was not applicable to them, and over two-thirds considered getting methadone treatment after release non-applicable to them (Table 2). --- Participants' Perceptions of Importance of Reentry Challenges Figure 1 illustrates participants' perceptions regarding what in their view was the most important task upon reentry. Overall, about two-thirds of participants identified finding a job or a stable source of income as the most important task, while only 0.3% of participants thought that the most important task was getting methadone treatment. The overwhelming majority of participants (all of whom had a history of drug use, mostly of opioids, and most of whom had injected drugs often just prior to incarceration) did not consider health-related tasks associated with their addiction treatment to be most important post-release. Furthermore, many participants did not consider getting help staying off drugs and getting methadone treatment applicable to them at all. 
Thus, instead of determining the correlates of stated reentry challenges as "easy" versus "hard" in our regression analyses, we assessed the correlates of stated reentry challenges as "applicable" (which could be either easy or hard) versus "not applicable." --- Correlates of Considering Reentry Challenges as Applicable Regression analyses (Table 3) demonstrated that participants in Kyrgyzstan were least likely to consider any of the health-related post-release tasks applicable. Participants in Ukraine were less likely than those in Azerbaijan to consider methadone treatment applicable, but the adjusted odds ratio (AOR) for Ukraine was not nearly as extreme as that for Kyrgyzstan (0.24 versus 0.03, respectively). Those with more education (completed high school) were less likely to consider getting help staying off drugs applicable. Conversely, those with a history of injecting drug use were more likely to consider getting help staying off drugs and getting methadone treatment applicable. Likewise, participants who reported moderate and especially heavy injection habits in the 30 days prior to incarceration were also more likely to consider getting help staying off drugs and getting methadone treatment relevant. Previous experience with OAT was a statistically significant correlate of a higher likelihood of considering getting methadone treatment applicable. Meeting screening criteria for moderate to severe depression also correlated positively with a higher likelihood of considering methadone treatment; however, the 95% credible interval for this covariate includes the null. Higher levels of comorbidity were significantly correlated with choosing health-related reentry tasks as applicable. Participants who were HIV positive and aware of their status were more likely to consider treatment of illnesses other than HIV applicable. 
While having a positive HIV status and being aware of it did not correlate with a higher likelihood of considering addiction-related reentry tasks applicable, scoring higher on the multi-comorbidity index was associated with a higher likelihood of considering all three health-related reentry tasks applicable. A number of candidate covariates were not found to be associated with considering any of the health-related post-release tasks applicable. Of those, the most notable are: age, recidivism, HRQoL, hepatitis C and B status, and positive symptomatic screening for TB (Table 3). --- Discussion Most prisoners with substance use disorders return to urban settings that are often unequipped to deal with the myriad health and social needs of individuals who have spent considerable time outside the fabric of these communities. Results from this study of soon-to-be-released prisoners with substance use disorders point to three main findings. First, prisoners overwhelmingly prioritize basic needs over all else, including health, as central to the transitional process. To improve support and preparation for release and reintegration for transitioning prisoners, a clear understanding of why prisoners prioritize everyday life challenges over health needs is essential, especially since good health is crucial to overcoming everyday life challenges. Maslow's hierarchy of needs [54], which posits that individuals prioritize basic needs (e.g., food, housing, safety) over secondary needs (e.g., healthcare and health safety), provides a useful framework for understanding these challenges. The findings here are similar to those reported in prisoners in other settings, where basic needs are prioritized over addiction treatment [18,19,21]. 
Unlike treatment for other conditions, addiction treatment with OAT results in improvements across most basic and secondary needs, including family reintegration, employment, reduced criminal activity, health-related quality of life, and other health benefits [2,55,56]. The challenges of soon-to-be-released prisoners' social reintegration [57], rather than indifference to health, may explain why, despite the high prevalence of morbidity (including HCV, HBV, TB, syphilis, and various acute and chronic conditions included in the comorbidity index) and a universal history of drug use in our sample, in all three countries study participants prioritized finding a source of income, reconnecting with family, and staying out of prison as most important. An alternate interpretation of this finding may be that tasks related to finding a job, reuniting with family, and staying out of prison are viewed by soon-to-be-released prisoners as more essential to immediate survival than those related to health [58]. Rather than focusing on health issues, transitioning prisoners may prioritize those things that are more meaningful to them, like work that is socially affirming, and reintegrating with family as activities essential to the fulfillment of social roles and obligations. Qualitative research on people in EECA countries suggests that social and relational connectedness is central to a sense of purpose in life, and conversely, feeling unneeded by others who do not value what one has to give may lead to worsening mental and physical health and even suicide [59]. 
While such socially affirming activities appear to be important to transitioning prisoners, what is concerning is that these individuals do not understand the positive role of addiction treatment (especially OAT) in family reintegration, employment, reduced criminal activity, and health-related quality of life, which is often required to meet these socially affirming goals. Interventions that explain the role of OAT and dispel myths related to its use will be crucial to realistically meeting the expectations of transitioning prisoners with substance use disorders. It is concerning that over a third of participants considered starting methadone "Not Applicable" despite most of them injecting nearly daily before incarceration. One possible explanation why these participants considered these challenges irrelevant could be that they did not consider their drug use a chronic, relapsing disease (or methadone treatment initiated in prison and continued after release as the only evidence-based addiction treatment). Despite data from multiple clinical trials in communities [60] and in prisoners documenting its efficacy, and the high relapse rate after release from prison in its absence [61][62][63], data from two Eastern European countries suggest that methadone is not perceived as an effective treatment [55,56,[64][65][66], including in prisons [13,67,68]. Russia retains a strong influence in the EECA region with its staunch ban on all OAT for treatment of opioid use disorder. Alternatively, prisoners might inaccurately believe in their willpower to remain off drugs or might plan to use heroin in the future as a more natural and healthy substance than methadone [69]. Either way, informed decision-making aids that provide culturally accurate information [70][71][72] should be considered to help prisoners with substance use disorders plan for their transition to the community. 
Second, prisoners with higher levels of medical comorbidity were more willing to prioritize their health during the transition to the community. While having HIV did not change the odds of seeking help for addiction problems, participants with poorer general health considered getting help to stay off drugs more frequently. This could indicate that individuals in poorer health recognize the need to accomplish health-related tasks like staying off drugs. Alternatively, it could indicate that these individuals perceived fewer available resources to pursue street drugs after release [73]. Third, in Kyrgyzstan, where OAT coverage is highest, a significantly smaller portion of prisoners prioritized methadone treatment compared to the other two settings, where OAT coverage is lower and unavailable in prisons. One explanation for this finding is that OAT was introduced in select EECA countries in the mid-2000s not as a treatment for addiction but for HIV prevention [74]. Thus, policy makers, providers, and even patients might perceive methadone as a means to control and reduce the HIV epidemic rather than as an evidence-based addiction treatment. In some contexts, this approach might have shaped patients' attitudes towards the treatment. Research in other global settings revealed that individuals underprivileged by judicial and social regimes may experience OAT more as a tool wielded by those who wish to control them than as a treatment meant to improve their health and wellness [75]. Another possible explanation may be that informal control of the prisons in post-Soviet settings by the prisoners themselves plays a strong role in shaping the meanings of methadone in this context. Qualitative research is urgently needed to explore how the meanings of harm reduction and OAT intersect in the daily experiences of PWID and prisoners in EECA countries. 
Besides these three main findings, we want to highlight several other interesting observations from our analyses. It was striking that recidivists did not prioritize post-release tasks any differently than participants incarcerated for the first time. This could signify the persistence of the planning fallacy [14,15] during subsequent reincarcerations, or a failure to recognize relapse to drug use as a contributor to re-incarceration [76]. Further, understanding the impact of OAT on reducing recidivism may need emphasis when assisting prisoners with opioid use disorder in setting their priorities [76]. Interestingly, more educated participants were less likely to consider getting help staying off drugs. Our qualitative research in Ukraine showed that PWID perceive OAT engagement as a sign of deteriorating health [56]. Thus, higher education may indicate participants' higher socioeconomic status underlying greater resourcefulness, but also higher drug use stigma, with methadone perceived as the last resort of those who are "really sick." We noted several interesting country differences. None of the study participants in Azerbaijan identified any of the three health-related tasks as the most important. A few (2.9%) study participants in Kyrgyzstan said that getting help staying off drugs was the most important reentry task, but despite the availability of methadone, no one identified such treatment as a priority. Only in Ukraine did a small number of participants choose getting methadone treatment and getting treatment for general health conditions as the main post-release task, although the severity of health problems was comparable in all three countries. It is possible that the perceived criminalization of drugs is more severe in Ukraine than in Kyrgyzstan and Azerbaijan. One limitation of our study was its cross-sectional design, as we only measured expectations for post-release challenges and not actual post-release behavior.
During incarceration, participants could try to resolve cognitive dissonance [76,77] regarding competing post-release challenges by defining challenges over which they felt least control, such as drug use, as not applicable. Previous research showed that PWID perceive more HIV- and drug-related stigma post-release than within prison, and intentions to make changes in drug use are stronger in prison than after release [13]. Also, participants in prison may have overvalued their own self-efficacy and thus not felt that "help" getting off drugs was necessary, as they believed they would be able to do so on their own [69]. Consequently, our participants' limited interest in health-related challenges may decrease further after release. Longitudinal research must examine how PWID's attitudes and behavior concerning addiction treatment change during the post-prison transition. It would also be desirable for future surveys after release to ask participants to choose and rank their top three reentry challenges in order to expand analysis options. Our findings have implications for policy and practice. It is concerning that despite the high exposure to methadone in Kyrgyzstan, interest in this treatment is relatively low. Multiple factors may have contributed to this finding, including the perceived ineffectiveness of this treatment. In a setting where methadone coverage is more prevalent, feelings toward methadone among other prisoners can be extraordinarily negative, leading to bullying and ostracism, as reported in Moldova [68]. In this context, prisoners in the EECA region would benefit from potentially multiple types of interventions, including those delivered by professionals and peers or through the use of informed or shared decision aids. Such strategies would focus on how abstaining from illegal drugs and initiating OAT may help released prisoners reclaim jobs, reunite with family, and avoid overdose and reincarceration.
As the prison environment also shapes participants' values and norms, custodial and clinical prison staff may benefit from similar interventions to enhance their understanding of prisoners' treatment goals [13,78]. The key implication is that, despite the availability of OAT, prisoners may forego treatment. Future studies should explore effective methods to overcome barriers by using informed decision-making aids or delivering effective motivational sessions using professionals or peers. Implementation science studies that overcome scale-up barriers are urgently needed to address patient-level factors. It is crucial to balance the evidence with prisoners' own priorities and design OAT programs in a way that integrates support in addressing other social needs, like employment support or job training. --- Conclusions Prisoners in Ukraine, Kyrgyzstan, and Azerbaijan prioritized post-release everyday challenges like finding a source of income and reconnecting with family over health-related tasks, despite their history of drug use and multiple comorbidities. Methadone was not viewed as an effective strategy for staying off drugs. Understanding and addressing the disconnect between the evidence and beliefs that deny that addiction is a disease that can be effectively treated like other chronic diseases will be crucial for scale-up. In designing programs for released prisoners, national and international organizations in EECA must consider educating prisoners and prison staff in how prioritizing addiction treatment helps accomplish other community transition goals. Funding This work was supported by grants R01 DA029910 (Altice), R01 DA033679 (Altice), and R36 DA042643 (Morozova) from NIDA.
Facing competing demands with limited resources following release from prison, people who inject drugs (PWID) may neglect health needs, with grave implications including relapse, overdose, and discontinuity of care. We examined the relative importance of health-related tasks after release compared to tasks of everyday life among a total sample of 577 drug users incarcerated in Ukraine, Azerbaijan, and Kyrgyzstan. A proxy measure of whether participants identified a task as applicable (easy or hard) versus not applicable was used to determine the importance of each task. Correlates of the importance of health-related reentry tasks were analyzed using logistic regression, with a parsimonious model derived using the Bayesian lasso method. Despite all participants having substance use disorders and a high prevalence of comorbidities, participants in all three countries prioritized finding a source of income, reconnecting with family, and staying out of prison over receiving treatment for substance use disorders, general health conditions, and initiating methadone treatment. Participants with poorer general health were more likely to prioritize treatment for substance use disorders. While prior drug injection and opioid agonist treatment (OAT) correlated with any interest in methadone in all countries, only in Ukraine did a small number of participants prioritize getting methadone as the most important post-release task. While community-based OAT is available in all three countries and prison-
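The abstract above mentions fitting a logistic regression and then deriving a parsimonious model with the Bayesian lasso. As an illustrative sketch only, not the authors' code or data, the snippet below fits a frequentist L1-penalized ("lasso") logistic regression by proximal gradient descent on synthetic data, showing how the penalty shrinks uninformative predictor coefficients toward zero to yield a parsimonious model. All variable names, predictor counts, and penalty values here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 577, 5  # n matches the pooled sample size reported in the abstract
X = rng.normal(size=(n, p))          # hypothetical standardized predictors
true_beta = np.array([1.2, -0.8, 0.0, 0.0, 0.0])  # only two informative ones
prob = 1.0 / (1.0 + np.exp(-(X @ true_beta)))
y = (rng.random(n) < prob).astype(float)          # binary outcome

lam, step = 0.05, 0.5  # L1 penalty strength and gradient step (assumed values)
beta = np.zeros(p)
for _ in range(3000):
    mu = 1.0 / (1.0 + np.exp(-(X @ beta)))  # predicted probabilities
    grad = X.T @ (mu - y) / n               # gradient of the logistic loss
    beta = beta - step * grad
    # soft-thresholding step implements the lasso shrinkage (ISTA)
    beta = np.sign(beta) * np.maximum(np.abs(beta) - step * lam, 0.0)

print(np.round(beta, 2))  # coefficients for the noise predictors end up near 0
```

A fully Bayesian lasso would instead place Laplace priors on the coefficients and sample the posterior; the frequentist L1 penalty used here is its maximum-a-posteriori analogue and conveys the same variable-selection intuition.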
INTRODUCTION Rural development is a purposeful process to improve the quality of life of the rural community and address issues related to society, the economy, and the environment (Muta'ali, 2016; Diartika & Pramono, 2021). As a tool, rural development is expected to build a prosperous, competitive rural community (Sururi, 2017; Nain, 2019; World Bank, 2021). This context requires support from multiple parties, including the government, private stakeholders, non-profit institutions, and the rural community (Badri, 2016). In rural development efforts, the government plays a vital role in providing support and facilities while ensuring that the rural development program runs in an effective and efficient manner (Adisasmita, 2016; Riskasari & Tahir, 2018; UNDP, 2021). Friedmann and Alonso (1978) stated that regional development is a strategy to utilize and combine internal factors (strengths and weaknesses) and external factors (opportunities and threats) as potentials and opportunities for improving the production of goods and services in respective regions. While internal factors include natural resources, human resources, and technology resources, external factors are opportunities and threats that often arise from interactions with other regions. Alkadri (2001) describes regional development as a harmonious relationship between natural resources, human resources, and technology resources that is nurtured by considering the environmental capacity in community empowerment. This current study is particularly significant in the field of village tourism development as it theoretically develops a model that connects important variables in rural development, including leadership, entrepreneurship, and the local community's ability to develop creative tourism products and sustain them as a business.
The external role serves as a moderating variable that is crucial in determining whether it strengthens or weakens the potential of village communities (Dewi and Ginting, 2022). Andri (2006) stated that efforts to accomplish rural development goals currently face different challenges from the past. The first challenge relates to external factors, such as international developments in the liberalization of investment flows and global trade. The next challenge is internal conditions, which include, to name a few, the transformation of economic structure, spatial migration and sectoral issues, food security, agricultural land availability, investment and capital problems, science and technology challenges, human resources, environmental issues, and others (Rosana, 2011; Imanullah et al., 2016). In spite of this, the government of Indonesia has attempted to address these challenges through regulations, such as the RPJMN 2015-2019 (National Medium-term Development Plan), which is parallel to the vision of President Joko Widodo in the third point of Nawacita, namely building Indonesia from the outskirts by strengthening the regional and rural areas within the unitary state (Budiharso, 2018). The villages concerned have tourism objects but are difficult to develop autonomously, so area-based development is needed. The potentials of the three villages are presented in Table 1. These tourism potentials are not optimally managed by the apparatus and community in the three villages. A study by Piani (2019) reported that although the Government of Banyumas Regency has formed the Coordinating Team for Development of Rural Area (TKPKP) of the Agrotourism in Kendeng Mountain, Somagede Subdistrict, using collaborative governance approaches and engaged the community, subdistrict government, and Universitas Jenderal Soedirman, there is a lack of positive results for the Tanggeran, Klinting, and Kemawi villages.
This finding shows that the original plans for the tourism potential failed to materialize, with no apparent development. Upon evaluation of the development of rural tourism in Tanggeran, Klinting, and Kemawi, it was found that there needs to be synergy and solidarity between the apparatus and the community in each of the three villages. In addition, developing rural areas takes collaborative action between the village government and support from the regional government. Olberding (2002), in Harsanto (2012), stated that collaborative practice to achieve regional economic development may be undertaken in two ways. First, every village develops its agrotourism potential by competing with other villages. As a result, agrotourism can flourish in one village but fail in others, or worse, fail in all locations due to competition. Second, villages collaborate to develop area-based agrotourism. The second way is technically able to produce multiple benefits and profits so that tourism potentials in each area can develop. Therefore, it is the ideal option for the Government of Banyumas Regency to improve agrotourism through collaborative practice in three geographically close villages, namely Tanggeran, Klinting, and Kemawi. The area-based development approach and collaboration between villages are new concepts implemented in Banyumas Regency. Nevertheless, collaborative practices face multiple challenges. Some studies found issues related to a collaborative administration model that did not support synergy and partnership between the stakeholders (Kurniasih, Setyoko, & Imron, 2017), as well as weak commitment, lack of coordination, lack of trust between stakeholders, and limited access to information (Muhammad et al., 2017). According to Huxham et al. (2000), collaborative competency determines the level of success of the collaboration. Based on these issues, this study aims to build a model of collaborative competency from actors engaged in area-based rural development.
--- LITERATURE REVIEW Rural development is a concept implemented in many regional areas in Indonesia based on Law Number 5 of 2014 on Villages, which aims to put forward villages as one of the objects of development. Andri (2006), in his study, concluded that villages need to be perceived as a potential basis of economic activities and expected this to become the new paradigm in the overarching economic development programs in Indonesia. Meanwhile, Badri (2016) stated that current development has undergone significant changes in both concept and process. Today, the concept of development is no longer limited to the agriculture sector and basic infrastructure but is instead moving towards the development of information, communication, and technology (ICT). Further, considering community participation as one of the vital elements in the development process, it is necessary for the rural government to first ignite community participation as one of the targets of rural development itself (Muslim, 2012). The concept of rural community development proposed in the present study is the collaborative competency model. The term competency was first introduced by Boyatzis (1982) as the capability of individuals embodied in their attitude parallel to work demands. Woodruffle (1992) stated that competency should not be considered an element but rather a concept to illustrate one's understanding of the relationship between the expected implementation and the desired implementation of a project based on information from the previous implementation. Meanwhile, Wibowo (2016) mentions that competency is the ability to carry out or execute a job or a task by harnessing skills and knowledge supported by the appropriate attitude required at the respective workplace.
Spencer & Spencer (1993) defined competency as the underlying characteristics of an individual that are causally related to criterion-referenced effective or superior performance in a job. The five competency characteristics according to Spencer & Spencer (1993) are Motive (something that is constantly pondered or wanted that makes someone act), Trait (characters that make individuals act or respond to something in particular manners), Self-concept (personal attitudes and values), Knowledge (information on a particular field collected by individuals), and Skill (the ability to carry out physical or mental tasks). Competency measurement involves evaluating individual or group capabilities in carrying out particular tasks or work, and how they demonstrate the knowledge, skills, and attitudes required for the job. The measurement may take the form of tests, direct observations, or interviews (Boyatzis, 1982). Further, competency development engages learning and improving the skills necessary to achieve success at work or in a career. It may be carried out through training, formal education, or experience obtained from work (McClelland, 1973). Competency is also related to human resource management, which includes a series of selection, development, and performance evaluation of employees. The implementation of competency-based management of human resources can help organizations identify aspiring employees who demonstrate a particular aptitude to achieve success in certain positions and enable the development of effective evaluation of work performance (Hart & Banbury, 1994). As the employment world gets more competitive, employees' capacities and skills become crucial factors in an organization's success. Accordingly, competency development and measurement are important to optimize organizational performance and individual careers (Parry, 1996). The term collaboration refers to a partnership between two or more individuals or groups to achieve common goals.
Collaboration requires open communication, coordination, and information and resource sharing, enabling individuals and groups to harness each other's strengths to reach shared goals. Collaboration is a crucial practice in many sectors, including business, education, and research (Bresnen, Eldman, & Newell, 2015). Some types of collaboration include internal collaboration within an organization, inter-organization collaboration, inter-region collaboration, and global collaboration. While internal collaboration engages partnerships between individuals or units within one organization, inter-organization collaboration involves a partnership between multiple organizations. Inter-region collaboration refers to a partnership between different regions within one or different countries, and global collaboration is an inter-individual or inter-group partnership between different countries to achieve shared goals (Cross & Cumming, 2004; Leenders & Gabbay, 1999). Collaboration can provide multiple benefits, such as improved creativity, better effectiveness, and enhanced capacity to address complex problems. However, collaborations are not without challenges, which may include communication barriers, poor coordination, and differing perceptions or goals that may hamper the improvement of collaboration (Huxham & Vangen, 2005; Rentsch & Klimoski, 2001). The implementation of information, communication, and technology also enables more effective collaboration. Technology tools like video conferencing platforms, online collaboration platforms, and software for project management enable individuals and groups to collaborate effectively regardless of being in different locations (Kahn, 2004). Unambiguous and open communication can help ensure that every individual in the team shares common goals and works in parallel.
In collaboration, every individual also needs to consider individual differences and uniqueness, and find a way to integrate their contribution into the whole project (Vargas-Hernandez, 2021; Beyerlein & Beyerlein, 2016; Wageman & Donnenfeld, 2017). A previous study by Ernawati (2019) specifically discusses four types of collaborative competency: Attitude (values and ethics in collaboration), Culture (competence in one's own profession), Knowledge (understanding the roles and responsibilities of others), and Skills (communication, coordination, and leadership skills). Riggio (2017) stated that collaborative competency is the individual or group's capability to work together with other people to reach shared goals. This type of competency includes the ability to communicate effectively, build positive working relationships, appreciate the differences and uniqueness of others, and handle conflict well (Hackman, 2011; Oliver, 2018). The study also demonstrated the importance of collaboration among various stakeholders, such as government, local communities, and the private sector (Alim et al., 2023). Furthermore, transformational leadership has a positive impact on creative behavior, whereas transactional and laissez-faire leadership styles have negative effects. Moreover, effective succession planning and management can enhance the positive impact of transformational leadership on creative behavior. Therefore, the study recommends that leaders in Jordanian medium and small companies adopt transformational leadership styles and implement effective succession planning to foster a creative work environment and achieve better organizational outcomes (Hamour, 2023). In a more complex and dynamic working environment, collaborative competency has never been more important to create a productive and innovative working environment. In collaboration, individuals with different backgrounds and employment experiences come together to create better and more innovative solutions.
Additionally, collaborative competency is the key solution to complicated problems that cannot be addressed single-handedly. Some crucial elements of collaborative competency according to Ribbers and Wijnhoven (2018) are as follows: 1. The capacity for effective communication. Individuals must be able to listen and talk clearly and effectively to ensure that the intended message is delivered and understood correctly by other people. 2. The capacity to build positive working relationships. Individuals must be able to build positive relationships with other people, including the ability to appreciate the differences and uniqueness of others. 3. The capacity to work as a team. Individuals must be able to work in a team and contribute their skills and knowledge to come up with better and more innovative solutions. 4. The capacity to mitigate conflict correctly. Individuals must be able to address conflict well and solve problems constructively. --- DATA AND METHODOLOGY --- Type of Research This research applied a qualitative approach which, according to Marshall and Rossman (1989:46), leans towards describing and emphasizing contexts, research backgrounds, and subjective references. To obtain reliable data, the researchers delved into the vast amount of information related to the experience, knowledge, and facts from the informants about the collaborative development programs between the villages of Klinting, Tanggeran, and Kemawi. --- Location and Research Sampling Techniques This study was conducted in Banyumas Regency, focusing on three villages as the research loci: Klinting, Tanggeran, and Kemawi. The location was selected purposively because the three villages were in the middle of developing inter-village collaborative programs for agrotourism in their area. These villages have great potential for tourism to flourish. --- Research Data Collection Methods Data collection was performed through four techniques.
First, focus group discussions with people whom we perceived capable of discussing the topics related to collaborative competency, namely the actors of inter-village collaboration in Somagede Subdistrict. Second, in-depth interviews to collect robust data through face-to-face conversations with informants, with or without an interview guide, in which the interviewer is involved in their social life for a relatively long time (Bungin, 2013:108; Creswell, 2009:11). Third, documentation, which collects data from written sources related to the focus of the research problems in the form of documents owned by the object or subject of research (Bungin, 2013:121). Lastly, observation, which requires the researchers to visit relevant sites and events in terms of space, time, place, actors, activities, things, and feelings (Patilima, 2007:60). --- Data Analysis Methods This research utilized a descriptive-qualitative analysis design. Data were built up from the results of interviews and focus group discussions for further analysis and deduction. Data processing used MaxQDA to codify narratives emerging from the interviews. The data were subjected to the steps of data flow analysis proposed by Miles & Huberman (2014), which include data reduction, data presentation, and deduction/verification. In the current study, this method was implemented through several steps. First, data and information were categorized, sorted, and simplified to compile the main problems related to the roles of stakeholders in the inter-village collaboration that were obtained from observations and non-structured interviews with the key informants. Second, the results of data categorization were subjected to triangulation using notes and documents obtained from the field. Third, these issues were grouped into concepts relevant to the research problems, and conclusions were drawn.
--- RESULTS AND DISCUSSION This section elaborates on the findings of the collaborative competency study on the actors of inter-village collaboration for rural development. Four relevant competencies are communication, giving and receiving feedback, decision-making, and conflict management. Each aspect is explained in detail below. --- Communication
Wahyuningrat & Harsanto, B. T. (2023). Collaborative Competency for Rural Area Development in Indonesia.
Santosa (2000) explains that communication, including the behavior of actors or external parties and the way they influence the interests and interpretation of others, is one of the processes leading to conflict. Effective communication between parties may occur when trust is built among them. Similarly, the pattern of communication competency in the inter-village collaborative programs revealed several issues related to communication across sectors, as illustrated in Figure 1 below. Figure 1 shows an intercorrelation pattern between issues and actors in the collaborative competency of communication. In this regard, the respondents agreed that communication is a crucial factor in the success of rural area development in Kendeng Mountain. Reflecting on the pattern of issues emerging in communication, some issues revolved around strong sectoral ego, misunderstanding or misperception, extremely passive communication between authoritative parties, an ambiguous flow of communication between institutions, and conflicts of interest that potentially harmed the collaborative practices implemented in the three villages. Further examination showed that some stakeholders were involved in supporting and bridging the communication process between the villages.
It demonstrates that at least they engaged multiple stakeholders, including a higher education institution as the supporting partner, the subdistrict government as the leading sector to manage the administrative area, the regency government as the actor for program monitoring and evaluation, and other institutions such as --- Giving and Receiving Feedback Feedback, or giving advice, is an important element in an organization. Armstrong (2009) stated that feedback given to individuals about their performance is crucial for performance management. In this regard, feedback provides information on performance results, events, critical incidents, and significant behavior. While positive feedback informs the recipients of their good conduct and constructive feedback offers suggestions on how to do something better, negative feedback tells the recipients of their poor performance (Armstrong, 2009). It has been reported that feedback can reinforce effective behavior and point out where and how behavior should be changed. Rusli Lutan (2001) explains that feedback is knowledge acquired relative to particular tasks, actions, or responses given. Based on these expert definitions, we conclude that feedback is information related to individual and management capacity to keep improving. Collaboration between institutions creates a mutually influential or interdependent relationship. Similar to the pattern of network involvement in Figure 2, inter-village collaboration will lead to a high level of interdependency, especially between collaborating villages and other stakeholders such as the Rural Collective Business Entity (BUMDESA), BKAD, and the subdistrict government. In other words, when one party is underperforming, the overall collaboration will suffer. Giving and receiving feedback is a crucial competency in order to produce the best performance to achieve shared goals.
Nevertheless, giving and receiving feedback may be perceived as taboo because local culture and tradition have suggested that correcting other people should be avoided in order to remain sensitive to others' feelings. As mentioned before, giving and receiving feedback becomes urgent when problems like sectoral ego emerge. However, despite establishing a memorandum of understanding between the parties, each village frequently upheld its own interests. Reflecting on previous studies, we highlighted that the contributing factors to the success of giving and receiving feedback for the community to create constructive collaboration practices in rural development are: 1. Give and receive feedback using emotional intelligence, embodied in polite, emotionally controlled manners; 2. Give and receive feedback sensibly regarding shared problems; 3. Give and receive feedback trustfully; 4. Give and receive feedback that focuses on performance rather than personal character. --- Decision Making Ansell and Gash (2017) define collaborative governance as a regulation set by one or more public entities which directly involves non-public stakeholders in the process of making collective, formal decisions oriented towards consensus and forum discussion, aiming to stipulate or implement public policies or manage programs and public assets. Agrawal and Lemos (2007) (in Emerson et al., 2012). Engaging both approaches, we concluded that decision-making competency is a crucial aspect of the issue of collaborative governance. The pattern of the narratives and actors in decision making which emerged from the present study is as follows. Figure 3 illustrates the relationship pattern between actors and narratives emerging from the decision-making process in carrying out collaborative rural development.
Similar to the giving and receiving feedback competency, the collaboration and intercorrelation patterns have made decision-making a required instrument in the overall competency, because every decision made will affect the achievement of targets in shared interests. Discussions and arguments about decision making highlighted program planning and execution, in which the decision makers should fully consider all aspects to come up with the decision that is the best and most acceptable solution for all parties. To arrive at the best decision alternatives in collaborative rural development, some elements need to be considered. First, decision makers need to better understand the problems currently faced. The situation may turn worse when every stakeholder promotes their own interests, so decisions made should uphold mutual interests so that no party feels disadvantaged. Then, the decision maker should anticipate the potential issues or challenges after the decisions have been made. Therefore, the ability to gather information and analyze data is crucial to arrive at the right decision. By considering all available information and relevant data about the issues, decision-makers are expected to produce several alternatives with their risks and benefits. Then, these alternatives should be communicated in the forum for collective analysis to arrive at mutual decisions. Developing such a competency pattern is also expected to encourage mutual respect between parties so that the collaboration will be strengthened and impact the progressive development of the rural area that provides benefits for the local community. As explained by Mitchell, disagreement and conflict behavior are the common roots of conflict. This issue is fueled by the sectoral ego of each village, which wants better development for its area than others. Meanwhile, each village has previously agreed to form a partnership and collaborate in rural development.
Some findings of this study concluded that conflict management must meet several elements. First, it must have an accommodative capacity which means that each village must be open-minded to different aspirations, perspectives, and opinions. In this way, every collaborating party should have mutual understanding and respect. One of the ways to manage conflict is by listening to others' opinions, respecting differences, and upholding others' interests proportionally. Second, collaborative capacity is the capacity to turn conflict into a positive by offering opportunities to parties involved in the conflicts to enable them to collaborate. Third, the capacity to compromise is crucial to arrive at common ground. Disputes frequently occurred between the three villages, and conflicts of interest and disagreement will always exist. Therefore, every party should try to compromise and respect one another to come up with a mutual agreement. --- CONCLUSION This study develops a collaborative competency model between the actors of intervillage collaboration to develop rural areas. There are four aspects of competency required in this process. First, communication is crucial in collaborative development to build rural areas because communication enables all parties to express and achieve the shared goals in rural development. Second, giving and receiving feedback is an important competency to carry out collaborative development in rural areas. Positive or negative feedback may offer beneficial impacts on both organizations and individuals engaged in the collaboration. Third, decisionmaking is required in every implementation of collaboration because the decision will impact the achievement of mutual goals. Lastly, conflict management is vital for implementing collaborative development in rural areas because it serves as an evaluation system in the development of collaboration. --- Conflict Management The last aspect of this finding is management conflict competency. 
Conflict is a dispute between multiple interests, values, actions, or directions that has historically been integrated into life (Johnson & Lawang, 1994). Conflict, whether positive or negative, is inevitable. The positive aspects of conflict may emerge when conflict helps identify ineffective resource management processes, clarifies ambiguous ideas and information, and resolves misunderstandings. The roots of prevalent conflict include four aspects: different knowledge and understanding, different values, different interests, and personal affairs or historical background (Mitchell & Setiawan, 2003). For this conflict management competency, we found the following patterns between actors and narratives. Figure 4 illustrates the narrative patterns emerging from the conflict management competency in inter-village collaborative development. The conceptual definition of conflict management is the process, art, knowledge, and all resources available to individuals, groups, or organizations to achieve the goals of conflict management (Santosa, 2002). Conflict management is a series of actions and reactions between the actors and external parties in a conflict. This competency requires the involved parties to manage conflict because it potentially
The aim of this study is to formulate a model of collaborative competence for village government officials in the context of rural development in Indonesia. This study is conducted to fill the research gap on collaborative competence in the context of rural development in Indonesia, which is considered a strategic key to supporting village development in the country. The collaborative competency model is the concept of rural community development proposed in this study. Boyatzis (1982) defined competency as an individual's capability manifested in their attitude parallel to work demands. According to Woodruffle (1992), competency should not be considered an element, but rather a concept to demonstrate one's understanding of the relationship between the expected and desired implementations of a project based on previous implementation information. Design/Methodology/Approach: This study uses a qualitative approach with descriptive analysis, which emphasizes context, research settings, and subjective references. Rural development in Indonesia has received focused attention through Nawacita, and Banyumas Regency in Central Java has built an agrotourism-based site in the villages of Tanggeran, Klinting, and Kemawi in the Gunung Kendeng area, Somagede District. The results of this study are expected to contribute to the development of social sciences, particularly regional development. The results show that three key elements are required for effective conflict resolution. First, each party must be open to different perspectives and opinions while demonstrating mutual understanding and respect. Second, collaborative capacity allows parties to turn disagreements into opportunities for collaboration.
Finally, despite existing conflicts of interest and disagreement, the ability to compromise is required to reach a mutual agreement. The findings of this study indicate that effective collaborative capabilities among all parties involved, including the government, community, and private sector, are necessary for the development of rural areas in Indonesia. This highlights the significance of cooperation and collaboration among different parties in striving to enhance rural welfare and development in the country. Additionally, this research contributes to the advancement of social sciences, particularly in the realm of regional development. Consequently, this article serves as a valuable resource for researchers and practitioners seeking to enhance their collaborative competencies within the context of rural development in Indonesia.
family members as primary decision-makers. If extended families shape the objectives and constraints of households, then neglecting the role of this network may lead to an incomplete understanding of health-seeking behaviour. Understanding the decision-making processes behind care-seeking may improve behaviour change interventions, enable better intervention targeting and support health-related development goals. This paper uses data from a cluster randomised trial of a participatory learning and action cycle (PLA) through women's groups to assess the role of extended family networks as a determinant of gains in health knowledge and health practice. We estimate three models along a continuum of health-seeking behaviour: one that explores access to PLA groups as a conduit of knowledge, another measuring whether women's health knowledge improves after exposure to the PLA groups and a third exploring the determinants of their ability to act on knowledge gained. We find that, in this context, a larger network of own family is not associated with women's likelihood of attending groups or acquiring new knowledge, but a larger network of husband's family is negatively associated with the ability to act on that knowledge during pregnancy and the postpartum period. --- Introduction Economists have long expressed health gains as the result of a household production function in which care-seeking is an input (Grossman, 1972; Becker, 1973). Models of health production and health care demand (Grossman, 1972, 2000) commonly account for the role of nuclear family members in shaping investments in health. Building on work by Becker (1973, 1974), Jacobson (2000) postulates a framework in which family members have common preferences in health production, assuming that family members will obey all decisions made by the family. Bolin et al. (2001) then present a model in which investment in health is decided through a bargaining process within the family.
1 They stress the importance of conflicting interests between husband and wife (Bolin et al., 2001), and in later work allow for conflict and strategic behaviour within the nuclear family (Bolin et al., 2002). Few studies have considered that nuclear families are embedded within extended family networks. If extended families shape behavioural objectives and constraints, then neglecting this network may lead to an incomplete understanding of health-seeking behaviour. The role of extended families may be particularly relevant in poorer settings that are frequently characterised by missing or incomplete safety nets, missing markets and correlated shocks to economic and physical well-being (Cox and Fafchamps, 2008). Understanding the adoption of new knowledge or health care practices in this context may support behaviour change interventions, improve intervention targeting and support health-related development goals. While previous studies have analysed the determinants of maternal and neonatal care in Nepal (Niraula, 1994; Acharya and Cleland, 2000; Hotchkiss, 2001), none have yet focussed on the potential role of kinship networks in promoting health gains or losses. Outside of the health and development discourse, the existing literature has proposed community and kinship networks as a source of private transfers and financial risk sharing (Cochrane, 1991; Townsend, 1994). Savings and credit associations are practical examples of financial risk sharing within community networks (e.g. Besley et al., 1993; van den Brink and Chavas, 1997; LaFerrara, 2003). The analysis of financial transfers between households has also highlighted their role as risk-sharing mechanisms (e.g. Rosenzweig, 1988; Rosenzweig and Stark, 1989; Fafchamps and Lund, 2003), indicating that such transfers usually take place between close relatives (see e.g. Lucas and Stark, 1985; Fafchamps and Gubert, 2007).
Other studies of labour markets showed how family networks relay information about job or business opportunities. Granovetter (1995) similarly documented the role that networks play in matching workers and employers, emphasising the important role of weak ties over strong ties in diffusing new information and knowledge. Montgomery (1991), in contrast, proposed a model in which employed workers help their employer identify suitable recruits, who are often relatives (Barr and Oduro, 2002). Munshi (2003) provided evidence of how information about business opportunities circulates in family and ethnic networks. If information about employment opportunities is circulated in this way, it is possible that information about appropriate health behaviour and access to services is also circulated through extended family networks. While the literature on risk sharing (or opportunity pooling) suggested that extended family networks may positively impact appropriate care-seeking, other work suggested that this impact may be negative. Numerous studies from economics, anthropology and sociology have found mixed results. Some studies show that networks and family ties can have a negative effect on individual well-being when cultural norms and traditions prevent acting on new information, including the adoption of innovative and potentially beneficial behaviours and technologies. For example, Adongo et al. (1997) found that a high risk of social ostracism and familial conflict prevented the uptake of contraceptive use in rural Ghana, even when services were freely available. Similarly, Sear et al. (2003) found that the presence in the household of the husband's mother and, to a lesser extent the husband's father, increased the probability of a woman giving birth in rural Gambia; i.e., it increased her fertility rate, together with the associated health risks of high fertility in that context. 
Conversely, several other studies conducted in Africa and Asia showed that family networks may have a positive influence in matters related to the different stages of childbirth. For example, Aubel et al. (2004) found that Senegalese grandmothers have the ability to learn, to integrate new information into their practices and to positively influence the practices of women of reproductive age. Their results supported the need for future maternal and child health programmes to involve grandmothers and, in so doing, to build on their intrinsic commitment to family well-being. A number of studies have focused on the role of maternal and paternal grandmothers and kin and found that maternal grandmothers and maternal kin have a positive effect on child survival, child health and nutrition (see, among others, Sear and Mace, 2008). Similarly, Karmacharya et al. (2017) focused on the associations between grandmothers' knowledge and infant and young child feeding practices and tested whether the associations are independent of, or operate via, maternal knowledge. Their findings suggested that grandmothers' correct knowledge translated into mothers' correct knowledge and, therefore, optimal infant and young child feeding practices. In the context of an intervention aiming to change health practice through information dissemination, the expected effect of extended family networks on health-seeking behaviour may be positive or negative. On the negative side, larger families might exert more pressure on women to adhere to traditions and social norms in spite of new information received. This would result in less appropriate care-seeking in societies with norms that promote the seclusion of women or the use of traditional practices that carry health risks. This paper uses cross-sectional data from rural Nepal to empirically test the influence of family networks on positive health practices.
In this study, we proxy family networks with the number of female relatives living in the same village development committee (VDC), distinguishing between a woman's own relatives and her husband's relatives. Husband's and own relatives are differentiated because women in this context tend to live with their husband's families after marriage, usually in extended family groups. Data collection was embedded within the surveillance system of a cluster randomised control trial to reduce neonatal and maternal mortality. The intervention comprised community-based women's groups working through a participatory learning and action cycle, henceforth PLA (Mesko et al., 2003; Manandhar et al., 2004; Morrison et al., 2005; Wade et al., 2006; Prost et al., 2013).2 The PLA groups disseminated information about appropriate health care practices for pregnant women and their newborn children. Women were free to attend or not attend the groups, and were free to act or not act on the information shared in the groups. Evaluation of the trial showed that the intervention reduced neonatal deaths by 30% in the intervention areas and that women in intervention areas were more likely to have antenatal care, an institutional delivery, a trained birth attendant and hygienic care compared with women in control areas (Manandhar et al., 2004; Prost et al., 2013). In this paper, we explore whether larger family networks are positively or negatively associated with the adoption of these and other potentially beneficial care-seeking practices by women during the perinatal period. This paper is organised as follows. Section 2 describes the study location and gives further detail on the data and data collection. Section 3 describes the analytical methodology and Section 4 presents the main results. Section 5 concludes with a brief discussion of the results and implications for future research in this area. 2 Manandhar et al. (2004) focus on the effect of the participatory intervention with women's groups on birth outcomes, as summarised above. Morrison et al. (2005) focus on the functioning of the women's groups. They describe the implementation, including the community entry process, facilitation of monthly meetings, community planning and implementation, and evaluation of strategies to tackle problems within the group discussions. They find that the women's groups developed varied strategies to tackle problems of maternal and newborn care. Wade et al. (2006) compare perinatal care-seeking before and after the intervention. They analyse whether the programme increased antenatal care, the use of a boiled blade to cut the cord, appropriate dressing of the cord and retaining colostrum. Among those not initially following good practice, women in intervention areas were significantly more likely to do so later for all four outcomes. Mesko et al. (2003) focus on information gathered from case studies and focus group discussions with women, family members and health workers. They find that early pregnancy was often concealed, preparation for birth was minimal and trained attendance at birth was uncommon. Family members were favoured attendants, particularly mothers-in-law. There were delays in recognising and acting on danger signs, and in seeking care beyond the household, in which the cultural requirement for maternal seclusion played a part. --- Data --- Study area The study was based in the district of Makwanpur, a central region of Nepal. It had a population of nearly 400,000 people, covering an area of 2500 km2 and including both hills and plains. Most residents were engaged in small-scale agriculture at the time of the trial. There were more than 15 ethnic groups, the largest of which was Tamang (a predominantly Buddhist, Tibeto-Burman group), followed by Brahmin and Chetri (groups of Indo-Aryan origin). The district was geopolitically divided into 43 VDCs.
The district hospital in the municipality of Hetauda had facilities for antenatal care and delivery. Perinatal care was available through a network of primary health centres, health posts, sub-health posts and outreach clinics. Traditional birth attendants were available throughout the district, but their services were costly and often not affordable for families (Borghi et al., 2006). --- Data As mentioned previously, data collection for this study was embedded within the surveillance system of a cluster randomised control trial to reduce neonatal and maternal mortality. For the trial, 12 pairs of VDCs were selected within the district and one of each pair was randomly assigned to the intervention or control group. In the intervention clusters, PLA meetings were organised to identify existing perinatal problems and formulate strategies to address them at a local level.3 In the second phase of the programme, the intervention was extended to the original control areas. During that phase, a sub-study aimed to collect data on social networks, spread of information, demographic and socio-economic characteristics, previous pregnancies, distance to group meetings and distance to health care facilities. These data were collected from the same 12 pairs of VDCs between January 2007 and May 2008. At 1 month postpartum, women were interviewed about antenatal care, delivery and post-delivery care, home-care practices, maternal morbidity, neonatal morbidity and health service use, as well as information on demographic and socio-economic characteristics. A sub-sample of women were also asked questions about social networks, spread of information within the family, participation in women's group meetings and distance to the PLA group meetings and health care facilities.
These women were asked to list up to five female4 relatives currently living in the same ward and the same VDC.5 Relatives were categorised as sisters, wives of brothers, husband's sisters, wives of husband's brothers, mother and mother-in-law. This categorisation makes it possible to distinguish between 'own family' (sisters, wives of brothers and mother) and 'husband's family' (husband's sisters, wives of husband's brothers and mother-in-law), as described later in Section 3. The sample used for the analysis in this paper consists of 1749 women who answered both the main trial questionnaire and the additional social network questionnaire. The demographic and socio-economic characteristics of the sample are described in detail in Table 1. In summary, the average age of the women in our sample is 26 years (SD 6.49), and the average age at marriage is 17 years (SD 2.84). In all, 52% of women in the sample have no education, and only 47% of women were able to read a basic line of text. The most common source of drinking water is the river and public pipes (73%), and most homes are constructed from mud and stone (61%). Most women belong to households where the main occupation is agriculture (94%). Women lived an average of half an hour from the nearest PLA group meeting place, and just over an hour from the nearest health care facility. In Table 1, the 15 ethnic groups in our sample are collapsed into four categories as follows: Tamang (66%), Brahmin-Chhetri (14%), Magar (4%) and other (15%). The wider anthropological literature6 describes Tamang as the major Tibeto-Burman-speaking community in Nepal, who maintain the belief that they originate from Tibet. Most Tamang are self-sufficient in terms of food and are the owner-cultivators of their land. The Tamang community is divided into clans that are exogamous. Preferred marriage is between cross-cousins. The Brahmin-Chhetri population has had a dominant role in the formation of the Nepali nation.
They rank highest in the caste hierarchy and form the majority of influential and wealthy people of traditional Nepal. Their main occupations are farming and government service. Among them, the richest are landlords, senior officers in the army or political leaders. Brahmin-Chhetris do not practice cross-cousin marriage. Village exogamy is observed. Magar are mostly Hindu. Agriculture is the basis of the Magar economy, which is largely self-sufficient. Magar are endogamous. Magar women occasionally marry outside the group, but men almost always marry within the group, where they can marry anyone within the Magar community except members of their own patrilineage. Again, cross-cousin marriage is preferred. The residual group of ethnicities is heterogeneous. It includes privileged ethnicities such as the Newar, as well as less privileged ethnicities such as Praja and Kami. Newar are the indigenous people of Nepal's Kathmandu Valley and are prominent in every sphere, from agriculture, business, education and government administration to medicine, law, religion, architecture, fine arts and literature. --- Methodology To explore the potential role of family networks in influencing health behaviour in this context, we construct three linear regression models. First, we estimate the number of times a woman attended PLA groups to establish the determinants of participation. Next, we estimate the level of knowledge regarding positive health care practices, and the determinants of that knowledge. Finally, we estimate the determinants of positive care practice. In this study, we proxy family networks with a count variable that enumerates the number of female relatives living within the same VDC, distinguishing between women's own relatives and husband's relatives. On average, women in the sample had 1.26 (SD 1.31) 'husband's' female relatives and 1.42 (SD 1.36) 'own' female relatives within the same VDC.
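The regressions specified in the next paragraphs use these network counts as regressors, with inference clustered at the VDC level. A minimal numpy sketch of that estimation strategy follows; the data are simulated and the variable names (wife_rel, husband_rel, dist_pla) are illustrative assumptions, not the study's dataset:

```python
# Minimal sketch (simulated data, assumed variable names) of an OLS model of
# PLA attendance with standard errors clustered at the VDC level.
import numpy as np

rng = np.random.default_rng(0)
n, n_clusters = 480, 24                      # 24 VDCs, as in the trial
vdc = rng.integers(0, n_clusters, n)         # cluster (VDC) id per woman
wife_rel = rng.integers(0, 6, n).astype(float)
husband_rel = rng.integers(0, 6, n).astype(float)
dist_pla = rng.uniform(0, 2, n)              # hours to nearest PLA group
pla = 2.0 + 0.1 * wife_rel - 0.5 * dist_pla + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), wife_rel, husband_rel, dist_pla])
beta = np.linalg.lstsq(X, pla, rcond=None)[0]   # OLS point estimates
resid = pla - X @ beta

# Cluster-robust (CR0 sandwich) variance: sum the score contributions
# within each VDC before forming the "meat" of the estimator.
bread = np.linalg.inv(X.T @ X)
meat = np.zeros((X.shape[1], X.shape[1]))
for g in np.unique(vdc):
    s_g = X[vdc == g].T @ resid[vdc == g]
    meat += np.outer(s_g, s_g)
se = np.sqrt(np.diag(bread @ meat @ bread))
print(dict(zip(["const", "wife_rel", "husband_rel", "dist_pla"], beta)))
```

The full specification in the paper additionally includes the socio-demographic vector X and the MPI; with only 24 clusters, the asymptotic clustered standard errors above are supplemented by a wild-cluster bootstrap, as described in the Results section.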
As group participation, level of knowledge and positive care-seeking are all enumerated by continuous variables, we estimate linear regression models specified as follows: PLA_c = α + β₁·wife_rel_c + β₂·husband_rel_c + γ·X_c + δ₁·dist_PLA_c + δ₂·dist_healthinst_c + ε_c, where PLA indicates the number of times a woman attended the group in cluster c. The variables wife_rel and husband_rel represent, respectively, the number of the woman's and her husband's relatives. X is a vector of socio-demographic characteristics summarised previously in Table 1, including age, age at marriage, previous pregnancies and ethnicity. This vector also includes a proxy of wealth that is measured using a multi-dimensional poverty index (MPI) (Maasoumi, 1986; Bourguignon and Chakravarty, 2003; Alkire and Foster, 2011). The index used in this text covers the same three dimensions as the Human Development Index, i.e. education, health and standard of living,7 and 'captures a set of direct deprivations that batter a person at the same time' (Alkire and Santos, 2011). In this context, where households may arguably be described as homogeneously poor, it is a more comprehensive measure of deprivation that differentiates households in a meaningful way. The variables dist_PLA and dist_healthinst indicate, respectively, the time needed to reach the nearest PLA group and the nearest health institution. ε_c is an error term that we assume to be independently distributed. The subscript c stands for the cluster (VDC). In the model of level of knowledge, we also include PLA participation as an independent variable in a model specified as follows: health_know_c = α + β₁·wife_rel_c + β₂·husband_rel_c + θ·PLA_c + γ·X_c + δ₁·dist_PLA_c + δ₂·dist_healthinst_c + ε_c. Health knowledge (the variable health_know) is measured using a count variable that adds up a woman's knowledge of 18 'good' behaviours during the three key stages of childbirth, i.e.
pregnancy, delivery and the postnatal period. In each instance, respondents were asked what care, in their opinion, mothers needed during each stage. To reduce respondent bias, the list of possible behaviours included positive, negative and neutral behaviours. These behaviours are summarised in Table 2, where 'good behaviours' included in the health knowledge count are numbered and those excluded from the count are not. A woman's level of health knowledge is then the sum of the good behaviours of which she is aware. In this sample, respondents were aware of an average of 4.56 (SD 3.12) positive behaviours. In the model of positive health care, we additionally include the level of knowledge as an independent variable in a model specified as follows: healthcare_c = α + β₁·wife_rel_c + β₂·husband_rel_c + θ·PLA_c + λ·health_know_c + γ·X_c + δ₁·dist_PLA_c + δ₂·dist_healthinst_c + ε_c. Health behaviour may include a range of possible behaviours as listed in Table 3. As with health knowledge, these behaviours span the three key stages of pregnancy, delivery and the postnatal period. To construct a single variable for health behaviour, respondents were asked which behaviours they undertook. These responses were then combined using a first-order factorial from a principal components analysis to form a normalised index of care-seeking with a value between 0 and 1. A count measure is not appropriate for this variable as the behaviours are not additive in the same way as knowledge; for example, a delivery might take place in a health facility or it may be conducted at home by a skilled birth attendant. Both of these behaviours are positive, but they are mutually exclusive. The constructed 'health behaviour index' has a high scale reliability coefficient of 0.7845 and a skewness of 0.3668.
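The index construction just described (first principal component of non-additive behaviour indicators, rescaled to lie between 0 and 1) can be sketched as follows; the eight binary behaviours here are simulated placeholders, not the actual Table 3 items:

```python
# Illustrative reconstruction of a care-seeking index: take the first
# principal component of binary behaviour indicators and rescale to [0, 1].
import numpy as np

rng = np.random.default_rng(1)
# 300 women x 8 hypothetical binary care behaviours (names are assumptions)
B = (rng.uniform(size=(300, 8)) < 0.4).astype(float)

Bc = B - B.mean(axis=0)                      # centre each behaviour
cov = np.cov(Bc, rowvar=False)               # 8 x 8 covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
w = eigvecs[:, -1]                           # loadings of the first component
scores = Bc @ w                              # first-component scores
index = (scores - scores.min()) / (scores.max() - scores.min())  # to [0, 1]
```

A min-max rescaling of the first-component scores is one common way to obtain a 0-1 index; the paper does not state its exact normalisation, so this step is an assumption.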
Table 4 shows the descriptive statistics of the outcome variables in the three models presented, namely the number of times PLA groups were attended, the level of knowledge and the positive health care index, by four age groups (below the 25th percentile, between the 25th and the 50th percentile, between the 50th and the 75th percentile and above the 75th percentile). Although there is no clear age-dependent pattern for the number of times PLA groups were attended, both the level of knowledge and the positive health care index are higher for younger women (age below the median) and are lowest for older women (age above the 75th percentile). --- Results The results for the three linear models of PLA participation, health knowledge and health behaviour are summarised in Table 5. In all the regressions, confidence intervals use heteroscedasticity-robust standard errors clustered at the community level (VDC level). Moreover, given the number of communities, we adopted the wild-cluster bootstrap-t procedure of Cameron et al. (2008). This procedure is shown to improve inference in cases of fewer than 30 clusters, which applies here as the total number of communities participating in the programme is 24. Estimates for PLA participation in column 1 suggest that family networks do not significantly affect PLA participation. However, women who married later in life or are living further from the nearest PLA group attend less often. Conversely, women who have had previous pregnancies or are multi-dimensionally less poor attend a greater number of groups. Estimates for health knowledge in column 2 indicate that more frequent PLA participation significantly and positively affects health knowledge. The only other significant determinant of health knowledge is multi-dimensional poverty: less poor women have greater knowledge of maternal and newborn care. As with PLA participation, family networks do not affect the level of health knowledge.
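The wild-cluster bootstrap-t of Cameron et al. (2008), used because only 24 clusters are available, can be sketched for a single coefficient as follows. All data are simulated, the null is imposed before resampling, and Rademacher sign flips are drawn once per cluster:

```python
# Minimal sketch of the wild-cluster bootstrap-t for one coefficient,
# with the null imposed and Rademacher weights drawn at the cluster level.
import numpy as np

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

def cluster_t(X, y, groups, k):
    """t-statistic for coefficient k using a CR0 cluster-robust SE."""
    b = ols(X, y)
    u = y - X @ b
    bread = np.linalg.inv(X.T @ X)
    meat = sum(np.outer(X[groups == g].T @ u[groups == g],
                        X[groups == g].T @ u[groups == g])
               for g in np.unique(groups))
    se = np.sqrt((bread @ meat @ bread)[k, k])
    return b[k] / se

rng = np.random.default_rng(2)
n, G, k = 480, 24, 1                           # 24 clusters, test coefficient 1
groups = rng.integers(0, G, n)
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = 1.0 + rng.normal(size=n)                   # true coefficient k is zero

t_obs = cluster_t(X, y, groups, k)

# Re-estimate under H0 (drop regressor k), then flip the signs of the
# restricted residuals cluster by cluster to build the bootstrap samples.
X0 = np.delete(X, k, axis=1)
u0 = y - X0 @ ols(X0, y)
y0 = y - u0                                    # fitted values under the null
t_boot = []
for _ in range(199):
    w = rng.choice([-1.0, 1.0], size=G)        # one Rademacher draw per cluster
    t_boot.append(cluster_t(X, y0 + w[groups] * u0, groups, k))
p = np.mean(np.abs(t_boot) >= abs(t_obs))      # symmetric bootstrap p-value
```

With G clusters there are only 2^G distinct sign vectors, so a modest number of replications such as 199 or 999 is conventional.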
Estimates of positive health care practices in column 3 show further that the level of knowledge is a positive and statistically significant determinant of good practice. Other positive determinants of good practice include older age at marriage and being multi-dimensionally less poor. In contrast with the two previous models, the number of husband's relatives in a woman's family network negatively and significantly predicts care practice. This finding suggests that women living in larger husband's family networks are less likely to adopt good health care practices even with the same level of knowledge as contemporaries with smaller husband's family networks. Other significant negative determinants of health practice include current age (with older women less likely to report positive care practices), having had a previous pregnancy, distance from a PLA group, distance from a health institution and being of Tamang, Magar or other ethnicity relative to Brahmin-Chhetri. --- Discussion and conclusion This paper reviewed the existing literature on the role of family networks in shaping health-seeking behaviour. While there have been a number of studies describing the effect of nuclear families on decision-making, the potential role of extended family networks is less well understood. Existing evidence was used to explain how the expected effect of extended family networks on health-seeking behaviour may be positive or negative. This paper used cross-sectional data from rural Nepal to empirically test the role of extended family networks on the acquisition of knowledge about positive health care practices, and then the impact of networks on the practice of positive care in that context. We measure family networks by counting the number of female relatives living in the same local area, distinguishing between women's own relatives and husband's relatives.
We find that, in this context, family networks do not affect women's ability to attend PLA groups as the source of knowledge, nor women's ability to absorb and recall knowledge gained at the group. However, family networks are a significant and negative determinant of women's ability to act on the knowledge gained and engage in positive health practices. We find further that the differentiation between own and husband's family networks is an important one in this context. While a woman's own family network has no significant effect on health behaviour, the size of her husband's family network has a direct and negative effect on health behaviour. The difference in the effect of the two networks (own and husband's) is perhaps unsurprising given that women in this context live within the marital/husband's home and are thus physically located within the husband's extended family network. As such, this network might be considered to consist of strong ties. These data thus provide early evidence for the hypothesis that larger families exert more pressure on women to adhere to traditions and social norms in spite of new information received. This would result in less appropriate care-seeking in societies with norms that promote the seclusion of women or the use of traditional practices that carry health risks. Unfortunately, our data do not allow us to better investigate the role of tradition and social norms, and our interpretation of the results remains speculative. Indeed, there may be other factors driving the results.
The main alternative factors leading the husband's family network not to support or encourage positive health care practices during the different stages of childbirth might be perceptions by members of the husband's family network that antenatal or postnatal care was not beneficial, based largely on their own past experiences; the scarcity of resources under their control; and power relations between mothers-in-law and other members of the husband's family on the one side and daughters-in-law on the other. In addition, we find that a higher multi-dimensional wealth index positively predicts participation in knowledge-generating activities (PLA groups in this case), the level of health knowledge and good health practice. PLA participation is the only other significant predictor of knowledge aside from multi-dimensional poverty. Level of knowledge in turn positively predicts health practice, as does close proximity to a health institution. Notably, having married older positively predicts health practice but negatively predicts PLA group participation and thus ostensibly knowledge acquisition. This is independent of the effect of education, captured within the MPI. Although marrying older negatively affects PLA participation, it does not, however, significantly affect a woman's level of knowledge. In this context, where very early marriage is the norm and 90% of women are married by 20 years of age, older age at marriage may be capturing something other than an age differential, instead measuring a girl's (and her family's) willingness and ability to delay marriage. Older age at marriage will result in older age at first parity and possibly also a higher status within the household. Women in our sample who marry older have a higher level of education (p = 0.00). Women with a higher level of education similarly have a higher level of health knowledge (p = 0.00).
A brief analysis of PLA group non-participants in this context further shows that women who marry older have a higher level of knowledge than non-participants who marry younger. Among group participants, however, the difference in health knowledge is no longer significant. The PLA groups raise the level of knowledge among attendees, and women marrying younger attend more PLA groups, resulting in a levelling effect. Controlling for level of knowledge, women who marry older are then more likely to be able to act on their knowledge. One known limitation of this analysis is our inability to control for the possibly differential and mediating effect of individual empowerment on the acquisition of health knowledge and on resulting behaviour change. Age at marriage may, in part, be capturing this effect, and more work is required in this area. Conversely, current age is not a significant predictor of knowledge and is a negative predictor of health practice, suggesting instead that older women may be more likely to adhere to traditional behaviours or less likely to adopt new ideas. Perhaps unsurprisingly, distance from a PLA group negatively predicts group participation and health practice. Similarly, distance from a health facility negatively predicts practice. In conclusion, then, the extended husband's family networks within which women reside in rural Nepal are negatively associated with medical 'best practices' for maternal and child health, while no significant association is found for women's own family networks. One potential explanation is that husband's family networks exert pressure on women to adhere to traditions and social norms that conflict with current thinking around medical 'best practice'. This results in lower translation of new knowledge into practice. In this context, we find that analyses of extended family networks should differentiate between women's own relatives and husbands' relatives, or risk a misleading null result overall.
Although these findings relate directly to the surveyed communities in Nepal, they may also apply to other comparable societies where families live in extended family groups, with norms that promote the seclusion of women or the use of traditional care practices that carry health risks. These findings suggest that health information and behaviour change interventions targeted at women in this context will also need to engage the wider family network to maximise their effectiveness. Strategies to delay age at marriage or reduce multi-dimensional poverty may also improve women's ability to act on health knowledge.

--- Appendix 8

1. The primary participatory learning and action group trial

As mentioned in Section 2, our study takes advantage of an existing surveillance system, designed around a large cluster randomised controlled trial of participatory learning and action groups. The original trial was conducted between 2001 and 2003 and led by the UCL Institute for Global Health, in partnership with the Nepali NGO Mother and Infant Research Activities (MIRA). The intervention consisted of monthly community-based participatory learning and action group meetings, facilitated by a local non-health professional. Group participants explored health issues around pregnancy, childbirth and newborn health. The primary cycle consisted of a series of 10 meetings where the following issues were discussed:

1. The work of the MIRA team is introduced;
2. Discussion of how mothers and babies might die;
3. Discussion of how women approach maternal and neonatal issues;
4. Discussion of common local maternal and neonatal problems;
5. Planning of methods to collect information on the relevant issues in the community;
6. Sharing of the information collected, and identification of the most important problems;
7. Discussion of strategies for addressing these problems;
8. Planning of the involvement of other community members;
9. Preparation for a meeting with other community members;
10. Presentation of the previous work to other community members, and discussion of strategies with them.

The form of the intervention could not be defined in advance, as the nature of the discussion, levels of involvement and potential solutions differ from group to group.

--- Expanding the primary trial location and activities

Given the significant impact on mortality of the primary trial, UCL and MIRA had an ethical commitment to offer the intervention to the control areas. After a 2-year preparation period from 2003 to 2005, the original intervention was rolled out in the control arm, while a revised intervention focusing on care-seeking for childhood illness and involving men in maternal and newborn health was rolled out in the intervention arm.

--- The local health management committee trial

In January 2009, all participatory learning and action group activities were suspended in preparation for a new trial, the 'Local Health Management Committee (LHMC) Trial', which combined PLAs with the strengthening of Health Management Committees (HMCs) to increase skilled birth attendance. All of the 43 VDCs in Makwanpur district were randomised to intervention or control (independent of previous randomisation in the original trial), with 21 in intervention and 22 in control. No groups were run in control clusters of the LHMC trial by UCL or MIRA. The trial ran from 2010 to 2012, after which all activities closed. The intervention used the principles of the 'four-D' cycle of discovery, dream, design and destiny. A consultant conducted a training of trainers with MIRA researchers and representatives from the District Public Health Office, District Development Committee, and Family Planning Association of Nepal. Four-day workshops were then conducted in local health facilities in each of the intervention VDCs over 4 months. These workshops were attended by a district-level representative who had also attended the training of trainers. During the workshop, participants were exposed to a description of the maternal and newborn health situation in Nepal and government strategies and priorities. After briefing participants about the 'four-D' intervention, participants were invited to follow the 'D' cycle:

• 'Discover' the success of their health institutions and remember who provided support or resources to facilitate this success;
• 'Dream' of how health institutions and the quality of services should be in order to guarantee appropriate maternal and newborn care;
• 'Design' a strategy to achieve their vision;
• 'Destiny': the last phase of this intervention is completed after Health Management Committees have implemented their plans, and participants present their accomplishments and the lessons learned.

--- Suspension of active engagement in the area

From 1 October 2012 to January 2014, all interventions, programmes and surveillance activities led by UCL and MIRA ceased in the region. Follow-up activities are planned but not currently ongoing.
Family networks may serve as a source of private transfers and risk pooling. Extended family networks might therefore increase women's ability to act on information received and to access appropriate care.
Introduction

Social support from family and friends is crucial for maintaining health and well-being. It is a broad concept based on interpersonal interactions, in which individuals perceive they have access to reliable friends or family members to rely on, both during good and challenging times 1,2. Good social relationships provide the emotional and practical resources people need to feel cared for and valued, which can encourage the adoption of healthier behaviors 3. For this reason, social support is widely recognized by the scientific community and the World Health Organization (WHO) as an important health determinant, given its protective effects on individuals' physical and mental well-being 3,4. It also shows a positive association with health promotion behaviors, quality of life, and self-realization, directly influencing how individuals perceive their health 5,6. While most research on health demography and social epidemiology has focused on older adults, investigating midlife health is equally essential for several reasons. From a demographic perspective, middle-aged adults (often in their 40s and 50s) form a substantial and growing segment of populations in many countries, influencing key demographic indicators such as population size, aging trends, and healthcare use 7. Often referred to as the "sandwich generation", a term describing middle-aged adults squeezed between the obligation to care for their aging parents and the need to support their children 8,9, middle-aged individuals juggle multiple roles, serving as parents, caregivers, and sources of support for both younger and older generations 10. The level of social support they receive and perceive can significantly impact their mental and emotional well-being, caregiving abilities, and overall quality of life 11.
From a public health perspective, self-rated health and social support have significant implications for health promotion and disease prevention, especially during middle age, a critical period when many chronic diseases emerge 12,13. Furthermore, social support plays a pivotal role in buffering the effects of stress and adverse life events. Access to adequate social support can provide individuals with emotional and practical resources to cope with stressors and reduce their negative health impacts 11. Studies exploring the potential effects of social support on self-rated health among middle-aged adults living in Brazil, and how these effects vary between men and women, are scarce, a surprising gap considering the widespread concern about the prevalence of loneliness in recent decades 14,15,16. Men and women in midlife may experience distinct social expectations, roles, and stressors that can influence their self-rated health. Understanding this relationship is crucial for addressing their specific health needs and promoting gender equity in health 17. This study contributes to the current literature by examining whether social support is associated with self-rated health among middle-aged Brazilian adults and how this relationship varies between men and women. By identifying the factors associated with poor self-rated health and possible gender disparities, this study can inform the development of targeted interventions to improve the health of middle-aged Brazilian adults.

--- Methods

--- Study design and participants

This cross-sectional study relied on data from the Brazilian National Health Survey (PNS), a nationwide, population-based survey conducted in 2019 by the Brazilian Ministry of Health and the Brazilian Institute of Geography and Statistics (IBGE).
The PNS 2019 aims to describe the health situation and lifestyles of the Brazilian population and is representative of geopolitical macroregions, states, metropolitan areas, and the 27 capitals of the Federative Units. The PNS 2019 draws upon a multistage probabilistic sampling design, including individuals aged 15 years or over residing in private households, i.e., dwellings built for the exclusive purpose of habitation. The selected sample included 31,926 middle-aged adults (40-59 years old) who answered questions about social support and self-rated health. No ethical approval was needed, as this was an analysis of publicly available data with no personally identifiable information.

--- Main outcome measures

Health differences between men and women were analyzed as gender disparities in health. Although physical and physiological characteristics, such as chromosomal genotype, hormonal levels, and internal and external anatomy, play a role in health differences, this study recognizes that socially constructed roles, behaviors, and expectations associated with being male or female play the most significant role 18. Gender encompasses a wide range of non-biological traits, attitudes, and behaviors 19. Disparity is used in this context to refer to systematic, avoidable, and unfair inequalities in health and its social determinants, occurring within and between population groups and disproportionately affecting vulnerable populations due to inequalities in underlying social, political, and economic institutions 20,21. Individual-level self-rated health (the dependent variable) was assessed using the following question: "In general, how would you rate your health?". Answers to this question range from "very good" to "very poor". This variable was dichotomized, considering individuals who rated their health as "good" or "very good" as having "good" self-rated health and individuals who rated their health as "fair", "poor", or "very poor" as having "poor" self-rated health.
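The dichotomization described above is straightforward to express in code. A minimal sketch follows; the function name and the English category labels are illustrative, not the PNS codebook values:

```python
# Collapse the five-point self-rated health scale into the binary outcome
# used in the analysis. Labels are illustrative English equivalents.
GOOD = {"very good", "good"}
POOR = {"fair", "poor", "very poor"}

def dichotomize(rating: str) -> str:
    """Return 'good' or 'poor' self-rated health for a five-point rating."""
    rating = rating.lower()
    if rating in GOOD:
        return "good"
    if rating in POOR:
        return "poor"
    raise ValueError(f"unknown rating: {rating!r}")
```

For example, a respondent answering "fair" would be classified as having poor self-rated health.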
Information on social support was based on the following question in the PNS 2019: "How many (family members/relatives or friends) can you count on in good or bad times?". From this question, social support was defined as the perceived availability and adequacy of emotional, informational, and tangible resources provided by family members/relatives or friends during times of need or stress. The social support variables in the PNS 2019 present four distinct categories: none, one, two, and three or more. Thus, in this study, lack of social support refers to individuals who reported having no family members or friends to rely on. Social support is the main explanatory variable hypothesized to be linked with self-rated health, but control variables were included as well. The set of covariates considered in this study encompasses demographic and socioeconomic attributes, health behaviors, and healthcare access, all of which can significantly influence an individual's self-rated health. To capture demographic characteristics, age was categorized into four groups: 40-44, 45-49, 50-54, and 55-59 years. Additionally, household location (urban/rural), region of residence (North, Northeast, Central-West, Southeast, and South), marital status, and race/skin color (white, black, mixed-race, and other) were included as relevant factors affecting self-rated health. Socioeconomic status was measured by schooling level, divided into three categories: low (0-7 years), middle (8-11 years), and high (12 years or more). Moreover, being a current smoker was used as a proxy for health behaviors, as smoking habits can significantly impact overall health and well-being. To account for physical and mental health status, binary variables were included for chronic diseases (diagnosis of any chronic, physical, mental, or long-term illness), obesity (body mass index, BMI ≥ 30 kg/m²), and depression diagnosis.
The latter was assessed by investigating whether the individual had ever received a diagnosis of depression from a physician or mental health professional (psychiatrist or psychologist). This study also incorporated a dummy variable for health insurance coverage to assess the impact of healthcare access on self-rated health.

--- Statistical analysis

After selecting eligible individuals and potential variables for this study, a descriptive analysis was conducted based on the dependent variable and its covariates. Categorical variables were described by their absolute and relative frequencies. Pearson's chi-squared test with Yates' continuity correction was used to compare groups on categorical variables in the descriptive analysis. Cramér's V was employed to measure the association between nominal variables. P-values above 0.05 were interpreted as insufficient evidence to differentiate groups. Logistic regression models were employed to test for differences in self-rated health between middle-aged adults. Separate models were estimated for men and women to analyze gender differences in the association. The models were adjusted for potentially confounding variables such as sociodemographic characteristics, adulthood socioeconomic status, health behaviors, and physical and mental health status. A set of models was generated to test the additive and interactive effects between the variables. Sensitivity and residual analyses were also performed in the preliminary model selection rounds. Odds ratios (OR), a measure of association comparing the odds of an event occurring in one group with the odds of it occurring in another, were used to present the results. Only the final fitted models are presented in this study. Results were considered significant at p-value < 0.05. Cad. Saúde Pública 2023; 39(12):e00106323.
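For the 2×2 case, the descriptive statistics named above (Pearson's chi-squared with Yates' continuity correction, Cramér's V, and the odds ratio) reduce to closed-form expressions. The sketch below uses illustrative counts only; the actual analysis was run in R on the complex survey design, so this is a plain-sample approximation:

```python
import math

def chi2_yates_2x2(a, b, c, d):
    """Yates-corrected chi-squared statistic for a 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    num = n * (abs(a * d - b * c) - n / 2) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

def cramers_v_2x2(a, b, c, d):
    """Cramér's V for a 2x2 table (equals phi; built on the uncorrected statistic)."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    return math.sqrt(chi2 / n)

def odds_ratio(a, b, c, d):
    """Odds ratio: odds of the outcome in row 1 (a:b) over row 2 (c:d)."""
    return (a / b) / (c / d)
```

With hypothetical counts such as `odds_ratio(30, 70, 50, 50)`, an OR below 1 indicates lower odds of the outcome in the first group, matching the interpretation used in the results.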
All estimations were performed using the R program (https://www.r-project.org/) with appropriate methods to handle complex survey designs such as that of the PNS 2019.

--- Results

Figure 1 illustrates the proportion of individuals based on self-rated health (good and poor) (Figure 1a), social support received from family members (Figure 1b), and social support received from friends (Figure 1c) for both men and women, with 95% confidence intervals (95%CI). The overall prevalence of poor self-rated health among middle-aged Brazilians was 40.7%, with a significant difference between men (32.7%, 95%CI: 31.3; 34.2) and women (41.2%, 95%CI: 39.8; 42.5), indicating a higher prevalence of poor self-rated health among women. Approximately 5.7% of women and 5.3% of men reported not receiving any social support from family members (Figure 1b). More than 65% of the sample reported receiving family support from three or more members, with a higher proportion of men in this category. Regarding social support from friends, 21.4% of women reported having no friends to rely on in good or bad times. For men, this value was slightly lower, around 19.5%. Moreover, a higher proportion of men than women reported having three or more friends to rely on (Figure 1c). Table 1 shows other sample characteristics stratified by gender. Regarding schooling level, approximately 20% (n = 3,763) of women had a high education level compared to 17% (n = 2,576) of men. Most women in the sample presented chronic, physical, mental, or long-term illness (65.1%), compared to 48.7% of men. Furthermore, the prevalence of depression was greater among women (16.7%) than men (5.3%). The sample population predominantly resided in urban areas (77.4%), with most participants concentrated in the Northeast (34.2%) and Southeast (22.3%) regions. Most participants self-declared as mixed-race (50.2%).
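The 95%CI values quoted above come from survey-weighted estimation. As a rough unweighted illustration of how a confidence interval for a prevalence is formed, here is a simple Wald interval with hypothetical inputs (survey weights and design effects would change these intervals):

```python
import math

def wald_ci(p_hat: float, n: int, z: float = 1.96):
    """Approximate 95% Wald confidence interval for a sample proportion.

    Ignores the complex survey design, so it only illustrates the mechanics;
    real PNS intervals are design-adjusted.
    """
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return (p_hat - z * se, p_hat + z * se)
```

For a hypothetical prevalence of 0.5 in a sample of 100, `wald_ci(0.5, 100)` yields roughly (0.402, 0.598).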
Smoking was more prevalent among men (17.5%) than women (12.2%) in the sample. There was a higher proportion of women with obesity (25.4%) compared to men (21.6%). Additionally, the proportion of women with health insurance (24%) surpassed that of men (21.7%). Table 2 presents the results of the logistic regression for the overall population and stratified by gender. Results from the overall model showed that men were 17.6% less likely to report poor self-rated health than women, other factors being equal (OR = 0.824, 95%CI: 0.754; 0.900). Social support was also associated with lower odds of reporting poor self-rated health. For example, middle-aged adults with two friends were 16.3% less likely to report poor self-rated health (OR = 0.837, 95%CI: 0.737; 0.952) than those without friends. Those who receive support from three or more friends had even lower odds of reporting poor self-rated health, 24.8% lower than individuals who receive no support from friends.

Table 2. Risk of presenting poor self-rated health among middle-aged men and women by selected covariates (n = 31,926). Brazilian National Health Survey (PNS 2019). [Columns: General, Men, and Women, each reporting OR (95%CI) and p-value.]

Regarding support received from family/relatives, individuals who reported receiving support from three or more relatives were 16.5% less likely to report poor self-rated health than those who did not receive any support. However, there was insufficient evidence to establish differences in self-rated health between individuals with only one or two family members compared to the base group at a 95%CI.
Factors associated with a greater chance of reporting poor self-rated health also included older age, such as being in the 55-59 age group (OR = 1.487, 95%CI: 1.317; 1.679), living in rural areas (OR = 1.295, 95%CI: 1.176; 1.426), residing in the North or Northeast regions (OR = 1.034, 95%CI: 0.930; 1.150), having a low schooling level, and being black (OR = 1.307, 95%CI: 1.130; 1.512) or mixed-race (OR = 1.337, 95%CI: 1.218; 1.467). Marital status was not associated with poor self-rated health among middle-aged Brazilian adults. Regarding health characteristics, individuals diagnosed with a physical or mental illness were 72.9% more likely to report poor self-rated health than those without any diagnosed disease. Additionally, those who had not been diagnosed with depression (OR = 0.519, 95%CI: 0.459; 0.587), did not smoke (OR = 0.846, 95%CI: 0.756; 0.946), and did not have obesity (OR = 0.680, 95%CI: 0.616; 0.752) were at lower risk of reporting poor self-rated health. Results from models stratified by gender revealed interesting disparities in the association between social support and self-rated health (Table 2). The results showed that social support received from friends is a more significant factor for women's self-rated health than for men's. Specifically, women with three or more friends were 26.9% less likely to report poor health than their counterparts without friends (OR = 0.731, 95%CI: 0.624; 0.858), while men with three or more friends were 22.2% less likely to report poor health than men without friends (OR = 0.778, 95%CI: 0.643; 0.941). These differences are significant compared to the reference group without friends (p-value = 0.01 for men and p-value < 0.001 for women). However, it was not possible to establish differences in self-rated health between individuals with only one or two friends compared to the baseline group at a 95%CI.
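The "X% less likely" phrasings in these results are direct transformations of the reported odds ratios (strictly, they describe odds rather than risk). A small helper makes the mapping explicit:

```python
def or_to_pct_change(odds_ratio: float) -> float:
    """Map an odds ratio to the percent change in odds quoted in the text.

    Negative values mean lower odds: OR = 0.824 -> -17.6, i.e. 17.6% lower odds.
    """
    return (odds_ratio - 1) * 100

# Examples drawn from the reported estimates:
#   men vs women, OR = 0.824 -> 17.6% lower odds of poor self-rated health
#   ages 55-59,   OR = 1.487 -> 48.7% higher odds
```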
Regarding family support, the results suggest a weaker association with self-rated health for both men and women compared with social support received from friends, except for men with three or more family members they can rely on (Table 2). In this case, having three or more family members/relatives whom men can count on in good or bad times is associated with a 24.5% lower chance of reporting poor health than for men without family support (OR = 0.755, 95%CI: 0.603; 0.945). However, in the case of women, the gender-separated logistic regression model did not provide enough statistical evidence to establish an association between family support and poor self-rated health at a 95%CI. The variables related to sociodemographic characteristics, adulthood socioeconomic status, health behaviors, and physical and mental health status showed a pattern for men and women consistent with the general model. Specifically, poor self-rated health for men and women was associated with residing in rural households, living in the North or Northeast regions, having low education, being black or mixed-race, having a disease diagnosis, suffering from depression, smoking (among women), having obesity, and lacking health insurance.

--- Discussion

This study investigated whether the lack of social support was associated with poor self-rated health among middle-aged Brazilian adults and how it varied between men and women. The results revealed several findings that shed light on the importance of social support in shaping self-rated health outcomes. After adjusting for potential confounders, this study showed that having no friends or family to rely on in good or challenging times was associated with poorer self-rated health. Our findings also suggest that gender differences significantly affect self-rated health among middle-aged Brazilian adults. Specifically, men were less likely to report poor self-rated health than women, with a 17.6% lower likelihood when controlling for other factors.
This gender disparity in self-rated health aligns with the so-called "gender paradox": although women tend to have a higher life expectancy and lower mortality rates than men, they tend to report poorer self-rated health and experience more chronic health conditions 22. The fact that women often rate their health lower than men can be attributed to a combination of social and cultural factors. In the social dimension, one possible explanation is that women may have a higher awareness of their own health status than men and therefore may be more likely to report poor health 23. When considering the "sandwich generation" concept, in which middle-aged adults are responsible for caring for aging parents and supporting their children, the current literature suggests that women are more affected than men because societal norms place a greater caregiving burden on them 24. This situation leads to increased stress and challenges in balancing work and family responsibilities, impacting women's self-rated health and resulting in lower health ratings compared to men 25. Women may also be more willing to seek medical attention and report symptoms, leading to a higher likelihood of a diagnosis of chronic health conditions 26. Conversely, men may be more likely to deny or downplay health problems, leading to underreporting of poor health 27. Moreover, men may be less likely to seek or receive emotional support from their social networks due to cultural norms that encourage them to be self-reliant and independent 28. These interconnections between work-life balance, the sandwich generation phenomenon, and cultural norms can collectively contribute to the observed gender disparities in self-rated health among middle-aged Brazilian adults. Despite the widespread idea of a gender paradox in the literature, the debate surrounding this concept has been inconclusive.
While some studies propose that men and women differ significantly in their self-rated health evaluations due to the influence of various biological, social, and cultural factors, other studies suggest that men and women may be more similar in how they incorporate a wide range of chronic and acute health conditions, functioning, healthcare use, and health behaviors into their self-rated health evaluations 29. Our study diverges from the latter view, given that there are marked differences between the self-rated health of Brazilian men and women, even when controlling for chronic conditions, health behaviors, and socioeconomic status, as also observed in other settings 30. This discussion highlights the complexity of the relationship between gender and self-rated health. Regarding social support, the results demonstrated that a substantial proportion of middle-aged Brazilian adults receive support from family and friends. However, gender differences were visible in the patterns of social support received. The logistic regression analyses revealed that social support from both friends and family members was associated with better self-rated health among middle-aged adults. These findings align with the social support literature, indicating that strong social networks and interpersonal relationships positively impact individuals' self-rated health 31. The observed gender disparities in the association between social support and self-rated health are particularly intriguing. The results suggest that social support from three or more friends has a more significant impact on women's self-rated health than on men's. Women with three or more friends were 26.9% less likely to report poor health, whereas for men the reduction in the odds of reporting poor health was 22.2%.
This finding could be explained by gender differences in coping mechanisms and by the tendency of women to maintain closer relationships with their friends and to place more importance on the social support received from them 4. The results are consistent with previous research suggesting that social support from friends is a strong predictor of health outcomes 32. Conversely, family support seems to play a minor role in shaping women's self-rated health compared to social support from friends. For men, having three or more family members they can rely on was associated with a 24.5% lower chance of reporting poor health. This result aligns with previous research demonstrating the importance of family support in promoting men's health and well-being 27. Receiving support from three or more family members may be particularly important for Brazilian men, possibly due to cultural norms that place greater emphasis on family relationships and support 33. Although no significant association between family support and poor self-rated health was found for women, this outcome must be interpreted cautiously. The complexity of women's social networks and the influence of varying cultural norms regarding the role of family support in their lives may underlie these findings. For instance, women may rely more on external support systems beyond immediate family members 34, such as friends or community networks, which could contribute to the muted impact of family support on their self-rated health. Additionally, societal expectations of women as caregivers may lead to potential underreporting of health issues, possibly masking the true relationship between family support and self-rated health among women 35. To achieve a deeper understanding of these gender-specific patterns, further research should explore the underlying mechanisms and cultural dynamics that could be driving the observed association between family support and self-rated health among women.
This study contributes to the literature on social support and self-rated health by investigating gender differences in the association between social support and self-rated health among middle-aged Brazilian adults. However, some limitations should be considered. First, the study's cross-sectional design does not allow us to establish causality or temporal relationships between social support and self-rated health. Longitudinal studies are needed to investigate the directionality of the association between these variables. Second, the study relies on self-reported measures of social support and health, which may be subject to bias. Self-rated health may be subject to varying perceptions based on individual characteristics such as culture, age, and gender. Future research could benefit from incorporating objective health measures to validate self-rated health assessments, as proposed by Lazarević 36. Similarly, the perception of social support can also be influenced by several factors, including the quality and closeness of interpersonal relationships, the availability and accessibility of social resources, and the person's ability to seek and use available support. Future studies should also consider the influence of cultural and contextual factors on the association between social support and self-rated health. This line of investigation should expand beyond the scope of our study, addressing other Latin American countries.

--- Conclusion

In conclusion, this study provides evidence on the association between social support and self-rated health among middle-aged Brazilian adults, with a specific focus on understanding gender disparities in this relationship. Our findings demonstrate that social support from both friends and family is linked to better self-rated health in middle-aged adults.
In particular, social support from three or more friends has a more pronounced impact on women's self-rated health than on men's, whereas family support plays a more significant role in promoting men's health. Our study contributes to the ongoing discussion about the impact of social support on health and emphasizes the importance of further research to explore the underlying mechanisms shaping gender differences and other aspects of the association between social support and midlife health.

--- Resumen

Social support from family and friends is recognized as an important social determinant of health, based on its protective effects on individuals' physical and mental well-being. Although most research has focused on older adults, investigating midlife health is also essential, since these individuals are likewise susceptible to the harmful health outcomes resulting from inadequate social support from friends and family. This study contributes to the debate by investigating whether social support is associated with self-rated health among middle-aged Brazilian adults and how that relationship varies between men and women. Using data from the National Health Survey conducted in 2019, logistic regression models were used to assess differences in self-rated health, accounting for confounding factors. The sample comprised 31,926 middle-aged adults, of whom 52.5% were women. The overall prevalence of poor self-rated health was 40.7%, with a significant difference between men and women. The results of this study suggest that having no friends or family members to rely on, in good or bad times, was associated with poorer self-rated health.
However, the strength of this association differs by gender: social support from friends matters more for women's self-rated health than for men's. On the other hand, family support was associated with male self-rated health, particularly for men who had three or more family members they could rely on. Future studies should take cultural and contextual factors into account to better understand other dimensions of social support and their association with midlife health. --- Gender Differences; Social Support; Middle-Aged Persons --- Additional information ORCID: Hisrael Passarelli-Araujo (0000-0003-3534-8392).
Social support from family and friends is recognized as an important social determinant of health, given its protective effects on individuals' physical and mental well-being. While most studies have focused on older adults, investigating midlife health is equally crucial since middle-aged individuals are also susceptible to the harmful health outcomes of inadequate social support from friends and family. This study contributes to the debate by examining whether social support is associated with self-rated health among middle-aged Brazilian adults and how this relationship varies between men and women. Using data from the nationwide Brazilian National Health Survey conducted in 2019, logistic regression models were employed to assess differences in self-rated health, accounting for confounding factors. The sample comprised 31,926 middle-aged adults, of which 52.5% were women. The overall prevalence of poor self-rated health was 40.7%, with a significant difference between men and women. Results from this study suggest that having no friends or family members to rely on, both during good and challenging times, was associated with poorer self-rated health. However, the strength of this association differs by gender, with social support from friends playing a more critical role in women's self-rated health. On the other hand, family support was associated with male self-rated health, particularly for men with three or more family members they can rely on. Future studies should consider cultural and contextual factors to better understand other dimensions of social support and its association with midlife health.
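The abstract above relies on logistic regression with confounder adjustment to compare self-rated health across groups. As a rough illustration of how such a model estimates an association of this kind, here is a minimal sketch on simulated data; the variable names, effect sizes, and the Newton-Raphson fitting routine are illustrative assumptions, not the authors' actual analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Hypothetical binary predictor: 1 = has friends to rely on, 0 = none.
support = rng.integers(0, 2, n)
X = np.column_stack([np.ones(n), support])  # intercept + predictor

# Assumed true model: support lowers the log-odds of poor self-rated health.
true_beta = np.array([0.5, -0.8])
p = 1.0 / (1.0 + np.exp(-X @ true_beta))
y = rng.binomial(1, p)  # 1 = poor self-rated health

# Fit the logistic model by Newton-Raphson (maximum likelihood).
beta = np.zeros(2)
for _ in range(25):
    mu = 1.0 / (1.0 + np.exp(-X @ beta))
    grad = X.T @ (y - mu)
    hess = X.T @ (X * (mu * (1.0 - mu))[:, None])
    beta += np.linalg.solve(hess, grad)

# An odds ratio below 1 would mean the support variable is protective.
odds_ratio = float(np.exp(beta[1]))
print(beta, odds_ratio)
```

In a real analysis the design matrix would also carry the confounders (age, income, education, and so on) as extra columns, and interaction terms with gender would capture the men-versus-women differences the paper reports.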
Introduction The concept of altmetrics, created by Priem et al. (2010), was proposed as an alternative to more traditional citation-based metrics. Altmetrics is a new approach to measuring scholarly impact on the basis of activities on social media platforms (Haustein et al., 2014; Priem, 2014; Thelwall et al., 2013). Compared with traditional research evaluation based on the number of publications and citations, it targets various types of scientific outputs using a wide variety of data sources and indicators (Kwok, 2013). Although altmetrics is regarded as a democratizer of science and its reward system, as it potentially overcomes the Matthew Effect reflected in traditional citation-based metrics (Haustein et al., 2015), previous studies indicate that existing altmetric indicators are biased against non-English-speaking countries and regions such as China, Japan, Russia, Iran, and Latin America (Alperin, 2014, 2015b; Maleki, 2014; Park & Park, 2018; Wang et al., 2016) due to their low visibility in English-language social media (e.g., Twitter, Facebook, Mendeley, etc.). Scholars from non-English-speaking countries, who have different scientific communication behavior on social media, may use their local social media platforms (Alperin, 2013, 2015a; Ortega, 2020; Sugimoto et al., 2017; Yu et al., 2017; Zahedi, 2017), which are not fully covered by current altmetric studies focusing on international social media platforms in English (Zahedi, 2016). China is the largest social media market with the most social media users (Zahedi, 2016), who use local Chinese social media (e.g., WeChat, Weibo, etc.) instead of international ones. Due to limited data availability and the language barrier, few altmetric studies pay attention to local altmetrics in China or analyze the academic use of social media among Chinese scholars (Yu et al., 2017).
Thus, an in-depth understanding of the characteristics of the social media commonly used in China is necessary before conducting altmetric studies on China, which has its own unique social media culture and administrative regulations. The purpose of this paper is to explore Chinese local social media platforms, to examine the academic use of Chinese social media and the related altmetric indicators, and to review the local altmetric literature. --- Local social media platforms for academic use in China According to iMedia Research (2020), there are around 800 million social media users in China; WeChat, QQ, and Sina Weibo are the top three social media platforms in terms of the number of users, accounting for 73.7%, 43.3%, and 17.0% of China's population, respectively. These Big Three are also among the top 10 global social media platforms, with 1,151, 731, and 497 million global users, respectively (We Are Social & Hootsuite, 2020). In addition to these well-known three, there are various local social media tools or platforms used by Chinese scholars (shown in Figure 1), which we have divided into seven categories as shown in Table 1. --- Document-sharing Document-sharing social media are online platforms offering storage for users to share their documents. Users are encouraged to upload their course materials, research articles, business proposals, industry standards, notes, and other professional documents; they earn income when these documents are downloaded by other users. Normally, readers can freely access the abstract or some free pages but need to pay to download the full document. The platforms profit from commissions, price differences, and VIP memberships that allow readers to download documents at a discounted price.
Although users are required not to upload documents of which they are not authors or copyright holders, copyright infringement claims still arise on these document-sharing platforms (Guo, 2011). These document-sharing social media platforms vary in their coverage, copyright policy, document formats, and marketing strategy. Yi (2019) reports that Doc88 is the largest in terms of coverage, while Baidu has the largest group of users. According to Baidu (2021), there are over 50 million users and around 800 million documents contributed by more than 180 thousand authors in Baidu Library. Although billions of documents are shared on these document-sharing platforms, few are downloaded or acknowledged by users. The impact of such documents can be assessed by their online usage, including the number of clicks, views, and downloads. The document-sharing platforms also allow users to leave comments, ratings, and recommendations for these documents, which can also be used to evaluate their quality or impact. --- Blog A blog platform hosts discussion or informational diaries that are published online and managed by individuals. Bloggers can create personal blogs on a blog site and post articles from time to time. Some Chinese scholars like to share their research and opinions through academic blogs such as ScienceNet and the CSDN blog, and through general blogs like Sina Blog. ScienceNet is an academic blog with more than one million users, most of whom are scientists or researchers. They use ScienceNet blogs to discuss scientific research with their peers and to establish friendships with other researchers. In addition to individual bloggers, some research institutions also set up official blogs at ScienceNet to disseminate knowledge and research. The CSDN blog is dedicated to creating a communication platform for IT developers, providing technical people with comprehensive information, knowledge exchange, and interaction.
Most CSDN bloggers are technical people working in computer science or information science. Although Sina Blog is a general blog covering various topics, some Chinese scholars also use it to promote their research and carry out academic exchanges. Although every blog is free to read and share, blogs are ranked by the number of readers, recommendations, and comments. Only high-ranking blogs are displayed on the homepage, while the rest must be found by browsing or searching. Thus, these indicators (i.e., the number of readers, recommendations, and comments) can be used to evaluate the impact of blogs. --- News News-type social media carry a large volume of timely scientific news and stories. In addition to sharing scientific news and updates on research and development, Chinese scholars also disseminate their recent research via local news-type social media including Guokr.com, Tencent News, Bioon.com, and China Social Science Net. Guokr.com is an open and diverse community in science and technology consisting of three sections: scientists, interest groups, and Q&A. These three sections allow users to follow people or groups they are interested in, read recommended articles, and share their own articles. Tencent News is a mobile application for iOS and Android. It features a combination of news, videos, and microblogs, providing mobile users with news and updates as soon as they break. There are 15 news channels in Tencent News, one of which is science and technology. Bioon.com is a news platform for the biotech industry, providing news, consulting services, and industry analysis. Most Bioon members have master's or doctoral degrees in medicine or biology. China Social Science Network is a national social science academic research network sponsored by the Chinese Academy of Social Sciences. It has 54 channels and more than 1,300 columns for social scientists from different disciplines.
--- Community In China, social media is also used as an interactive community platform for scientific communication, with members mainly from universities, research institutes, and enterprise R&D units. Users use community forums to exchange academic resources, share research stories, and help each other. Popular social media communities in China include DXY, Douban, and Xiaomuchong. DXY is a leading online healthcare community in China, connecting health practitioners, health researchers, patients, pharmaceutical companies, and insurance companies. It has served over one hundred million public users and six million professional users. Indeed, 71% of health practitioners in China are DXY users. Xiaomuchong is an academic platform sharing academic resources for scientific researchers. It covers academic content such as fund applications, patent standards, studying abroad, graduate admission, paper submission, and academic assistance. Most members come from universities, research institutions, and enterprise R&D units. Douban is a reading community for educated youth. In addition to its collection of books, movies, music, and other products, Douban offers a review and recommendation platform where users can express their comments and recommendations on all content. --- Q & A Compared with social media communities, Q&A social media focus only on the interaction between questions and answers. Online Q&A platforms connect users with different backgrounds. Scholars with special expertise in their disciplines naturally become active users on Q&A platforms. In addition to answering questions as requested, some users also share their knowledge, experience, and insights with others. The most popular Q&A platforms in China are Baidu Zhidao and Zhihu. Baidu Zhidao, developed by Baidu, is a leading search-based interactive knowledge question-and-answer sharing platform.
Anyone can provide an answer to a given question, and answers are ranked and returned as search results. Compared with search-based Q&A platforms, Zhihu focuses on providing comprehensive answers from a group of experts in different disciplines. Similar to Quora, Zhihu lets users actively participate in the Q&A process by editing questions and commenting on answers submitted by other users, which has helped Zhihu surpass its competitors and become the largest Q&A platform in China. According to Yiguan (2020), Zhihu had over 220 million users, of whom more than 40% were 24 years old or younger. --- General Scholars also put general social media platforms such as WeChat and Sina Weibo to academic use. Scholars can create an official WeChat account to disseminate their research and promote related business; they can also use Sina Weibo to post messages like tweets. Many scholars like to disseminate their research via general social media platforms to gain higher impact, considering the large number of users on WeChat and Sina Weibo. The number of active WeChat users has exceeded 1.2 billion across more than 200 countries and regions as of the first quarter of 2020 (The China Academy of Information and Communications Technology, 2020). Although WeChat is primarily a social networking tool with an instant-messaging function, people can share information, including academic content, via two channels: personal friend groups and public accounts. Sina Weibo resembles a Chinese version of Twitter; users post texts of up to 140 Chinese characters with photos, music, or videos. Similar to Twitter, Sina Weibo is also used to disseminate knowledge and promote research. --- Local altmetric indicators in China With the academic use of social media, altmetric indicators have been developed by different social media platforms to measure the social impact of papers, books, journals, and individual scholars.
--- Paper The local altmetric indicators at the paper level in China generally come from three categories of social media platforms: document-sharing (Baidu Library, Doc88.com, Docin.com, and Taodocs.com), blogs (Sina blog, CSDN blog, and ScienceNet blog), and general platforms (Sina Weibo and WeChat). These social media platforms provide various altmetric indicators to measure the social impact of research papers, as shown in Table 2, which are retrieved and summarized in this study. The social impact of a paper is measured on the basis of its readership and quantified by the number of readings, comments, "likes," and other indicators. Most indicators are objective, counting the number of actions by readers, while some (e.g., "likes," "dislikes," star ratings) are subjective, representing readers' personal opinions. --- Books Altmetric indicators for books have been developed on some reading community platforms to recommend books. These indicators can be grouped into two categories: library collection indicators and network utilization indicators (Jiang & Wei, 2018; Li et al., 2019). The library collection indicators measure the quantity of reading, including the number of readings, number of collecting libraries, number of downloads, number of recommendations, number of collections, and number of comments. The network utilization indicators measure the quality of reading, including the numbers of book reviews, academic community discussions, news reports, reader reviews, and mentions. Among the various reading community platforms, Douban is the most famous for book recommendation (Jiang & Wei, 2018).
To allow readers to rate and recommend books, the following 14 indicators are included in Douban, and they have been copied by other reading community platforms: • Douban score (the overall rating of the book) --- Journals Although the journal impact factor (JIF) is the most popular indicator for assessing the academic impact of journals, altmetric indicators have also been constructed to evaluate the social impact of journals on some community platforms. For example, Xiaomuchong, one of the popular platforms for scientific communication (Li et al., 2017), includes a Chinese periodical evaluation section. The evaluation criteria include the number of forum replies, number of posts viewed, number of posts reviewed, number of "helpful" labels from users, review speed, publishing speed, review cost, publishing cost, and editorial communication. Wang (2019) selected 420 journals reviewed on Xiaomuchong and constructed a journal impact evaluation model measuring both the academic impact and the social impact of journals. The academic impact is based on traditional citation impact, while the social impact is measured along four dimensions (Li et al., 2017): • Social attention: number of journal views, number of comments, number of webpage views • Comprehensive editorial communication: editorial communication, review quality • Time cost: publication speed, review speed • Economic cost: publication fee, acceptance rate, review fee In addition, Liu and Liu (2018) constructed a framework for evaluating the impact of Chinese academic journals. Their framework consists of citation indicators, online usage indicators, and social media impact indicators. In addition to traditional citation indicators, the online usage indicators include total online usage, journal usage factor, usage annual index, and usage half-life, while the social media impact indicators include the total number of blog posts and the average number of blog posts.
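The multi-dimensional journal evaluation models described above boil down to normalizing heterogeneous indicator counts and combining them with weights. The sketch below illustrates that weighted-composite idea; the indicator names, journal names, weights, and min-max normalization are hypothetical choices for illustration, not the actual models of Wang (2019) or Liu and Liu (2018):

```python
# Hypothetical per-journal indicator counts (names and values are illustrative).
journals = {
    "Journal A": {"views": 5200, "comments": 310, "helpful": 120},
    "Journal B": {"views": 1400, "comments": 90,  "helpful": 35},
    "Journal C": {"views": 3000, "comments": 400, "helpful": 60},
}
weights = {"views": 0.4, "comments": 0.4, "helpful": 0.2}  # assumed weights

def minmax(values):
    # Rescale a list of counts to [0, 1] so indicators are comparable.
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

# Normalize each indicator across journals, then take the weighted sum.
names = list(journals)
scores = {name: 0.0 for name in names}
for ind, w in weights.items():
    norm = minmax([journals[n][ind] for n in names])
    for name, v in zip(names, norm):
        scores[name] += w * v

ranking = sorted(names, key=scores.get, reverse=True)
print(ranking, scores)
```

The same pattern applies to the book and paper indicators discussed earlier: the hard part in practice is choosing and justifying the weights, not computing the score.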
--- Scholars ScienceNet is a comprehensive website promoting science with the goal of building an influential Chinese scientific community. In addition to news reports, it also provides blogs for scholarly communication. Zhao (2015) established indicators to evaluate the scholarly impact of ScienceNet blogs, including blog status (time created, number of activities, number of points, and number of readers), post status (total posting volume, average annual posting volume, and number of featured papers), and evaluation status (total readings, average readings per article, number of evaluations, and average number of evaluations). Another study on the ScienceNet blog contributed three further evaluation indicators: blogger enthusiasm, communication coverage, and blog post quality (Cao, 2017). Blogger enthusiasm includes the number of blog posts, the number of activities, the number of points and gold coins, and the number of topics; communication coverage includes the number of friends, the number of visits, and the number of visits per article; and blog post quality is measured by the number of blog posts recommended by the website. In addition to the general platforms measuring the social impact of individual scholars, altmetric indicators assessing health physicians are provided by some health websites, including Good Doctor (https://www.haodf.com/), Sohu Health (https://health.sohu.com/), Zhihu (https://www.zhihu.com), Yimaitong (http://www.medlive.cn/), and Xunyiwenyao (http://www.xywy.com/). Such altmetric indicators are developed to measure the scholarly impact, the social media impact, and the online diagnosis impact of a health physician. --- Local altmetric studies in China The concept of altmetrics has attracted the attention of Chinese scholars since it was coined by Priem et al. (2010).
We conducted a keyword search using "altmetrics" (in keywords or abstracts) and "China" (in author addresses) to retrieve the literature from the Web of Science Core Collection; in addition, we searched the keyword "altmetric*" and other variants in the Chinese CNKI database and local Chinese literature. After manual validation, 52 English and 327 Chinese papers on altmetric studies were identified, as shown in Figure 2. The number of altmetric studies in China increased from 2012 until 2019, when it declined. Since altmetrics is a newly imported concept, it has been translated into different Chinese names (Yu et al., 2019). The first paper introducing altmetrics in China was published by Liu (2012), who rendered altmetrics as "Xuan zhe ji liang xue". Altmetrics was also rendered as "Bu chong ji liang xue" (You & Tang, 2013) and "Ti dai ji liang xue" (Qiu & Yu, 2013). Although the last translation (Ti dai ji liang xue) has been accepted by most Chinese scholars, some still prefer to use the English term "altmetrics" rather than any Chinese translation in their publications. Altmetric research in China has mainly focused on theoretical discussion and literature reviews regarding the origin, development, and research tools of altmetrics (Liu, 2012, 2016; Qiu & Yu, 2015; Yu et al., 2019). Some studies discussed the needs of altmetrics stakeholders (Shen et al., 2018), the content of altmetric data (Meng & Xiang, 2016), user motivation (Liu & Wang, 2020), context analysis (Wang & Liu, 2017), and data quality (Liu et al., 2019; Yu & Cao, 2019). Chinese scholars have also conducted empirical research investigating comprehensive evaluation models integrating altmetrics and citation indicators (H.
Li et al., 2020; Li & Ren, 2020; Peng et al., 2018; Zhai et al., 2020), the factors associated with altmetric indicators (Li & Hao, 2019), and the design and development of altmetric indicators for Sina Weibo (Yu et al., 2017), news reports (Yu, Cao, & Wang, 2020), policy documents (Yu, Cao, Xiao, & Yang, 2020), the social impact of individuals (Guo & Xiao, 2019), books (Jiang et al., 2020; Wang et al., 2019; Xiao & Yang, 2020), journals (Zhao & Wang, 2019), datasets (L. Li et al., 2020), and papers (Zhao et al., 2019). Due to the limited availability of Chinese social media data, many empirical studies conducted by Chinese researchers used international data such as Altmetric.com, PlumX, PLOS ALMs, or data directly from Twitter and Mendeley (Fang & Wang, 2019; Jin et al., 2015; Shu et al., 2017; Tian et al., 2019; Zhao & Yu, 2020); few studies used local Chinese altmetric data (Yu et al., 2016; Yu et al., 2017; Zhao & Wei, 2017). Indeed, the PlumX or Altmetric.com plug-in has been embedded in the institutional repositories of some Chinese universities. In addition to the impact of the academic use of social media, some Chinese scholars have also explored and compared the characteristics of local social media platforms. Xiong (2020) found that Xiaomuchong users were highly active, while ScienceNet blogs were the most influential. Fan (2016) compared Zhihu, Douban, and Guokr.com using Alexa rankings of third-party statistical data, and ranked them from high to low in terms of access traffic. Yan (2016) found that the DXY community is highly professional; questions and requests were answered and replied to quickly. In summary, although Chinese scholars have published many papers introducing altmetrics and reviewing its literature, few studies have tried to measure the dissemination of research via local Chinese social media.
Although Chinese scholars have conducted many altmetric studies investigating the influence of scholarly activities on various social media tools, local Chinese social media such as WeChat and Weibo have rarely been studied. --- Discussion and Conclusion Although social media has frequently been used to promote research, and some altmetric indicators have been developed in China, China's altmetric studies still face challenges due to the limited availability of data sources. Most local Chinese social media platforms either do not provide APIs or limit their use. Researchers therefore have to use web crawlers or other custom programs to obtain the data, which hinders the development of China's altmetric studies. In addition, scholarly identifiers such as the DOI, PubMed ID, and ISBN are used in altmetric studies to link publications with their social media activity. However, such document identifiers are not assigned to some local Chinese publications, so the relationship between academic activity on social media and the mentioned publications cannot be established, which is one of the main obstacles to acquiring Chinese altmetric data. As a result, few altmetric studies investigate the academic use of local social media or test local altmetric indicators. As this study shows, although various altmetric indicators have been developed and applied, their validity and reliability have never been systematically validated, which needs to be explored in future research. As the largest source country of international scientific literature, with the largest population of social media users, China makes local altmetrics an inevitably popular research topic, within or outside the scope of bibliometrics. The local social media platforms, altmetric indicators, and local altmetric studies reviewed in this paper can build a foundation for future studies focusing on local altmetrics in China.
Altmetric indicators are strongly affected by country and region, especially for non-English-speaking countries such as China, Japan, and Russia. Although China is the largest country in terms of the number of social media users, we still know little about the academic use of local social media tools and local altmetric indicators in China. The purpose of this paper is to present the landscape of local altmetrics in China, including the local social media platforms for academic use, local altmetric data sources and indicators, and the local altmetric studies conducted by Chinese scholars.
Introduction Research in network science has blossomed over the last decade, with profound implications for very different fields, from finance to social and biological networks [1]. Given the enormous scale of the data, most studies focus on a small group of influential nodes rather than the whole network. In social networks, for instance, influential nodes are those that have the greatest spreading ability or that play a predominant role in the network's evolution. Notably, a popular star in online social media may remarkably accelerate the spreading of rumors, and a few super-spreaders [2] can largely expand the epidemic prevalence of a disease (e.g., COVID-19) [3]. Research on influencer identification is beneficial for understanding and controlling spreading dynamics in social networks, with diverse applications such as epidemiology, collective dynamics, and viral marketing [4,5]. Nowadays, individuals interact with each other in more complicated patterns than ever, and the variety of interactions makes it challenging to identify influencers in social networks. The graph model is widely used to represent social networks; however, it is incapable of dealing with multiple types of social links. For example, people use Facebook or WeChat to keep in touch with family members or friends, use Twitter to post news, use LinkedIn to search for jobs, and use TikTok to create and share short videos [6]. It is easy to represent each social scenario via a separate graph model, even though these scenarios involve the same group of individuals, but neglecting the multiple relationships between social actors may lead to an incorrect identification of the most versatile users [7]. With the proposal of multilayer networks [8,9], we are able to encode these various interactions, which is of great importance for identifying influencers in multiple social networks.
In this paper, we design a novel node centrality measure for monolayer networks and then apply it to multilayer networks to identify influencers in multiple social networks. The method is based solely on local knowledge of the network's topology in order to be fast and scalable to huge networks, and is thus suitable for both real-time applications and offline mining. The rest of this paper is organized as follows. Section 2 introduces related work on influencer identification in monolayer and multilayer networks. Section 3 presents the mathematical model and the method for detecting influencers. Section 4 exhibits the experiments and analysis, including comparison experiments on twenty-one real-world datasets, which verify the feasibility and accuracy of the proposed method. Section 5 summarizes the paper and provides concluding remarks. --- Related Works The initial research on influencer identification dates back to the study of node centrality, which measures how "central" a focal node is [10]. A plethora of methods for influencer identification have been proposed over the past 40 years, which can be mainly classified into centrality measures, link topological ranking measures, entropy measures, and node embedding measures [11,12]. Some of these measures take only local information into account, while others even employ machine learning methods. It has become one of the most popular research topics and has yielded a variety of applications [7], such as identifying essential proteins and potential drug targets for the survival of the cell [13], controlling the outbreak of epidemics [14], preventing catastrophic outages in power grids [15], driving a network toward a desired state [16], improving transport capacity [17], and promoting cooperation in evolutionary games [18].
This paper investigates the problem of identifying influencers in social networks by introducing a family of centrality-like measures, briefly compared in Table 1. Degree Centrality (DC) [19] is the simplest centrality measure; it merely counts how many social connections (i.e., neighbors) a focal node has:

$$DC(i) = \sum_{j=1}^{N} a_{ij}, \qquad (1)$$

where $N$ is the total number of nodes and $a_{ij}$ is the weight of edge $(i, j)$ if $i$ is connected to $j$, and 0 otherwise. Degree centrality is simple and considers only the local structure around a focal node [20]. However, it can be misleading because it neglects global information: a node might be in a central position to reach others quickly even though it does not have a large number of neighbors [21]. Thus, Betweenness Centrality (BC) [22] was proposed to assess the degree to which a node lies on the shortest paths between pairs of other nodes:

$$BC(i) = \sum_{s \neq i,\; s \neq t,\; i \neq t} \frac{g_{st}(i)}{g_{st}}, \qquad (2)$$

where $g_{st}$ is the total number of shortest paths between $s$ and $t$, and $g_{st}(i)$ is the number of those paths that pass through node $i$. Betweenness centrality considers global information and can be applied to networks with disconnected components. However, a great proportion of nodes do not lie on the shortest path between any two other nodes, so they all receive the same score of 0; its high computational complexity also limits its application to large-scale networks. Analogously, Closeness Centrality (CC) [23] represents the inverse sum of the shortest distances from a focal node to all other nodes:

$$CC(i) = \frac{N - 1}{\sum_{j \neq i} d_{ij}}, \qquad (3)$$

where $N$ is the total number of nodes and $d_{ij}$ is the shortest path length from node $i$ to node $j$.
Closeness centrality captures how core a focal node's position is via global shortest path lengths, but it is not applicable to networks with disconnected components: if two nodes belonging to different components have no finite distance between them, the measure is undefined. It is also criticized for its high computational complexity. Eigenvector Centrality (EC) [24] is a positive multiple of the sum of adjacent centralities. Relative scores are assigned to all nodes in the network on the assumption that connections to high-scoring nodes contribute more to a node's score than connections to low-scoring nodes:

$$EC(i) = k_1^{-1} \sum_{j} A_{ij} x_j, \qquad (4)$$

where $k_1$ is the leading eigenvalue of the adjacency matrix $A$ and $x$ is the corresponding eigenvector, i.e., the stable state satisfying $x = k_1^{-1} A x$. This measure considers the number of neighbors and the centrality of those neighbors simultaneously; however, it is incapable of dealing with non-cyclical graphs. In 1998, Brin and Page developed the PageRank algorithm [25], which is the fundamental search engine mechanism of Google. PageRank (PR) is likewise a positive multiple of the sum of adjacent centralities:

$$PR_k(i) = \sum_{j=1}^{N} a_{ji} \frac{PR_{k-1}(j)}{k_j^{out}}, \qquad i = 1, 2, \ldots, N, \qquad (5)$$

where $N$ is the total number of nodes, the initial scores satisfy $\sum_{i=1}^{N} PR_0(i) = 1$, and $k_j^{out}$ is the number of edges pointing out of node $j$. This method is efficient but is criticized for non-convergence in cyclical structures. As is well known, the clustering coefficient [26,27] measures the degree to which nodes in a graph tend to cluster together:

$$C_i = \frac{\sum_{j \neq i,\; k \neq j,\; k \neq i} a_{ij} a_{ik} a_{jk}}{\sum_{j \neq i,\; k \neq j,\; k \neq i} a_{ij} a_{ik}}. \qquad (6)$$

It is widely considered that a node with a higher clustering coefficient may benefit community formation and enhance local information spreading. However, Chen et al.
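To make the contrast between local and global measures concrete, here is a small sketch computing degree and closeness centrality directly from their definitions on a toy hub-and-spoke graph (the graph itself is made up for illustration; betweenness, eigenvector centrality, and PageRank follow the same pattern but need more machinery):

```python
from collections import deque

# Toy undirected, unweighted graph: node 0 is a hub.
adj = {0: {1, 2, 3, 4}, 1: {0, 2}, 2: {0, 1}, 3: {0}, 4: {0, 5}, 5: {4}}
N = len(adj)

def degree_centrality(i):
    # DC(i) = sum_j a_ij: for an unweighted graph, just the neighbor count.
    return len(adj[i])

def closeness_centrality(i):
    # CC(i) = (N - 1) / sum_j d_ij, with d_ij obtained by BFS from node i.
    dist = {i: 0}
    q = deque([i])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return (N - 1) / sum(d for node, d in dist.items() if node != i)

dc = {i: degree_centrality(i) for i in adj}
cc = {i: closeness_centrality(i) for i in adj}
print(dc, cc)
```

On this graph both measures agree that the hub is most central, but they disagree in general: closeness needs a full BFS per node (global information), while degree reads off the adjacency list in constant time (local information), which is exactly the trade-off discussed above.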
expressed the contrary view that local clustering has a negative impact on information spreading. They proposed the ClusterRank algorithm for ranking nodes in large-scale directed networks and verified its superiority over PageRank and LeaderRank [28]. Therefore, the effect of the clustering coefficient on information spreading is uncertain: it may benefit local information spreading but inhibit global spreading, especially in directed networks. In 2016, Ma et al. proposed a gravity centrality (GR) [29] that considers the interactions coming from the neighbors within three steps:

G(i) = \sum_{j \in \psi_i} \frac{ks(i) \, ks(j)}{d_{ij}^2}, (7)

G^+(i) = \sum_{j \in \Gamma(i)} G(j), (8)

where ks(i) and ks(j) are the k-shell indices of i and j, respectively, \psi_i is the set of nodes whose distance to node i is less than or equal to 3, \Gamma(i) is the neighbor set of i, and d_{ij} is the shortest path length between i and j. These methods consider semi-local knowledge of a focal node, i.e., the neighboring nodes within three steps, and have been successful on many real-world datasets, such as the Jazz [30], NS [31] and USAir [32] networks. However, they still carry high computational complexity because the k-shell index is computed globally. In 2019, Li et al. improved gravity centrality and proposed Local-Gravity centrality (LGR) [33], which replaces the k-shell computation and merely considers the neighbors within R steps:

LG_R(i) = \sum_{d_{ij} \le R, j \neq i} \frac{k_i k_j}{d_{ij}^2}, (9)

where k_i and k_j are the degrees of i and j, respectively, and d_{ij} is the shortest path length between i and j. This method has been extremely successful on a variety of real-world datasets; however, the parameter R requires computing the network diameter, which is also time-consuming. The above-mentioned centrality measures have been used to rank nodes' spreading abilities in monolayer networks. Ranking nodes in multilayer networks is a more challenging task and remains an open issue.
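Eq. (9) is straightforward to prototype with a breadth-first search truncated at depth R; a sketch (here R is treated as a free parameter rather than being derived from the network diameter):

```python
import networkx as nx

def local_gravity(G, R=2):
    """Local-Gravity centrality (Eq. 9): sum of k_i * k_j / d_ij^2 over all
    nodes j within R hops of i, found by a BFS truncated at depth R."""
    deg = dict(G.degree())
    scores = {}
    for i in G:
        dists = nx.single_source_shortest_path_length(G, i, cutoff=R)
        scores[i] = sum(deg[i] * deg[j] / d ** 2
                        for j, d in dists.items() if j != i)
    return scores

lg = local_gravity(nx.krackhardt_kite_graph(), R=2)
```

On the kite network this again singles out the hub node 3, since both its own degree and its neighbors' degrees enter the product.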
The information propagation process over multiple social networks is more complicated, and conventional models cannot capture it without modification. Zhuang and Yağan [36] proposed a clustered multilayer network model, in which all constituent layers are random networks with high clustering, to simulate the information propagation process in multiple social networks. Likewise, Basaras et al. [37] proposed an improved susceptible-infected-recovered (SIR) model with separate propagation probabilities (i.e., \lambda_{ii} for intralayer connections and \lambda_{ij} for interlayer connections). Most recent endeavors have concentrated on multiplex networks (e.g., the clustering coefficient in multiplex networks [38]), where all layers share an identical set of nodes but may have multiple types of interactions. Rahmede et al. proposed the MultiRank algorithm [39] for the weighted ranking of nodes and layers in large multiplex networks. The basic idea is to assign more centrality to nodes that are linked to central nodes in highly influential layers; a layer is more influential if highly central nodes are active in it. Wang et al. proposed a tensor decomposition method (EDCPTD centrality) [7], which represents a multilayer network as a fourth-order tensor and identifies essential nodes via CANDECOMP/PARAFAC (CP) tensor decomposition. They also showed its superiority to traditional solutions by comparing its performance against the aggregated monolayer networks. In short, identifying influencers in multiplex networks is of great significance. Our purpose in this work is to devise a measure that can accurately detect influential nodes in a general multilayer network.

--- Modeling and Methods

--- Network Modeling

The problem of finding influential nodes is that of extracting a small set of nodes that exerts the greatest influence on the network dynamics.
Given a network model G = (V, E), where V = {v_1, v_2, ..., v_n} is the node set and E = {(v_i, v_j)} (v_i, v_j \in V) is the edge set, identifying influential nodes means picking a minimal set of nodes as initial seeds that achieves the maximum influenced scope:

A^* = \arg\max_{A \subseteq V} \sigma(A), (10)

where A is the set of initially infected nodes and \sigma(A) denotes the final influenced node set. This problem is commonly simplified to top-k influencer identification by additionally setting |A| = k, which has recently attracted great research interest [40-42]. Many real-world social networks are, in fact, interconnected by different types of interactions between nodes, forming what are known as multilayer networks. In this paper, we employ a multilayer network model [9], which can represent nodes sharing links in different layers. The multilayer network model is defined as

M = (G, C), (11)

where G = {G_\alpha ; \alpha \in {1, ..., L}} is a family of (directed or undirected, weighted or unweighted) graphs G_\alpha = (V_\alpha, E_\alpha) representing the layers of M, and C depicts the interactions between nodes of any two different layers:

C = {E_{\alpha\beta} \subseteq V_\alpha \times V_\beta ; \alpha, \beta \in {1, ..., L}, \alpha \neq \beta}. (12)

The corresponding supra-adjacency matrix can be written in block form as

M = \begin{pmatrix} A_1 & I_{12} & \cdots & I_{1L} \\ I_{21} & A_2 & \cdots & I_{2L} \\ \vdots & \vdots & \ddots & \vdots \\ I_{L1} & I_{L2} & \cdots & A_L \end{pmatrix} \in R^{N \times N}, (13)

where A_1, A_2, ..., A_L are the adjacency matrices of layers 1, 2, ..., L, respectively, and N is the total number of nodes, N = \sum_{1 \le l \le L} |V_l|. The non-diagonal block I_{\alpha\beta} represents the interlayer edges between layer \alpha and layer \beta. Thus, the interlayer edges can be collected as I = \sum_{\alpha, \beta = 1, \alpha \neq \beta}^{L} I_{\alpha\beta}. (14)
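The block structure of Eq. (13) can be assembled directly; a toy two-layer example with identity interlayer coupling (each node linked to its own replica in the other layer, a multiplex assumption made purely for illustration):

```python
import numpy as np

A1 = np.array([[0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])        # layer 1: a triangle on three nodes
A2 = np.array([[0, 1, 0],
               [1, 0, 1],
               [0, 1, 0]])        # layer 2: a 3-node path
I12 = np.eye(3, dtype=int)        # interlayer block: node i <-> its replica

# Supra-adjacency matrix of Eq. (13) for L = 2 (undirected, so I21 = I12^T)
M = np.block([[A1, I12],
              [I12.T, A2]])
```

The diagonal blocks carry the intralayer edges and the off-diagonal blocks the interlayer ones, exactly mirroring the decomposition in Eq. (14).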
Take the 9/11 terrorist network [43] for instance: its edges are classified into three categories (i.e., layers) according to the observed interactions, as plotted in Figure 1.

--- Methods

We employ the susceptible-infected-recovered (SIR) spreading model [44] as the influence analysis model. It has three possible states:

• Susceptible (S) state, where a node is vulnerable to infection.
• Infectious (I) state, where a node tries to infect its susceptible neighbors.
• Recovered (R) state, where a node has recovered (or been isolated) and can no longer infect others.

In a network, two connected nodes are considered to be in "contact". If one node is infected and the other is susceptible, the latter may become infected through contact with a certain probability [45]. A node is considered recovered if it is isolated or immune to the disease. In detail, to check the spreading influence of a given node, we set that node as infected and all other nodes as susceptible. At each time step, each infected node infects its susceptible neighbors with infection probability \beta, and then recovers with probability \gamma; the corresponding differential equations are shown in Figure 2. For simplicity, we set \gamma = 1. The process of the SIR model on the famous Krackhardt's Kite network [46] is plotted in Figures 3 and 4: in panel (a), all nodes are in the susceptible state; when one node is selected to be infected, its neighbors are soon infected as well, as shown in panel (b); finally, the network reaches a stable state, i.e., the number of recovered nodes reaches a maximum, as shown in panel (c).
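A minimal discrete-time implementation of this spreading process (with \gamma = 1, as in the paper, so every infected node recovers after exactly one step):

```python
import random
import networkx as nx

def sir_spread(G, seed, beta, gamma=1.0, rng=None):
    """Run one SIR realization from a single seed node; return the final
    number of recovered nodes, i.e., the seed's spreading influence."""
    rng = rng or random.Random(0)
    infected, recovered = {seed}, set()
    while infected:
        # Each infected node independently tries to infect each susceptible neighbor
        newly = {j for i in infected for j in G.neighbors(i)
                 if j not in infected and j not in recovered
                 and rng.random() < beta}
        just_recovered = {i for i in infected if rng.random() < gamma}
        recovered |= just_recovered
        infected = (infected | newly) - recovered
    return len(recovered)

# With beta = 1 on a connected graph, every node ends up recovered.
n_recovered = sir_spread(nx.path_graph(5), seed=2, beta=1.0)
```

Averaging this quantity over many realizations gives the empirical influence score used as the ground truth in the experiments below.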
In this paper, we define the node influence (INF, for short) of a node as the energy derived from its neighbors:

INF_R(i) = \sum_{j \in \Gamma(i), \, d_{ij} \le R} \frac{w_{ij} k_j}{d_{ij}^2}, (15)

where R is the truncation radius, \Gamma(i) is the set of neighbors of node i, d_{ij} is the shortest path length between nodes i and j, k_j is the degree of node j, and w_{ij} is the weight of edge e_{ij} (for unweighted networks, w_{ij} = 1). Analogously, we extend the proposed INF measure to multilayer networks (denoted INF^M_R) with the following modification:

INF^M_R(i) = \sum_{\alpha \le L} \sum_{j \in \Gamma^\alpha(i), \, d_{ij} \le R} \frac{w_{ij} k_j^\alpha}{d_{ij}^2}, (16)

where R is the truncation radius, \Gamma^\alpha(i) is the set of neighbors of node i at layer \alpha, k_j^\alpha is the degree of node j at layer \alpha, and d_{ij} is the shortest path length between nodes i and j. For simplicity, we choose R = 1; thus d_{ij} = 1 whenever nodes i and j are connected through an intralayer or interlayer edge. To illustrate the effect, we take the above-mentioned Krackhardt's Kite network (plotted in Figure 5) and the 9/11 terrorist network (plotted in Figure 1) as examples. The node centralities in Krackhardt's Kite network are shown in Table 2. As the table shows, Node 4 is the most important node under the Degree, Katz and proposed INF measures, while Node 8 has greater Betweenness and Nodes 6 and 7 have greater Closeness (or Eigenvector) scores. Thus, the node list [4, 6, 7, 8] is considered to contain the influencers. Furthermore, to evaluate the nodes' influence, we set each node in turn as initially infected and recorded the number of finally recovered nodes. This process was repeated 10,000 times and the results are shown in Table 3.
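With R = 1, Eqs. (15) and (16) reduce to weighted sums over direct neighbors, which is what makes the measure cheap; a sketch (the multilayer version assumes the layers are given as a list of graphs over shared node labels, a simplification of the general model above):

```python
import networkx as nx

def inf_score(G, i):
    """INF with R = 1 (Eq. 15): sum of w_ij * k_j over direct neighbors."""
    return sum(G[i][j].get("weight", 1) * G.degree(j) for j in G.neighbors(i))

def inf_multilayer(layers, i):
    """INF^M with R = 1 (Eq. 16): accumulate node i's per-layer scores."""
    return sum(inf_score(G, i) for G in layers if i in G)

G = nx.krackhardt_kite_graph()
scores = {v: inf_score(G, v) for v in G}   # node 3 ("Diane") scores highest: 24
```

Only each node's immediate neighborhood is touched, consistent with the complexity analysis given later.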
As shown in Table 3, Node 4 (i.e., Diane), which is considered more influential under the Degree, Katz and INF centralities, yields more recovered nodes (5.3182 on average) over 10,000 SIR simulations. This experiment is available at https://neusncp.com/api/sir. Analogously, we conducted experiments on the three-layer 9/11 terrorist network. In particular, we set the infection probability along intralayer edges to \beta and the probability along interlayer edges to \beta_M = w_{ij} \beta. The experimental results are plotted in Figure 6. By conducting SIR simulations on the three-layer 9/11 terrorist network, we obtained the influential nodes of each layer by counting the finally recovered nodes. Afterward, we sorted the nodes by the average number of recovered nodes and compared this order with the results computed from the proposed INF indicator. Figure 6 shows that the compared values (recovered nodes and INF) follow the same tendency, which verifies the feasibility of the proposed INF measure. Notably, several influential nodes, such as "Essid Sami Ben Khemais", "Mohamed Atta", and "Marwan Al-Shehhi", also occupy central positions in the network, as shown in Figure 1a. The experimental results on the two sample networks show the feasibility of the proposed measure on monolayer and multilayer networks, respectively. Experiments on more real-world networks are given in Section 4.

--- Complexity Analysis

Suppose m and n are the numbers of edges and nodes, respectively, L is the number of layers, d is the average degree, and R is the truncation radius (commonly set to R = 1). The complexity of INF for a monolayer network is O(n + d^R). For multilayer networks, the computational complexity is O(n + L d^R), where L is a small positive integer. Thus, the time complexity is acceptable at O(n + Ld).
Overall, the proposed measure considers more neighboring information than degree centrality and has a lower computational complexity than betweenness and closeness centrality (i.e., O(nm + n^2 log n)).

--- Experiments and Discussion

The experimental environment was an Intel(R) Core(TM) i5-7200U CPU @ 2.50 GHz (4 CPUs, 2.7 GHz) with 8 GB of DDR3 memory, running Windows 10 64-bit; the programming language was Python 3.7.1, and the relevant libraries were NetworkX 2.2 and Multinetx. The goal of the experiments was to compare the performance of the proposed INF measure with competitive indicators.

--- Experimental Datasets

In this paper, 21 real-world datasets were employed to verify the performance of the proposed method, divided into two groups. The first group covered 12 monolayer networks: four social networks (Club, Dolphins, 911 and Lesmis), three biological networks (Escherichia, C.elegans and DMLC), two collaboration networks (Jazz and NS), a communication network (Eron), a power network (Power) and a transport network (USAir), as shown in Table 4. The second group covered nine multilayer networks: six social networks (Padgett, Krackhardt, Vickers, Kapferer, Lazega and CS-Aarhus), two transport networks (LondonTransport and EUAirTransportation) and a biological network (humanHIV), as shown in Table 5. Data availability: http://www.neusncp.com/user/file?id=12&code=data.

--- Performance Comparison

To verify the performance of the proposed node influence measure, we carried out a node-removal experiment on the above-mentioned datasets: nodes were removed in descending order of a given indicator, and the number of subgraphs was recorded after each removal. This process was repeated until no nodes remained. The varying tendency of the number of subgraphs reflects the influence captured by the focal centrality.
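The removal procedure just described can be sketched in a few lines (counting connected components as the number of subgraphs):

```python
import networkx as nx

def removal_curve(G, ranking):
    """Remove nodes in the given order; after each removal record the
    number of connected components (subgraphs) that remain."""
    H = G.copy()
    counts = []
    for v in ranking:
        H.remove_node(v)
        counts.append(nx.number_connected_components(H))
    return counts

# Removing the middle of a 5-node path first splits the network immediately.
curve = removal_curve(nx.path_graph(5), [2, 1, 3, 0, 4])
```

A ranking that breaks the network apart sooner (an earlier, higher peak in this curve) identifies more structurally influential nodes.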
The experimental results are shown in Figure 7. As the nodes were removed, the number of subgraphs increased and reached a maximum when the network was totally broken up, i.e., when no edges remained. Afterward, the number of subgraphs (now equal to the number of isolated nodes) decreased and finally reached zero when all nodes had been removed. The maximum number of subgraphs was obtained by the proposed INF measure on all datasets except C.elegans; even there, the result obtained by INF was very close to the best result (from BC), which suggests the feasibility of the proposed INF measure. We then applied the SIR model to compare the influence rankings calculated by each indicator on the above-mentioned networks. Initially, one node was set to the infected state and infected its neighbors with probability \beta; infected nodes then recovered, never to be infected again, with probability \gamma. This spreading process was repeated until no infected nodes remained in the network. The influence of any node i can be estimated by

P(i) = N_R / N, (17)

where N_R is the number of recovered nodes after the spreading process and N is the total number of nodes in the network. For simplicity, we set \gamma = 1 and used the epidemic threshold

\beta_c \approx \frac{\langle k \rangle}{\langle k^2 \rangle - \langle k \rangle}. (18)

After obtaining the standard node-influence sequence via SIR model simulations, we employed Kendall's Tau coefficient [65] to compare the performance of each indicator. Kendall's Tau measures the correlation strength between two sequences. Given the standard sequence X = (x_1, x_2, ..., x_N) and a computed sequence Y = (y_1, y_2, ..., y_N) obtained by a certain indicator, any pair of two-tuples (x_i, y_i) and (x_j, y_j) (i \neq j) is concordant if both x_i > x_j and y_i > y_j, or x_i < x_j and y_i < y_j.
Meanwhile, a pair is discordant if x_i > x_j and y_i < y_j, or x_i < x_j and y_i > y_j; if x_i = x_j or y_i = y_j, the pair is neither concordant nor discordant. Kendall's Tau coefficient is then defined as

\tau = \frac{N_c - N_d}{0.5 n (n - 1)}, (19)

where N_c and N_d are the numbers of concordant and discordant pairs, respectively. The range of \tau is [-1, 1]. Table 6 shows the computed Tau results against the standard sequence from the SIR model simulations. As shown in Table 6, the proposed measure outperformed the competitors in most cases; even on the Escherichia network, the Tau result of INF (0.0692) was close to that of CC (0.0971), so it remained competitive there as well.
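Eq. (19) can be computed directly from its definition (scipy.stats.kendalltau offers an equivalent, tie-corrected version):

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall's Tau (Eq. 19): (concordant - discordant) / (n(n-1)/2).
    Tied pairs count as neither, matching the definition in the text."""
    n = len(x)
    nc = nd = 0
    for i, j in combinations(range(n), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            nc += 1
        elif s < 0:
            nd += 1
    return (nc - nd) / (0.5 * n * (n - 1))
```

Two identically ordered sequences give tau = 1, fully reversed ones give tau = -1, which is why tau close to 1 means an indicator reproduces the SIR ground-truth ranking well.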
When limiting identification to k influencers, we conducted experiments on the real-world datasets using the top-k nodes under each centrality and compared the resulting numbers of finally recovered nodes. To study the varying parameter k against the obtained spread, we ran experiments on the above-mentioned datasets at several ratios \beta / \beta_c, as shown in Figure 8. As the figure shows, the proposed node influence method is quite competitive on most of the datasets, although second to the betweenness indicator on the DMLC and Jazz datasets. Analogously, we conducted experiments on the nine multilayer networks by removing the nodes with maximum centralities; the results are plotted in Figure 9.
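The epidemic threshold of Eq. (18), which fixes the \beta values swept in these experiments, depends only on the first two moments of the degree distribution:

```python
import networkx as nx

def epidemic_threshold(G):
    """Eq. (18): beta_c ~ <k> / (<k^2> - <k>), from the degree sequence."""
    degs = [d for _, d in G.degree()]
    k1 = sum(degs) / len(degs)
    k2 = sum(d * d for d in degs) / len(degs)
    return k1 / (k2 - k1)

# A ring (2-regular) gives beta_c = 1; a hub-and-spoke star gives a much lower value.
bc_ring = epidemic_threshold(nx.cycle_graph(10))
bc_star = epidemic_threshold(nx.star_graph(9))
```

The star's lower threshold reflects the familiar result that degree-heterogeneous networks let epidemics take off at smaller infection probabilities.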
Social network analysis is a multidisciplinary field covering informatics, mathematics, sociology, management, psychology, etc. In the last decade, the development of online social media has provided individuals with a fascinating platform for sharing knowledge and interests. The emergence of various social networks has greatly enriched our daily lives and, simultaneously, has raised the challenging task of identifying influencers across multiple social networks. The key difficulties lie in the varied interactions among individuals and the huge data scale. To address this problem, this paper employs a general multilayer network model to represent multiple social networks and then proposes a node influence indicator based merely on local neighboring information. Extensive experiments on 21 real-world datasets verify the performance of the proposed method, which shows superiority over its competitors. This work is of significance for revealing the evolution of social networks, and we hope it will shed light on this promising field for forthcoming researchers.
As shown in Figure 10, the runtime accumulated over either group indicates that the proposed INF measure is efficient: it is close to that of DC and superior to BC, CC and LGR.

--- Discussion

Influencer identification is a fundamental issue with wide applications in many fields, such as epidemic control, information diffusion and viral marketing. Degree centrality [19] is the simplest method, considering nodes with larger degrees to be more influential; however, for lack of global information, a node lying in a "bridge" position may be neglected because it holds a small degree. Betweenness [22] and closeness [23] centrality consider global information, but their high complexity makes them unsuitable for large-scale networks. Local gravity is a balanced method; however, determining the parameter R requires computing the network diameter, which is also time-consuming. Thus, this paper proposes a novel node influence measure that considers merely the local neighboring information of a focal node, with complexity O(n + Ld). Experimental results on 21 real-world datasets indicate the feasibility of the proposed measure.
Firstly, the experiments counting subgraphs while removing influential nodes demonstrate the capability of the proposed INF measure: when nodes are removed according to the INF indicator, the networks break up more easily, as shown in Figures 7 and 9. Secondly, we applied the SIR model to evaluate node influence, which suggests the proposed INF measure is competitive with the other indicators in most cases. Although inferior to BC on the Jazz and DMLC networks, it remains competitive there. Analyzing the structures of these two networks, we find that the nodes of the Jazz network are densely connected (average degree 27.6970) and most nodes hold a similar number of neighbors (approximately 28), which makes it difficult to determine which node is more influential. In contrast, in the DMLC network only one node (Node 2) holds a large number of neighbors (439), while the others hold only a few (approximately four), which likewise makes influencer identification difficult. Overall, the proposed method outperforms the other indicators in most cases. Finally, we compared the running time of each indicator on the 21 real-world datasets; the results show the efficiency of the proposed measure.

--- Conclusions

To solve the problem of identifying influencers in social networks, this paper proposes a novel node influence indicator. The method considers merely local neighboring information, making it fast and suitable for applications in large-scale networks. Extensive experiments on 21 real-world datasets show that the proposed method outperforms its competitors. We also compared time complexities and verified the efficiency of the proposed indicator. Overall, the proposed node influence indicator is capable of identifying influencers in social networks.
The contribution of this work is likely to benefit many real-world social applications, such as promoting network evolution and preventing the spread of rumors. As future work, influencers in dynamic networks can be studied by applying the proposed INF measure to a multilayer network model with numerous ordinal layers; a node's influence can then be calculated by accumulating its local neighbors across all layers. The effect of individual layers also needs to be taken into consideration. In short, we hope the findings of this work will help to advance research in this promising field.

--- Author Contributions: X.H. designed the method and wrote the original draft; D.C. revised the manuscript; T.R. and D.W. checked the manuscript and made some modifications. All authors have read and agreed to the published version of the manuscript.

--- Conflicts of Interest: The authors declare that there are no conflicts of interest regarding the publication of this paper.
Background The effects of financial crises on health have been studied for decades. The evidence suggests that recessions have damaging effects on many health indicators, particularly mortality and suicide [1]. There is also evidence that financial crises can have some positive effects on health (e.g. fewer workplace accidents or less tobacco consumption), although in general the results are more heterogeneous [2]. Furthermore, periods of financial crisis are associated with higher psychological stress among the population and greater use of mental health services [3,4]. Increased levels of anxiety and depression are likewise recorded [5]. In turn, these conditions are associated with an increase in the number of attempted suicides and premature deaths due to episodes of violence and suicide [6,7], and with increased consumption of alcohol [8]. However, the effects of an economic downturn do not have the same impact on all individuals and all countries; sex, age, level of education, marital status, size of household, employment, income, belief systems and social relationships are individual factors which have a bearing on better or worse resilience [9]. Socio-economic factors can also play a part in this impact. Analysis of the policies implemented by some countries during times of economic crisis reveals the link between these policies and their impact on mental health among the population [10][11][12]. Austerity measures such as the massive cutbacks made as a result of the crisis in different European countries have had a harmful effect on mental health [11]. Precisely when individuals may require more care due to mental health problems, cutbacks in the healthcare sector may lead to reduced services for prevention, early detection and treatment of mental health problems. In this respect, vulnerable groups (people in financial difficulty and people with health issues) would be at higher risk [13].
The meta-analysis by Paul and Moser [14] showed that the negative effect of unemployment on mental health was more pronounced in countries with a low level of economic development, unequal distribution of income or weak unemployment benefit systems. The effect of contextual factors has been noted in highly diverse geographical areas distant from Spain such as Asia, where the economic crisis appears to have had a lower impact on health in Malaysia than in Thailand or Indonesia. Unlike its neighbours, Malaysia rejected World Bank advice to make cutbacks in healthcare spending [12]. Spain has stood out as one of the countries most severely affected by the so-called great recession [15], one of the most overwhelming effects of which is unemployment [15][16][17]. To analyse the impact on health of the crisis in Spain, two particularities must be taken into account: on the one hand, the healthcare system provides almost universal coverage and on the other, there are differences between regions as a result of political decentralisation. An example of this is the spending gap per inhabitant between the regions with the highest and lowest spending, reaching 62% in 2014 [18]. As regards social protection (retirement pension, sickness or disability benefit, unemployment benefit, measures to protect families and prevent social exclusion), this gap was 87% [18]. A recent study detected major differences in austerity measures during the recession [19]; whilst in the Basque Country policies for austerity and privatisation were almost non-existent, the trend in other regions such as La Rioja, Madrid and the Balearic Islands was clearly in the opposite direction. This reality may determine variations in the impact of the recession depending on the region where people live, as a result of how different Autonomous Community governments have responded to the recession. 
Studies on the impact on mental health of contextual factors between regions in the same country are limited [9-14, 20], and we consider that looking at regions in a single country facilitates comparison, given similarities in the population as regards culture, values and belief systems. Various articles have addressed the impact of socioeconomic crises on mental health in Spain [3-5, 8, 13, 15-18, 21-25], but they have focused only on analysing the effect of individual factors. In addition to these individual variables there are contextual variables which can either lessen or intensify the adverse effects of the crisis, among which are variables relating to the political and institutional context, such as economic indicators, public welfare services indicators and labour market indicators. The impact of the crisis on the health of the population could be lessened or intensified by policies affecting the financial security and social conditions of families [1]. The aim of this study is to analyse the socio-economic factors impacting on mental health during the recession in Spain.

--- Methods

--- Design

Cross-sectional descriptive study of two periods: before the recession (2006) and after the recession (2011-2012).

--- Study population

Individuals aged 16+ years, resident in Spain, polled for the National Health Survey in 2006 and 2012: 25,234 subjects in 2006 and 20,754 subjects in 2012.

--- Variables

Dependent: psychic morbidity, measured through self-reported poor mental health: yes (GHQ >= 3) / no (GHQ < 3), according to the 12-item Goldberg General Health Questionnaire (GHQ-12), adapted and validated in our setting.

--- Individual independent

Socio-demographic variables: a) axes of social inequality: age, socio-professional class, level of education (low, medium or high, as per the ISCED International Standard Classification of Education).
Low level equates to no schooling or primary education, medium level to secondary education and mid-grade vocational training, and high level to advanced vocational training and university qualifications; nationality was also recorded. b) Other: employment situation, marital status. Social class was determined from current or most recent professional occupation according to the National Occupation Classification CNO-2011. Psycho-social variables: social support (emotional and personal support assessed by means of the Duke-UNC Functional Social Support Questionnaire).

--- Contextual independent

The contextual variables were selected on the basis of their availability for the years analysed and their degree of disaggregation by region (Additional file 1). The geographical unit of analysis is the NUTS-2 region of EUROSTAT (called Autonomous Communities in Spain). To calculate socio-economic indicators, we used data from the National Institute of Statistics (GDP per capita, income per capita per household and risk of poverty) [26,27]; Eurostat (employment and unemployment rates, percentage of temporary workers) [28]; and the BBVA Foundation (healthcare spending per capita) [29].

--- Data analysis

All analyses were performed by sex (male and female) and for the total population. Prevalence was calculated for the psychic morbidity variable, and the independent proportions comparison test was applied to detect significant changes. The Chi-square test was used to compare determinants between the two periods. Two multilevel logistic regression models with random effects were constructed to determine change in psychic morbidity according to individual and contextual variables, respectively. In the first model, the study period and predictor variables at the individual and socioeconomic level were included, with intercepts at the NUTS-2 region level as a random effect.
In the second model, contextual variables were included individually (to avoid collinearity) and adjusted for individual characteristics, with intercepts at the NUTS-2 region level included as a random effect. In all models, the significance of differences was assessed using the Wald test for each predictor. Correction of the clustered robust variance was carried out using the observed information matrix (OIM). The magnitude of effects is reported as the odds ratio (OR) with 95% confidence interval, and a significance level of 0.05 was set for hypothesis testing. In the models for macroeconomic context indicators, the magnitude of association was expressed for a change of approximately one standard deviation of the contextual variable analysed. Statistical analyses were performed using Stata software (StataCorp., TX). --- Results Between 2006 and 2011-2012, the pattern of psychic morbidity differed between men and women. Among men, poor mental health increased significantly in the 30-34 age group (14.2%-17.0%) and in the 45-59 age group (16.1%-19.9%), as well as among single men (14.4%-17.2%) and married men (14.5%-16.7%), men with a low level of education (17.5%-19.8%) and men with normal social support (14.6%-16.8%). Country of origin was not found to be linked to differences in prevalence of poor mental health, since the increase was significant for Spaniards and foreigners alike. Nor was any link found between socio-professional class and differences in prevalence of psychic morbidity (Table 1). Among women, the groups showing significant differences in mental health between 2006 and 2012 were the 16-29 age group (a drop from 22.3% to 17.3%) and the over-60 age group (a drop from 33.8% to 29.4%). Married women (25.3%-23.7%) and widows (37.2%-33.3%) also showed a significant decrease in the prevalence of poor mental health, as did working women (21.9%-19.7%), retired women (36.3%-30.5%) and women studying (21.9%-16.4%).
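The independent proportions comparison test used to compare prevalences between waves is the standard pooled two-proportion z-test. A self-contained sketch follows; the counts in the usage line are purely illustrative, not the paper's data:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z(x1, n1, x2, n2):
    """Compare two independent prevalences (e.g. psychic morbidity
    in 2006 vs 2011-12) with a pooled two-proportion z-test."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)       # pooled proportion under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Illustrative (made-up) counts: prevalence rising from ~14.3% to ~16.9%
z, p = two_proportion_z(3600, 25234, 3500, 20754)
```

With roughly 20,000-25,000 respondents per wave, even a change of two to three percentage points in prevalence is highly significant, which is consistent with the many small but significant differences reported in the Results.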
In accordance with the first multilevel logistic regression model (Table 2), among men, widowers (OR: 1.45; 95% CI: 1.27-1.55) presented a higher risk of poor mental health than single men, as did separated or divorced men (OR: 1.54; 95% CI: 1.33-1.78). By contrast, married men (OR: 1.97; 95% CI: 0.91-0.798) presented a lower risk of psychic morbidity than single men. As regards employment situation, unemployed men presented a higher risk of psychic morbidity than working men (OR: 1.81; 95% CI: 1.67-1.98) and retired men (OR: 1.23; 95% CI: 1.12-1.35). Lastly, a link was found between better social support and lower risk of psychic morbidity. Among women, widows (OR: 1.40; 95% CI: 1.24-1.57) presented a higher risk of psychic morbidity than single women, as did separated or divorced women (OR: 1.61; 95% CI: 1.43-1.71). As regards employment situation, homemakers presented a higher risk of psychic morbidity than working women (OR: 1.84; 95% CI: 1.14-1.95) and retired women (OR: 1.63; 95% CI: 1.60-1.75). Lastly, a link was found between better social support and lower risk of psychic morbidity. --- Table 1 Prevalence of poor mental health (according to individual characteristics), 2006 and 2012 Values in grey: p < 0.05 According to the second multilevel logistic regression model, among the macroeconomic variables studied, those associated with worse mental health for men and women alike were lower healthcare spending per capita and a higher percentage of temporary workers. By contrast, risk of poverty, income per capita per household, Gross Domestic Product and employment rate were not found to be linked to worse mental health (Table 3). Among women, the only contextual variable associated with worse mental health was healthcare spending per capita (the risk of poor mental health increased by 6% for each €100 decrease in healthcare spending per capita).
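The percentage interpretations reported here (per €100 of spending, per 5 percentage points of temporary work, or per one standard deviation) follow from a simple rescaling of the fitted log-odds coefficient: the OR for a k-unit change is exp(k * log(OR per unit)). A short sketch; the per-unit OR below is hypothetical, chosen only so that a €100 change reproduces roughly the reported 6%:

```python
from math import exp, log

def rescale_or(or_per_unit, k):
    """Rescale an odds ratio to a k-unit change in the exposure:
    OR_k = exp(k * log(OR_per_unit))."""
    return exp(k * log(or_per_unit))

# Hypothetical OR of 1.00058 per 1 euro decrease in per-capita
# healthcare spending -> roughly 1.06 per 100 euro decrease
assert abs(rescale_or(1.00058, 100) - 1.06) < 0.01
```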
Among men, the contextual variables associated with worse mental health were healthcare spending per capita and the percentage of temporary workers (the risk of poor mental health decreased by 8% for each 5-percentage-point increase in temporary workers). --- Discussion The economic crisis has hit Spain far harder than other European countries, with the possible exceptions of Portugal, Greece and Cyprus [25]. The recession has had a significant impact on conditions and levels of employment and on poverty rates in Spain as a whole, although with considerable differences between Autonomous Communities. In this respect, in a prior study comparing regions, Zapata states that "Spain is currently a natural laboratory for exploring how negative macroeconomic changes affect health" [25]. As regards limitations, Parmar [2] states that the majority of studies on crises and health are subject to biases, pointing above all to reverse causality or the failure to take possible prior trends into account. In this study, first, we used a short period to study the impact of the crisis, with two cut-off points; it is therefore quite possible that mental health has continued to worsen. It was not possible to measure the trend, since the Health Survey did not measure psychic morbidity in previous years. Second, given the cross-sectional design, the possible existence of reverse causality cannot be ruled out. There may also be some uncontrolled confounding bias, given that other variables (some gathered in the surveys and others not) which may affect mental health were not taken into account. Yet in spite of these limitations, our study is the first of its kind to use a multilevel design to investigate the impact of contextual variables during the recession in Spain and their possible consequences for mental health.
The socio-economic factors linked to mental health were healthcare spending per capita and the percentage of temporary workers. Estimating the contribution of factors which can affect the health of the population is a complex and inexact task [30]. What does seem clear is that a robust health system can level out inequalities, since it enables support to be given to the most vulnerable sectors of the population [31]. By contrast, a weaker health system (with lower spending) would leave the most vulnerable less protected, and these groups are the most exposed in a recession and therefore at higher risk of worse mental health. Although Spain has a national health system which provides (almost) universal coverage, there is considerable variation in healthcare spending and services from one Autonomous Community to another [32]. It is difficult to find reliable data on healthcare spending specifically for mental health, since budgets are not broken down by medical field. However, it is not unreasonable to believe that it may have suffered the same fate as spending as a whole, at least as regards the most general figures and trends. Inequalities in healthcare spending have a two-pronged effect: a) differences in resource allocation for service provision in different regions (the territorial perspective) and b) differences in public health insurance contributions by individuals or families (the personal perspective) [33]. There is an additional problem as regards the provision of mental health services, which professional associations for mental health have condemned for years: Spain still brings up the rear in comparison with other European countries in terms of numbers of mental health practitioners, as shown by official WHO figures [34]. The link between worse mental health and the percentage of temporary workers can be understood given that economic recessions can have a direct effect on people who keep their jobs.
These individuals face situations of stress and anxiety caused by possible reductions in income, greater employment insecurity and increased workload. Recessions can likewise have a disproportionate negative impact on vulnerable population subgroups such as persons with a pre-existing mental disorder, persons of low socio-economic level, or the unemployed [35]. The literature shows contradictory results for the relationship between unemployment and mental health. Some studies have found that unemployment is associated with poorer mental health, particularly amongst women [36], whilst others have found that during recessions, or in regions with higher unemployment, when the number of unemployed people increases and unemployment becomes a shared status, the psychological cost and stigma of being unemployed diminish and the subjective well-being of the unemployed improves [37]. Taking into consideration the contextual variables found in our study, these differences would be nuanced by factors such as per capita healthcare spending or the percentage of temporary workers. In the light of these findings, one might think that different political responses to economic crises would give rise to different mental health outcomes among the population. For example, in Spain, high unemployment levels in the 1970s and 1980s were accompanied by a corresponding increase in the risk of suicide. In Sweden, however, the banking crisis of 1990 left many people unemployed, but the suicide rate dropped even during this period. This marked difference has been attributed to the protection provided by the Swedish welfare state [38,39].
As regards the measures which should be taken during economic crises to palliate their effects on mental health, Kentikelenis and Papanicolas [40,41] state the need to safeguard programmes for vulnerable groups, such as services for the mentally ill and drug addiction rehabilitation programmes; to increase the number of general practitioners working in rural areas; to take on patients' non-medical costs of illness; and to prescribe a higher proportion of generic drugs in order to make savings in spending on drugs. Other studies have highlighted the effectiveness of policies such as active labour market programmes, which have a significant impact on reducing suicide rates [38]. Policies which aim to prevent individuals from taking on too much debt and to make it easier to pay off debts could be beneficial for people whose excessive levels of debt cause them stress [41]. Similarly, policies or initiatives such as financial mediators have huge potential for mitigating the effects of recession [42]. As regards health centres, it has been found that health initiatives exploring the subjective perception of loneliness can be effective in improving mental health and should focus particularly on individuals in poor health and the unemployed [43]; similarly effective are programmes which support the role of primary care professionals in detecting persons at risk of suicide or other psychological problems [42]. Therefore, instead of making cutbacks in healthcare and social welfare, there should be higher spending on measures for social protection during times of recession and increased support for mental health programmes in the health sector, particularly in primary care [44,45].
Additionally, there should be more comprehensive and cooperative consolidation of the mental health network within healthcare (social services, primary care, specialised care, and social rehabilitation and reintegration) which takes into account the specific needs of the individuals on whom this healthcare sector focuses [45]. --- Conclusions Data will be required in the coming years in order to analyse whether fresh government cutbacks to healthcare and social spending [35] and the policies implemented by different Autonomous Communities have a medium- and long-term impact on mental health among the Spanish population. Furthermore, it should be noted that social inequalities in Spain have increased since the beginning of the financial crisis. Moreover, various studies have highlighted that increased social inequalities are not only an effect of the crisis but also a determining factor in it. Therefore, a more sustainable economic model should make the reduction of social inequalities one of its primary goals [46]. --- Key points Various articles have addressed the impact of socioeconomic crises on mental health. They have focused on analysing the effect of individual factors and have left out other factors linked to welfare-state public services and economic indicators, which would be proxies for public policies implemented at the regional level. The impact of the crisis on the health of the population could be lessened or intensified by policies affecting the financial security and social conditions of families. The findings of this study emphasise that policies during periods of recession should focus on support and improved conditions for vulnerable groups such as temporary workers. Healthcare cutbacks should be avoided in order to prevent increased prevalence of poor mental health among the population. --- Availability of data and material Please contact the author for data requests. --- Additional file Additional file 1: Contextual Indicators.
(XLSX 12.9 kb) Abbreviations BBVA: Banco Bilbao Vizcaya Argentaria; GDP: Gross domestic product; GHQ: Goldberg health questionnaire; ISCED: International Standard Classification of Education; NOC: National Occupation Classification; NUTS: Nomenclature des unités territoriales statistiques Authors' contributions IRP designed the study. MRB and CBT conducted the statistical analysis. IRP, MRB and CBT drafted the article. All authors provided input during the preparation of the manuscript, and approved the final version. --- Competing interests The authors declare that they have no competing interests.
Background: Periods of financial crisis are associated with higher psychological stress among the population and greater use of mental health services. The objective is to analyse contextual factors associated with mental health among the Spanish population during the recession. Methodology: Cross-sectional, descriptive study of two periods: before the recession (2006) and after the recession (2011-2012). The study population comprised individuals aged 16+ years, polled for the National Health Survey: 25,234 subjects (2006) and 20,754 subjects (2012). The dependent variable was psychic morbidity. Independent variables: 1) socio-demographic (age, socio-professional class, level of education, nationality, employment situation, marital status), 2) psycho-social (social support) and 3) contextual: financial (GDP per capita, risk of poverty, income per capita per household), public welfare services (health spending per capita) and labour market (employment and unemployment rates, percentage of temporary workers) indicators. Multilevel logistic regression models with mixed effects were constructed to determine change in psychic morbidity according to the variables studied. Results: The macroeconomic variables associated with worse mental health for both males and females were lower health spending per capita and the percentage of temporary workers. Among women, the risk of poor mental health increased by 6% for each €100 decrease in healthcare spending per capita. Among men, the risk of poor mental health decreased by 8% for each 5-percentage-point increase in temporary workers. Conclusions: Higher rates of precarious employment in a region have a negative effect on people's mental health, as does lower health spending per capita. Policies during periods of recession should focus on support and improved conditions for vulnerable groups such as temporary workers. Healthcare cutbacks should be avoided in order to prevent increased prevalence of poor mental health.
Introduction University students' classroom performance is an important parameter for predicting their learning and academic achievements [1-4]. As a negative form of classroom performance, classroom silence among university students is a behavioral manifestation in which no verbal interaction occurs during classroom learning [5]. This form of classroom performance rarely leads to better learning gains; on the contrary, it may significantly and negatively affect university students' deep learning and thinking and indirectly affect their academic performance [6]. Numerous experimental and empirical studies have shown that interactive learning benefits university students. A qualitative study revealed that teacher-student interaction was one of the factors that improved university students' learning outcomes [7]. Studies based on an empirical comparative perspective have also demonstrated that the gains of interactive learning are significantly better than those of silent, passive individual learning [8,9], although some experimental studies found no significant difference in the impact of interactive versus traditional classrooms on undergraduates' performance. Even so, interactive classrooms can make classroom instruction more efficient. In addition, interactive classrooms are more helpful than traditional classrooms in closing the score gap between students, which is particularly beneficial for students with lower scores [10]. In higher education institutions worldwide, university classroom silence is particularly prominent in the Chinese learner population [11-17]. Therefore, the phenomenon of classroom silence and the challenge of improving Chinese university students' classroom learning performance have received much attention from researchers and practitioners. Regarding the classroom silence of Chinese university students, existing studies have shown strong interest in two types of contexts.
The first type of research explored the silence of Chinese students in overseas university classrooms. These studies mainly investigated Chinese students' reticence in university classrooms overseas, such as in New Zealand [18], the United States [15,19,20], and Australia [21], to interpret and understand the causes of silent classroom behavior among undergraduates learning from teachers with different cultural backgrounds. For example, Wilkinson and Olliver-Gray [18] conducted an exploratory study of Chinese learners' lack of participation in classroom discussions, a frequent problem in New Zealand universities. Based on the concept of "cultural learning encounters", the study unpacked the different interpretations of non-participation and excessive speaking by New Zealand and Chinese students and, drawing on a comparison of three different forms of teaching organization, highlighted the need to create a culturally collaborative teaching model in university classrooms with international students. Because the subjects of such studies were in cross-cultural contexts, researchers' cultural sensitivity often made them much more concerned with the role of cultural factors (especially Chinese and Western cultural differences) in the reticence of Chinese students in classroom settings. Therefore, the findings of such studies are of less relevance in explaining Chinese university students' classroom silence in local cultural contexts. The second type of research investigated the silence of Chinese undergraduates in foreign language classrooms at Chinese universities and its influencing factors [13,14,17,22-25]. For example, Liu and Jackson [25] (pp. 119-137) investigated the performance of 93 first-year non-English majors at a top Chinese university in the English classroom.
Through viewing course videos, reflective diaries, and interviews, the researchers found that although students self-reported a strong willingness to speak, their actual speaking behavior was low. The reasons behind this were complex, including multi-dimensional factors such as language, culture, education, psychology, and personality; among them, the lack of English language ability, or students' lack of confidence in their English proficiency, was the most important constraint. He [22] (pp. 87-142) investigated 302 non-English majors from two Chinese universities on the phenomenon of learning anxiety in university English classes. According to the questionnaire, a lack of vocabulary or background knowledge was the primary cause of anxiety and silence; in the interviews, students listed 16 reasons for anxiety and silence, a quarter of which were factors directly related to English language ability. In contrast to the first type of research, this type of research broke out of the confines of cultural centrism. However, as the language medium in the classroom was English, researchers tended to consider the lack of English language skills as the dominant cause of Chinese university students' classroom silence. The focus was on factors such as lack of self-confidence, fear of classroom participation, and fear of losing face due to poor performance to explain Chinese learners' silent performance in English classes at domestic universities [13,14]. Therefore, such research was constrained by classroom language contexts and could not well reflect the silence of Chinese university students in classroom situations where the native language was the medium of communication. Obviously, the studies above mainly discussed the speech situation of Chinese undergraduates in foreign-language-mediated classrooms, while the studies on overseas classrooms also involved cultural heterogeneity in addition to language barriers.
However, for Chinese university students, classrooms at Chinese universities that use their mother tongue as the medium of communication are the more important learning environments. Considering this, some studies have begun to focus on this third classroom situation. For example, Zhang and McNamara [16] (pp. 146-147) studied students' participation in mathematics and Chinese classes at Shandong University and found that a lack of classroom interaction between teachers and students was common [13,14]. Lv analyzed the types of students' silence in general education classes at Nanjing University and the complex psychological factors behind them [6,26]. In addition, some studies have begun to explore strategies to reduce such silence in the classroom [27-29]. However, compared with the first two types of research, research in this area is still very limited, especially regarding an in-depth understanding of the silencing mechanism of Chinese university students. Two insights can be drawn from current studies: on the one hand, Chinese undergraduates' classroom silence exists in a wide range of classroom contexts; on the other hand, there is some variability in the attribution of Chinese students' classroom silence across studies based on different classroom contexts. This suggests that changes in context may bring about changes in silence factors, which reflects the need to focus on new contexts. Therefore, this study explores in depth the native-language-mediated classroom context at domestic universities, which has received less research attention but is the primary classroom learning environment for Chinese university students, to further verify whether classroom silence still exists among Chinese undergraduates in conditions free from cultural differences and language adaptation and, if so, what the features and causes of silence are in this situation.
Answers to these questions will help deepen the understanding of Chinese university students' classroom reticence; that is, when the language and culture summarized by present studies are no longer the main reasons, what are the key factors contributing to this phenomenon? Since the professional courses of non-foreign-language majors in Chinese universities are the most extensive native-language classroom situations, this study focuses on the phenomenon of classroom silence among Chinese university students in such situations and, further, selects education-major courses as the specific research context. Professional courses for undergraduates majoring in education are the main site for these students to learn educational theories, knowledge, and skills. After one to two years of professional study, these students have already acquired a preliminary professional grounding in education, have a certain professional judgment about "what is a good classroom", and are highly sensitive to and reflective about educational issues. At the same time, most of these students are potential future teachers, and their classroom learning experiences at the undergraduate level have a profound impact on their subsequent classroom teaching practices. Since China entered the 21st century, curriculum reform has been continuously promoted. One of the trends in the reform is to highlight the dominant position of students and emphasize students' classroom participation. For this reason, new classroom learning methods such as cooperative learning have also been proposed [30,31].
Therefore, investigating the current situation and causes of classroom silence in the professional courses of university students majoring in education will not only help to expand the understanding of classroom silence among Chinese undergraduates, but can also partially predict whether these future teachers will grow to be enablers of or hindrances to the curriculum reform, so that action can be taken as early as possible. Yet we know very little about it due to omissions in existing studies. To sum up, the purpose of this study is to reveal the specific representation and formation mechanism of the professional classroom silence of Chinese undergraduates majoring in education. Specifically, taking university students majoring in education at a normal university in China as the participants, the grounded theory method is used to explore the experience of silence in the professional class and the complex factors behind it, and to develop a corresponding substantive theoretical model, in order to expand and deepen the recognition of Chinese university students' classroom silence. Due to the similarities of cultural backgrounds, the conclusions of this study may also help to explain the phenomenon of classroom silence among Asian university students elsewhere. In addition, due to the particularity of the participants, the relevant findings of this study will also help to grasp the reasons for the success or failure of curriculum reform that advocates students' classroom interaction, in terms of the contribution of teacher education. --- Materials and Methods This study uses a grounded theory approach to collect data, analyze information, and construct theory. Originally developed by Glaser and Strauss [32], the grounded theory approach was intended to oppose the deductive paradigm, using experience instead to generate theory.
Grounded theory advocates for the discovery of theory from experience and then using theory to reflect the experience and serve an understanding of experience. The goal of grounded theory is to generate a theory from empirical material to explain a pattern of behavior that is relevant to the participant, or to the problem with which the participant is involved. Generally, such a theory is a substantive theory that is relevant, focused on individuality and complexity, and explores a particular phenomenon and its intrinsic connections. When the substantive theory accumulates to a certain extent it can also be developed into a more generalized formal theory. In general, grounded theory is a generative rather than a validated methodology [32] (pp. 2-3), and the flow of qualitative investigation using the grounded theory approach is shown in Figure 1.
We adopt the grounded theory approach commonly used in qualitative research to systematically collect and analyze empirical data for several reasons. First, since the purpose of this study is to explore the nature of silence in the professional classroom of Chinese university students majoring in education, a deductive-based approach is not applicable. Grounded theory is a proven method for exploring essentials, allowing concepts and categories to emerge naturally with greater objectivity, and has been successfully applied in many fields. For example, Burns and Schneider [33] used grounded theory to reveal the elements of leadership programs that had the greatest impact on the alumni's lives and careers, as well as recommendations for how the program could better prepare students for the future.
Second, the grounded theory approach is good at refining and summarizing the students' learning experiences. Because grounded theory describes and conceptualizes respondents' perspectives, behaviors, and lived experiences in the context of their lives, it ensures a participant-centered understanding [34] (pp. 131-146). Third, the nature and formation mechanism of classroom silence among Chinese undergraduates majoring in education have not been thoroughly studied, and grounded theory has important applications when theory and research are underdeveloped and underdefined [35] (pp. 5-7). Fourth, based on research in other contexts, the nature and mechanisms of silence in the classroom are complex and involve a variety of relevant factors, which is where grounded theory can be useful.
Third, the nature and formation mechanism of classroom silence among Chinese undergraduates majoring in education have not been thoroughly studied, and grounded theory is especially applicable when theory and research are underdeveloped and underdefined [35] (pp. 5-7). Fourth, research in other contexts suggests that the nature and mechanisms of silence in the classroom are complex and involve a variety of relevant factors, which is where grounded theory can be useful. Grounded theory is defined as "the discovery of theory from data systematically obtained and analyzed in social research" [32] (p. 1) and is a method well suited to capturing social psychological processes. The use of grounded theory as a methodology can help to reveal the formation mechanism of classroom silence among university students majoring in education in China. --- Participants Participants in this study were recruited by the researchers from a normal university directly under the Ministry of Education in central China in spring 2019, spring 2020, and spring 2021. In China, normal universities are the main institutions for training future teachers, and education is their dominant discipline; they generally cover both the undergraduate and graduate levels. Few comprehensive universities offer education majors, and even fewer offer undergraduate education majors, so a normal university was chosen as the study site. The university chosen on this basis is one of the six key comprehensive normal universities in China and one of the Chinese universities with a relatively highly developed discipline of education, making it a representative site for this investigation.
Considering that juniors have already had substantial professional learning experience and a deeper sense of classroom silence, and that many seniors are off campus for internships or job searches, this study preferentially selected junior students who had experienced silence in professional course classrooms. In March 2019, the researchers recruited 156 eligible participants by distributing recruitment leaflets. In March 2020, because of the impact of the epidemic, the researchers recruited 131 eligible participants by distributing electronic recruitment leaflets. In March 2021, the researchers recruited 107 eligible participants by distributing recruitment leaflets. This yielded a sample of 394 participants (see Table 1 for details). The inclusion criteria for participants were as follows: (1) have been living and receiving education in China; (2) are enrolled in an undergraduate program; (3) are majoring in education; and (4) have been identified, by self-report and by the classroom instructor, as exhibiting silent behavior in the classroom. --- Procedure The grounded theory research procedure is characterized by the integration of data collection and data analysis. Therefore, data collection for this study was based on the principles of theoretical sampling [36] (p. 197). It adopted a strategy of mutual facilitation and dynamic generation and was conducted in three stages using semi-structured interviews. Theoretical sampling builds on concepts that have proven theoretical relevance to the developing theory [36] (p. 197). The principle is to keep adding to the sample as needed for theory development until each category in the data reaches theoretical saturation (i.e., no new theoretical elements emerge) [32] (pp. 61-62). With the informed consent of the interviewees, the researchers recorded each interview in full and transcribed it verbatim. Each interview lasted between 30 and 60 min.
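The recruitment figures and inclusion criteria above can be sketched as a simple screening filter. This is a toy illustration only: the field names below are hypothetical stand-ins, since the study does not specify its screening instrument.

```python
# Toy sketch of the four inclusion criteria applied as a screening filter.
# All field names are hypothetical stand-ins, not the study's instrument.

def eligible(p: dict) -> bool:
    return (p["educated_in_china"]          # (1) living/educated in China
            and p["undergraduate"]          # (2) enrolled in an undergraduate program
            and p["major"] == "education"   # (3) majoring in education
            and p["silent_in_class"])       # (4) silence by self-report and instructor report

# The three recruitment rounds reported above sum to the stated sample of 394.
rounds = {"spring 2019": 156, "spring 2020": 131, "spring 2021": 107}
print(sum(rounds.values()))  # → 394
```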
To ensure the privacy of the interviewees, we treated all interviewees anonymously. At the same time, the study tried to maintain rigor while processing qualitative data under the guidance of grounded theory [37,38] (see Figure 2 for details). --- Stage 1 This stage focused on building a preliminary interview system and coding system. First, an initial interview outline was prepared based on the purpose of the study (see Table 2). Based on this, the researchers selected 87 participants from the 156 participants recruited in March 2019 as a preliminary sample, including 5 freshmen, 18 sophomores, 57 juniors, and 7 seniors. These students not only had been silent in their professional classes but also held many views about that silence, which helped the researchers better identify the characteristics of classroom silence and students' attitudes in education major classes. Researchers conducted one-on-one semi-structured interviews with the participants. The interview locations were empty classrooms or the school cafeteria, according to the interviewee's preference. Although the researchers designed the interview outline in advance, in the actual interviews they followed up whenever new and more valuable information emerged. The interview questions were based on the interview outline and focused on the current state of silence in the professional classes experienced by the participants and the reasons for their silence.
Starting with an understanding of classroom silence, the participants were allowed to share autobiographical accounts of themselves and their experiences in class [39] (pp. 119-121). Questions were then directed to the respondents' perceptions and attributions of the silence phenomenon in the professional classroom and strategies to improve it, and pursued new questions as they arose. Once all interviews were completed, two researchers coded these 87 interview texts independently, and then the four researchers discussed the coding rationale and further interview questions together. To enrich the prototype coding system, the researchers continued to select 57 participants from the remaining participants for one-on-one semi-structured interviews and repeated the coding procedure above. This round of interviews further uncovered the reasons for the participants' classroom silence. Subsequently, researchers conducted a focus group interview with the remaining 12 participants to confirm the developed coding system. To reduce participants' nervousness, the researchers chose a smart classroom with a cozy environment and freely movable tables and chairs, arranged the tables and chairs in a ring in advance, and invited all 12 participants to the interview at the same time, with Researcher 2 and Researcher 3 acting as moderator and recorder, respectively. Questions were set up mainly around the interview outline and the results of the previous two rounds of interviews and were adjusted according to the on-site conversation; for example, participants were asked to discuss their perceptions of professional classroom silence in light of their professional knowledge and experience, to obtain more information for the study.
The fieldwork was organized so that data collection and preliminary analysis occurred simultaneously [40] (pp. 64-69). --- Stage 2 This stage focused on further validation and refinement of the coding system developed in the first stage. Only online interviews were used in this stage because students were learning online due to the epidemic. A total of 78 participants were randomly selected from the 131 participants recruited in March 2020, and the researchers again conducted one-on-one semi-structured interviews with them. After that, the researchers selected 43 participants for one-on-one in-depth interviews and conducted a focus group interview with the remaining 10 participants. The specific process was similar to the first stage and is not repeated here. This round of interviews was dedicated to filling out the attributes and dimensions of the emerged categories on the one hand, and to discovering new elements on the other [41] (pp. 36-54). After this round of interviews, the four researchers continued the first stage's procedure of independent coding followed by a consultative workshop. At this stage, the existing categories were further enriched. The relationships among these categories were clarified through the paradigm model of axial coding. On this basis, the researchers established the core category through selective coding and formed a theoretical model of the formation and development mechanism of the phenomenon of silence in the professional courses of Chinese undergraduates majoring in education. --- Stage 3 This stage focused on confirming or revising the theoretical model developed in Stage 2. Of the 107 participants recruited in March 2021, 52 were selected by the researchers for one-on-one semi-structured interviews. The remaining 55 participants were divided equally into five separate groups for focus group interviews.
The focus group interviews were conducted in empty classrooms at the university. Each focus group consisted of 2 researchers and 11 participants: one researcher was responsible for in-depth interaction with the 11 participants, and the other recorded the interview. After this round of interviews, the researchers found that no new categories or attributes emerged, and the theoretical model of the representation and formation mechanism of professional classroom silence among Chinese undergraduates majoring in education, developed in the second round, was further confirmed. This meant that the study had achieved theoretical saturation, and no new information needed to be collected [36,41]. --- Data Analysis Coding is the key to generating theory from empirical data and is thus the core strategy for data analysis in this study. The "open coding-axial coding-selective coding" strategy developed by Strauss and Corbin [36] is widely accepted. This coding strategy was chosen for this study, and the interview data were analyzed level by level with the help of NVivo 12 software (QSR International, Burlington, MA, USA). Open coding occurred mainly in the first stage of data collection and analysis and centered on decomposing, comparing, labeling, conceptualizing, and categorizing data through line-by-line analysis. For example, the label "attitude" was assigned to the statement "Silence in the classroom has become a common phenomenon in professional classrooms nowadays, and for me, this is a bad phenomenon". With the emergence of the new label "harm" from a supplementary interview, a related concept, "subjective perception", could be created on top of the two and further developed into the category "silence cognition". Through this inductive, constant-comparison approach, five major categories were established at this stage.
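The label-to-concept-to-category roll-up described above can be illustrated with a minimal sketch. The dictionary below is a toy data structure mirroring the single worked example in the text ("attitude" and "harm" forming the concept "subjective perception" under the category "silence cognition"); it is not the authors' actual NVivo 12 codebook.

```python
# Toy illustration of the open-coding roll-up: raw labels are grouped into
# concepts, and concepts into major categories. Entries mirror the example
# given in the text; this is not the study's actual codebook.
from typing import Optional

CODEBOOK = {
    "silence cognition": {
        "subjective perception": ["attitude", "harm"],
    },
}

def category_of(label: str) -> Optional[str]:
    """Return the major category a raw label rolls up into, if it is coded."""
    for category, concepts in CODEBOOK.items():
        for labels in concepts.values():
            if label in labels:
                return category
    return None

print(category_of("attitude"))  # → silence cognition
```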
The axial coding occurred mainly in the second stage of data collection and analysis and centered on the establishment of a paradigm model and the determination of the relationships among the major categories. A paradigm model is an analytical model for linking categories and sub-categories in a set of relationships at a higher level of abstraction [42]. In this study, the analysis of the relationships among the five categories clarified the status of the main category, "classroom silence". The core of the selective coding was to write a storyline, select the core category, and complete the theoretical construction of the specific representation of classroom silence and its formation mechanisms among Chinese education major undergraduates. The selective coding was initially achieved in the second stage of the research process, and the rationality and credibility of the coding were further verified in the third stage. --- Theoretical Saturation Test The grounded theory approach requires researchers to continuously collect and analyze data and to continuously supplement and improve emerging concepts and categories [38]. When newly collected data can no longer be classified in new ways, the theory has reached saturation [43]. After the first stage was completed, we organized the second stage of data collection and analysis to check whether new categories or concepts would be generated, and found that new concepts and categories were indeed generated in the second stage, so we organized a third stage. The third stage generated no new theoretical elements and further validated the previously coded logical relationships. This indicated that the previously constructed theoretical model was saturated. In addition, we fed the categories and models generated by the coding back to the professional course instructors and some of the interviewees. They confirmed that the model was consistent with reality and that no further categories were needed.
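The stopping rule described here (keep collecting until a round of interviews contributes no new categories) can be sketched as a simple loop. The batch contents below are invented placeholders using category names from the study, not its actual per-stage coding results.

```python
# Toy sketch of a theoretical-saturation check: collection stops at the first
# batch of coded interviews that adds no categories beyond those already seen.
# Batch contents are hypothetical placeholders, not the study's data.

def saturated_after(batches):
    """Return the 1-based index of the first batch adding no new category,
    or None if saturation is never reached."""
    seen = set()
    for i, batch in enumerate(batches, start=1):
        new = set(batch) - seen
        if not new and seen:
            return i
        seen |= new
    return None

rounds = [
    {"silence cognition", "silent behavior"},                # stage 1
    {"personality characteristics", "classroom experience",
     "learning adjustment"},                                 # stage 2: new categories emerge
    {"silence cognition", "classroom experience"},           # stage 3: nothing new
]
print(saturated_after(rounds))  # → 3
```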
The collection of new data was stopped because of the clarity and robustness of the extracted major categories, initial categories, and relationship descriptions. --- Rigor The credibility of research starts from the data, and the depth and scope of the data are very important: research that generates data with broad coverage, rich content, and relevant information is exceptional [44] (pp. 24-25). In this sense, the larger the sample size of a qualitative study, the more representative the analysis of the sample will be of the population. However, the qualitative research process is complex and depth-oriented, and it is not possible to select a very large sample as in quantitative research, so the sample size of typical grounded theory studies is usually under 100 [33,42]. By these standards, this study is rich and substantial in terms of both sample size and interview data while still following the principles of theoretical sampling. In addition, it is particularly important to consider how to enhance self-reflexivity throughout the study [45]. To this end, the researchers used memos and reflection. After each interview and during data analysis, the researchers wrote memos about interview elements or illuminating data based on their professional experience. At the same time, through workshops and reflection, the researchers considered the possible impact of these ideas on the interview and analysis process and re-evaluated the process of interviewing and analyzing the data. --- Results and Theory Through theoretical sampling and a rigorous three-level coding procedure, this study focused on the basic situation of silence in the professional classrooms of Chinese undergraduates majoring in education, and on the real perceptions and explanations of the students involved in that silence.
Responses to these questions formed the basis of the grounded theory of the formation and development of classroom silence within the professional course context. This section uses the three-level coding as clues to drive the analysis deeper layer by layer until the theory is generated. --- Open Coding The purpose of open coding is to develop a large number of codes to describe, name, or classify events [32] (pp. 35-39). Through fine-grained coding of the interview texts, in the open coding phase we grouped all data into 13 initial concepts: subjective awareness, objective awareness, speaking situation, class concentration, self-confidence, personality, speaking mindset, course attractiveness, peer influence, interaction convenience, educational experience, university environment, and learning motivation. After further comparative analysis, a total of five major categories were extracted: silence cognition, silent behavior, personality characteristics, classroom experience, and learning adjustment (see Table 3). Detailed information about these categories and concepts is given in Section 3.3 of this study. Table 3 (excerpt), major category "learning adjustment": learning habits, P107: "The long-time habit from elementary school to high school causes students to maintain their original listening habits and gradually develop classroom silence when they enter the university classroom"; authority awareness, P35: "Basic education establishes the image of teacher authority and knowledge authority, which leads students to be afraid to challenge teachers and textbooks"; management system (university environment), P2: "The strict entry and lenient exit of domestic universities also give students the capital to ignore the classroom"; learning atmosphere, P24: "The free and diffuse atmosphere of the university campus somewhat undermines students' motivation to perform in class"; self-development (learning motivation), P8: "Most students simply require that they do not fail the exams in each course"; learning emphasis, P2: "Because of the different levels of attention, university students are generally scattered and not motivated to study, so it makes sense that they are silent in class". --- Axial Coding Axial coding is the use of a combination of inductive and deductive reasoning to connect codes [46]. Following Strauss and Corbin's paradigm model, this paper regrouped categories and attributes from the open coding. By analyzing the causal conditions, phenomenon, context, intervening conditions, action/interaction, and consequences of the phenomenon, the major categories and sub-categories were distinguished. Figure 3 shows the results of the axial coding. As can be seen from Figure 3, "classroom silence" is the main category of the study, and the open-coding categories of silence cognition, silent behavior, personality characteristics, classroom experience, and learning adjustment are all sub-categories for further understanding and interpretation of this main category. Through the paradigm model, the story context of silence in education professional courses among Chinese undergraduates becomes clear: the phenomenon of silence in education courses (the phenomenon) emerges among students majoring in education, triggered by personality psychological characteristics such as introversion and fear (causal conditions).
In turn, under the external conditions of inconvenient interaction, peer pressure, and a "strict entry and loose exit", free and relaxing university environment (situational conditions), Chinese education major students adopt learning adjustment strategies such as lowering self-requirements or maintaining passive learning habits (action strategies), mediated by students' silence perceptions and basic education experiences (intervening conditions), ultimately leading to the continuation of silent behaviors in education professional courses (outcome). --- Selective Coding Selective coding is the process of systematically selecting categories to find the core category by exploring the deeper relationships among the main categories. This paper described the relationship of each sequence using selective coding, focusing on the storyline "reasons for classroom silence in professional courses of university students majoring in education". This section begins with a detailed description of the meaning of each category and the sub-categories it contains, citing the original data in which they are grounded where necessary. --- Silence Cognition Silence cognition was the subjective awareness and objective cognition of Chinese undergraduates majoring in education regarding the phenomenon of silence in the professional classroom. It mainly included the two categories of status quo perception and subjective cognition. Subjective awareness This referred to the participants' perception of silence in the professional classroom and contained three aspects: the participants' attitude, their judgment of the nature of classroom silence, and their analysis of its impact. Firstly, almost all participants had a negative attitude toward silence in the professional classroom; they believed that silence in the professional classroom was "a very bad phenomenon" (P13, P27, P131) and a "persistent problem" (P16, P168) in the university classroom, and expressed their "disagreement" (P15, P97) with it.
Secondly, one of the basic judgments given by the participants about the nature of classroom silence was that it was the opposite of classroom dialogue: it alienated the potentially two-way, constructive activity in which both teachers and students participate into "a one-way activity" (P159, P201) on the teacher's side, with no place for the students' subjectivity to manifest. Finally, the participants provided a professional analysis of the harm of classroom silence. Participating students pointed out that this phenomenon not only limited multiple aspects of their development but also led to "a vicious circle" (P3) in which teachers lost the enthusiasm to teach and students lost the desire to learn. Objective perception This category concerned the participants' objective description of the current situation of classroom silence. According to the interviewees' general descriptions of the frequency, extent, and scope of classroom silence in professional classes, classroom silence was frequent and serious in such courses. "Classroom silence has become a normal for university students' classroom learning." (P42) --- Silent Behavior Silent behavior was the silent, non-participatory activity exhibited by students in the classroom, as evidenced by their speaking situation and concentration on the course. Speaking situation According to the participants' descriptions, the level of positivity and initiative in classroom interaction among university students today was not high. "Contemporary university students are always poorly motivated or engaged in the university classroom" (P27), which was a deep obstacle to classroom interaction. Even when there was a break in classroom silence, it was a passive choice forced by the final assessment. "Some students' classroom participation would be significantly higher if the instructor made it clear that classroom presentations would be recorded in the overall final grade" (P24). Course concentration Students' attention was not focused on the classroom, and they appeared to "sleep" (P46), "indulge in temptations" (P9, P25, P40, P42, P78), "wander" (P43), and "do other assignments" (P12).
Modern technological products made learning activities "a little bit tedious and long" (P39) and created a great temptation for students who lacked a clear goal and a strong will. Indulging in various digital temptations had become a prominent manifestation of silence in the university classroom (P4). --- Personality Characteristics Students' personality traits were individual factors of their silence in professional classes and its perpetuation. The results of the interviews and analyses showed that students' silent orientation was closely related to their self-confidence, disposition, and speaking mentality. Self-confidence This study found that students' self-confidence had a profound effect on their silent behavior in the classroom. First, a lack of self-efficacy caused students to lose the courage to answer questions. Students were often afraid to be the first speaker, "because if they don't speak well, they will look reckless and stupid." (P30) Second, speaking competency influenced students' decisions about whether to speak in class. Many students "worry that they will not be able to give the right answer" (P1, P12, P23, P87, P167) because they "lack effective preparation" or "lack the ability to organize their thoughts and language in a short time" (P34); thus, they chose not to respond. Disposition According to the descriptions of the participating subjects, there was an important connection between individual students' disposition and their classroom performance. First, introverted students had a natural tendency to remain silent. "For students who are introverted or timid, silence may happen on any occasion." (P5, P11, P14, P20, P34, P73). Second, students who emphasized modesty and self-esteem would not express or present themselves in front of groups either. "Keeping face and remaining a low profile as well as humble attitude will reduce the frequency they speak or show in public" (P32). 
It is important to note that the determination of personality in this study was based on the self-reports of the participants and combined with the descriptions of the classroom instructors as well as the daily observations of the researchers. This is consistent with the basic requirements of grounded theory and has a basis in reality. Speaking mentality This was a factor that was relatively controllable by the individuals themselves and included fear, willingness to speak, and speaking needs. Most of the participating subjects expressed their fear of speaking in class, which could be categorized into three specific situations: fear of speaking, fear of being criticized by the teacher, and fear of being ridiculed or excluded by their classmates. Students might be afraid to speak up because they were "afraid of being blamed and criticized by the teacher for saying something wrong" or "afraid of being ridiculed by their classmates for incorrect comments" (P25). Another direct cause of classroom silence, as noted by some of the participants, was the lack of willingness to participate in class. "Some university teachers just read the content from a PowerPoint, causing students to lose interest in interacting with the teacher" (P13). Some of the participants indicated that speaking was unnecessary for classroom learning, so there was "no need for speaking" (P37). --- Classroom Experience The classroom experience was the subjective feelings of the participating subjects about classroom learning in professional courses and the contextual factors of classroom silence in professional courses, which mainly included three sub-categories of course attractiveness, peer influence, and interaction convenience. 
Course attractiveness Course attractiveness often determined the extent to which students were willing to engage in the course, and it could be reflected in the teaching situation, student-teacher relationships, personal attributes of the instructor, score incentives, and classroom interest. According to the narratives of the participants, students did not participate in classroom interactions when the content was outdated, boring, or too abstruse (n = 67), the teaching style was monotonous (n = 43), questions from the teacher were too difficult, too vague, or too broad (P18, P41), "there is rejection written all over the teacher's face" (P40), the teacher-student relationship was indifferent (P193, P275), there was a lack of score incentives (P30, P40), or they were not interested in the course (P4, P11, P83, P191). --- Peer influence The classroom was a collective, and students' performance was often influenced and even pressured by other students. "When it becomes clear that no one is interacting with the teacher, students are reluctant to be the first to communicate for fear of being alienated by the group" (P28). In another scenario of peer influence, as the number of students afraid to express their inner thoughts increased, the classroom atmosphere became more strained. Those who were usually willing to speak were more likely to remain silent due to the collective silence of the majority, which was called "contagious silence" (P15, P24, P31, P77). Interaction convenience The ease of interaction reflected how easy or convenient it was for students to interact with the instructor and was primarily influenced by seating distribution, class size, and classroom equipment. The unreasonable layout of seating space distanced the teacher-student relationship, and "the general classroom seating arrangement does not facilitate discussion between the teacher and students and overemphasizes the teacher's authoritative position" (P162).
In addition, the "excessive class size" and "poorly equipped instructional equipment" (P32) also impacted students' participation in the classroom. --- Learning Adjustment Learning adjustment reflected the psychological and behavioral transformation of students from basic education, featuring "high-intensity pressure", to higher education, featuring "freedom and ease". It was another important realistic factor related to the subjects' professional classroom silence, mainly in terms of educational experience, university environment, and learning motivation. Educational experience Educational experience referred to the schooling experience of the participating subjects before they entered university and was a historical factor that included educational inertia, study habits, and authority awareness. Most of the participants (n = 47) indicated that silence in class began to appear in primary and secondary schools and gradually intensified over the school years. The long-standing "indoctrination" teaching (n = 69), "exam-oriented education" (P2, P6, P30, P36, P73, P161), and "high-intensity pressure" (P8, P16, P211, P313) learning atmosphere had cultivated in students a learning habit of passive acceptance and fostered inertia in thinking. The image of authority established by teachers in basic education also led students to "be afraid to show their differences" (P35). Therefore, the silence of the professional classroom in higher education was a natural extension of the silence of the basic education classroom. University environment Universities were the learning environment and living space of university students, and the management system and learning atmosphere of universities were the realistic factors for the silence in professional courses of the participants. On the one hand, the entry and exit mechanism and assessment system of universities inhibited students' enthusiasm for self-expression.
The "strict entry and lenient exit" mechanism (n = 45), together with the result-oriented and quantity-oriented assessment system (n = 27) of universities, gave university students license to ignore the classroom (P8, P10, P93, P173). On the other hand, the free and undisciplined learning atmosphere of universities made it easy for students to let themselves go. Excessive free time (P24) and loose requirements from parents and teachers (P16) left students free to indulge in their own worlds and remain silent in the classroom. --- Learning motivation The majority of participants (n = 63) reported that, due to the stark differences between the two educational systems, their demands on themselves and their emphasis on learning generally decreased after they entered university. Most students adjusted their learning goals from high scores to merely not "failing the exam" (P8). With the lowering of learning goals, students' enthusiasm for classroom participation gradually disappeared, and thus classroom silence occurred. University students in a relaxed environment had a lax learning mindset (P10, P17, P38, P178) and were less committed to learning (P2, P21, P22, P41, P290), so they lost the prerequisite for classroom interaction. Seen in this light, the silence in class was "reasonable". From the analysis above, it was clear that the Chinese undergraduates majoring in education had a profound understanding of the nature and the harm of classroom silence in professional courses. At the same time, they perpetuated classroom silence, barely recognizing it themselves, under the combined influence of multiple factors. As a result, the core category of the study emerged, namely, the separation of cognition and practice in the professional classroom silence of Chinese undergraduates majoring in education. Based on this, the researcher further clarified the attributes and dimensions of this core category (see Table 4).
--- Table 4 can be summarized as follows. Core category: separation of cognition and practice. Attributes and dimensions: cognitive level (high-low); behavior state (speaking-not speaking). --- Through in-depth analysis of the core category, the study found four variants of the undergraduates' cognition and practice of classroom silence in their professional courses (see Figure 4). --- The Theoretical Model of the Formation and Development of the Phenomenon of Professional Classroom Silence for Chinese Undergraduates Majoring in Education The analysis above showed that the classroom silence of these students in professional classes featured the separation of cognition and speaking practice, which was manifested as "high cognition and low practice" (see Figure 4, quadrant IV); that is, students had professional cognition of silence's hazards and disapproved of silence but remained silent in practice.
To deeply explain the mechanism of this phenomenon, this study constructed a theoretical model of the formation and development of cognition-practice separation in the classroom silence of Chinese undergraduates majoring in education. Figure 5 visualizes the formation mechanism of "high cognition, low practice". Chinese undergraduates majoring in education had a professional judgment about classroom silence due to their professional background. For example, they were able to recognize that students' silent behavior in professional classes transformed teaching from a bilateral activity between teachers and students into a one-way activity on the teacher's side, and they pointed out that the greatest harm was the formation of a vicious circle in which "teachers do not want to teach and students do not want to learn".
Through the interviews, we found that the judgment and reflection of education major students on classroom silence reflected strong professionalism, which was manifested in their use of professional terminology, such as "unilateral activity", "alienation", and "the sequela of basic education". This is one of the characteristics that distinguish this study from other studies. Despite a high degree of professional awareness, they continued to behave as contributors to the phenomenon of silence in the professional classroom. This study found multiple causes for this phenomenon through an inductive analysis of data grounded in student interviews. The primary causes were students' personality characteristics such as lack of confidence, introversion, and a mentality that hindered speaking. Among them, speaking mentality was the most direct cause of the silent behavior. The speaking mentality was not innate; some deeper causes could be traced further, such as the individual psychological characteristics of students, classroom experiences, the university environment, and the learning adjustments that occurred after students entered university. Some of these factors were situational, some were cultural, some were historical, and some were personalized. It was the interaction and joint influence of these factors that shaped students' silent behavior in the classroom [47]. At the same time, this validated the paradigm model proposed at the phase of axial coding. In other words, the speaking mentality was a direct cause of students maintaining silence in the professional classroom; it relied on contexts such as the classroom experience and the university environment and acted through mediating conditions such as silence cognition and educational experiences.
Specifically, the fear of speaking was closely related to students' psychological characteristics of lack of self-efficacy or introversion and was also rooted in their educational experiences, especially their personal experience of being criticized for wrong answers in basic education. Some students were afraid to speak because of their lack of speaking competency, which was closely related to their lack of knowledge reserves. The root cause was the loose study habits and negative study motivation of university students. The reluctance to speak was largely due to the students' poor classroom experience, especially their low interest in course content or interactive topics. At the same time, peer pressure in the classroom experience might also weaken students' willingness to speak for fear of being "embarrassed", "perceived as strange", or "excluded". This depended in part on the lax management and inappropriate evaluation of teachers and students in universities and also affected students' adjustment to the university environment. The lack of need to speak was mainly due to the one-way input learning habits developed in the indoctrination classroom mode of basic education, which hindered the development of students' thinking and expression skills. In other words, the inappropriate learning adjustment that occurred after students entered the relaxed environment of universities was the root cause of students' lack of speaking needs. The underlying causes were both relatively independent and interrelated. On the one hand, personality and psychological characteristics, classroom experience, and learning adjustment each constituted relatively independent influencing factors on the silence maintenance of the undergraduates in their major courses.
On the other hand, the great contrast between the "high-intensity pressure" learning environment in basic education and the "free and easy" learning atmosphere in higher education generated learning adjustment behaviors, which interacted with their classroom experience through the influence of students' psychological characteristics. --- Discussion --- Summary and Discussion Grounded in interview data, this study proposed a dynamic theoretical model of the formation and development of silence in the professional classroom of Chinese undergraduates majoring in education. The main contribution of the study is to reveal the phenomenon of the separation of cognition and practice in the professional classroom silence among Chinese undergraduates majoring in education and to further investigate the mechanisms of this phenomenon. The main findings of this study will be discussed in further depth in comparison with related existing research. First, this study found that silence in the professional classroom was not only very common and serious among Chinese university students majoring in education, but that there was also a separation of cognition and practice in which students had high cognition and low practice of speaking in the professional classroom (see Figure 4, quadrant IV). On the one hand, both the researchers' informal classroom observations and the researchers' in-depth interviews with the participants suggested that classroom silence was normal for university students. Silence in the classroom was still widespread among Chinese university students, even in professional classes at domestic universities where the native language was the medium of communication, which corroborates existing research on silence in overseas classrooms and second language classrooms in local universities [11,13,[17][18][19]21,22].
On the other hand, all participants in the interviews had a negative attitude toward classroom reticence and offered a rather professional analysis of silence in the professional classroom, yet at the same time took no measures to improve it, so that silence continued to occur or even gradually increased. Zhou [24] and Hsu [14] revealed in their studies that there was a clear contradiction between Chinese university students' perceptions of oral participation in university English classes and their actual behaviors. Although students valued classroom oral participation and did not wish to be passive learners, their overall level of participation remained low. Cheng's [48] study focused on the reasons why some Asian ESL/EFL learners continued to fail to take an active role in the classroom although they might have a strong desire to speak. The research above provides corroborating evidence for this study's finding that there was a separation of cognition and practice among Chinese university students regarding classroom silence. In contrast, this study not only directly addressed Chinese university students' perceptions of classroom silence but also explored the mechanisms that generated the cognition-practice gap of classroom reticence among Chinese university students in their native language classrooms at domestic universities. Correspondingly, "low cognition, low practice" (see Figure 4, quadrant III) was observed in research on Asian students' classroom reticence. Silence was a positive strategy for Asian students to save face and show respect and courtesy [49,50] as well as maintain harmony in the social order [51]. As a result, they often chose to be reticent for the sake of these positive functions of reticence both in domestic and foreign classrooms.
One explanation for this difference in interpretation might be that the relevant studies tended to reveal the positive meaning of silence from the perspective of understanding silence or the cultural identity of the participants. This study, by contrast, was not limited to a specific theoretical perspective (which is also a requirement of grounded theory); because of the participants' professional background in education, the analysis based on the interviews with these students inevitably reflected the disciplinary characteristics of education, i.e., an understanding of the essence of teaching and learning and closer attention to the problems of education itself. Second, the reasons why students remained silent in the classroom were systematic and diverse. This study found that students' classroom silence in specific contexts was facilitated by a combination of factors. Many of these factors extended beyond the specific classroom context and were linked to the learning environment and its changes as well as the learners' learning adaptations. Although the outwardly observable manifestations of silent behaviors, such as students not taking initiative, not actively speaking, and doing things unrelated to the classroom, were similar, the reasons behind them were not the same. Moreover, the combination of these factors could make students' tendency to be silent more stable, making it difficult for students to change their actions even when they were highly aware of the dangers of classroom silence. Relevant research had also explored the causes of students' refusal to participate orally in class from multiple perspectives [5,13,14,25,[52][53][54][55][56]. For example, King [5], from the perspective of dynamic system theory, pointed out that the state of silence prevalent in Japanese second language classrooms was built on many pillars. Even within a single lesson, there might be multiple interrelated reasons behind students' silence.
These reasons included both dynamically changing external factors and internal characteristics of learners. This systematic perspective was highly compatible with the model ultimately constructed in this study, with the difference that King's [5] study used dynamic system theory as a prior research perspective, while this study generalized the findings by grounding them in the interview texts. In addition, this study did not stop at displaying the influential factors behind students' persistently silent behavior in professional classes but further subdivided the factors into direct and deep causes. The interrelationships among the causes were then clarified to build a systematic hierarchy of the causes of persistent silence in professional classes among Chinese undergraduates majoring in education. Finally, this study further confirmed some of the common factors that had been studied and also revealed some factors that had not received sufficient attention in relevant studies. For example, some of the influential factors summarized in this study, such as individual psychological factors (self-confidence, personality, and speaking mentality) and classroom contextual factors, were consistent with factors concluded from the studies of Jia et al. [57], Wang et al. [58], Ai [21], Flowerdew and Miller [10], Eddy-U [13], Hsu [14], Liu and Littlewood [12], and Sedova and Navratilova [59]. These factors constituted the general reasons for Chinese students' silence in different types and contexts of university classrooms. At the same time, this study also highlighted some new factors. This study found that the learning adjustment that occurred after students moved from basic education to higher education was both a cause of their professional classroom silence and a consolidating factor in their continued classroom silence despite their disapproval of silent behavior. These factors had not received the attention they deserved in related studies.
As mentioned earlier, basic education not only helped students develop a habit of passive acceptance in learning, but its excessive emphasis on scores and standard answers also imposed limitations on the development of students' thinking and expression skills. As one study revealed, Chinese students' classroom silence was essentially a learned behavior, as they began to "learn to be silent" in elementary school [60]. Even in conversational contexts, teachers' questions were usually not used to elicit reasoning or probe students' understanding, but rather to check students' memory [61], and unskilled questioning left students with very limited opportunities for inquiry and discussion in the classroom [57,62,63]. In the absence of any external intervention, these factors, which the participants called the "after-effects of basic education", would accompany students to the university and become an important historical source of silence in the university professional classes. Most students entered Chinese universities with a change in their learning mindset and behavior as a result of changes in the learning environment, which manifested itself in the form of lowered attention to learning, relaxed self-requirements, and indulgence in external temptations. These factors were accompanied by a lack of knowledge and a scattered state of learning in the classroom, resulting in students losing the prerequisites for interaction in the dialogue situations of professional classes due to a lack of ideas or a lack of concentration. The results of this study have important implications for countries or regions where university classroom silence also exists, such as Australia [50], the United Kingdom [58], and South Africa [64], and East Asian countries such as Japan and South Korea [12], which share a similar culture and educational context with China.
Generally speaking, improving silent behavior in the classroom requires dealing with factors such as classroom experience and learning adaptation to achieve consistency between university students' cognition and practice. First, educators should focus on fostering an encouraging, positive classroom atmosphere and arranging open-ended discussions to build friendship and teamwork among students, thereby creating a non-threatening environment. Second, educational administrators should pay attention to students' adaptation problems during the educational phase transition and adopt a combination of group guidance sessions and individual counseling to help students make a smooth transition. Third, students should be guided to continuously expand their knowledge base, and classroom participation should be activated progressively. In the beginning, students are invited to express ideas as a group rather than as individuals; then they are encouraged to express ideas as individuals; eventually, they will be able to actively share their views. --- Limitations This study focused on students majoring in education, so the results obtained might be influenced by the professional attributes of the participants. The results of this study showed that the core feature of the silence phenomenon in professional courses in Chinese universities was the separation of cognition and speaking practice among university students, characterized by "high cognition and low practice". The "high cognition" might be related to the professionalism and sensitivity of such a student group. Due to the contextual limitations of the substantive theory of "cognition and practice separation of Chinese education undergraduates' professional course silence", whether the findings of this study can apply to students from different majors should be further discussed.
Therefore, subsequent studies need to expand the professional dimension by conducting research on classroom silence in different types of professional courses to explore whether the "high cognition, low practice" (see Figure 4, quadrant IV) model of classroom silence formation and development can explain classroom silence for majors other than education in Chinese universities. In terms of sampling, future research needs to include the classroom silence and participation of junior students in various categories of majors at representative comprehensive universities to develop or revise the findings of this study. From the perspective of seeking contextual variables to increase the density and explanatory power of the theory, future research should examine whether there is a unity between cognition and practice, such as "high cognition, high practice" (see Figure 4, quadrant I) or "low cognition, low practice" (see Figure 4, quadrant III), in Chinese university students' professional course learning, as well as a different separation between cognition and practice, namely "low cognition, high practice" (see Figure 4, quadrant II). Because this study used a qualitative research method based on grounded theory, its findings lacked large-scale quantitative validation. Therefore, using quantitative research methods and statistical analysis to validate the findings of this study and further exploring the effects of other factors such as age, gender, socioeconomic status, career prospects, and academic achievement on Chinese university students' silent behavior in the classroom are also our next research directions. --- Conclusions To investigate the phenomenon of silence in the professional classroom among Chinese undergraduates, this study used a grounded theory approach to conduct in-depth interviews and coding analysis with 394 Chinese undergraduates majoring in education who had experienced silence in the professional classroom.
The study found that students maintained silence while holding negative attitudes toward the phenomenon of classroom silence in professional classes. This finding further reinforces the relevant findings of existing studies (see Section 4.1 for details). However, the outstanding contribution of this study is not only to show the existence of this phenomenon but also to reveal the formation mechanism of the cognition-practice mismatch of students' silence in the professional classroom with the help of the grounded theory approach. In terms of high cognition of classroom silence, the students' background in education gives them a certain professional advantage in analyzing educational and learning issues. This insight further influences their attitudes toward silent behavior in the professional classroom. In terms of the low practice of classroom speech, speaking mentality is the direct cause, whereas personality and psychological characteristics, classroom experiences, and learning adjustment, as deeper causes, jointly influence students' speaking impediments, thus indirectly contributing to the persistence of silent behavior in students' professional classes. In addition, this study provides some new insights while reaffirming and deepening the relevant findings of existing studies. For example, this study finds that learning adjustment is an equally important contributor to Chinese university students' classroom silence, but such factors have not received enough attention in existing research. --- Data Availability Statement: The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy. --- Institutional Review Board Statement: Ethical review and approval were waived for this study because the research does not deal with vulnerable groups or sensitive issues. Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
--- Conflicts of Interest: The authors declare no conflict of interest.
Classroom silence is a negative form of classroom performance that is particularly prominent among Chinese learners. Existing research has mainly explored the silence phenomenon among Chinese university students in two types of learning contexts, overseas university classrooms and foreign language classrooms at local universities, without focusing on Chinese undergraduates' reticence in courses mediated by the native language at domestic universities. However, the last type is the most common learning context for Chinese university students in higher education. Therefore, a sample of Chinese undergraduates majoring in education (n = 394) was recruited to determine the mechanisms of silence formation in professional classrooms. This study was based on grounded theory and in-depth interviews, and the recorded material was processed using NVivo 12. After a series of steps including open coding, axial coding, selective coding, and theoretical saturation testing, the core feature of the phenomenon of silence in professional classrooms of Chinese university students majoring in education was found to be the separation of students' cognition and speaking practice. A theoretical model of the formation and development of the phenomenon of classroom silence in professional classrooms of these undergraduates was then constructed. The study showed that these university students had professional perceptions of classroom silence and displayed strong opposition to it, but they continued to maintain silent classroom behavior under the combined influence of individual characteristics, classroom experience, and learning adjustment. Following this, implications for existing research and suggestions for future practice are discussed.
Introduction Employment is vital for financial stability in the lives of all people, and when it comes to people with disabilities it becomes even more essential to increase job quality to ensure their financial stability. In particular, bullying might become an issue for people with mild intellectual disabilities as they interact with supervisors and fellow employees in the workplace. Espelage and Swearer (2003) defined bullying as physical and verbal aggression that happens repeatedly from individuals or groups to achieve a goal. The issue of bullying in the workplace for workers with disabilities is rarely discussed in Saudi Arabian empirical studies. Therefore, the needs of people with disabilities in the workplace must be addressed to increase overall workplace quality. In 2000, the Saudi government enacted the Disability Welfare Law which supports people with disabilities in all life aspects, including employment services to find jobs with their typically developing peers (Bureau of Experts at the Council of Ministers, 2000). This law guarantees the basic rights of people with disabilities to protect and increase their quality of life. Also, the United States (US) of America passed the Americans with Disabilities Act (1990) to protect the rights of people with disabilities in all aspects of life. This law prevents discrimination against people with disabilities in all activities of life; it also ensures that people with disabilities have the same access and opportunities as their typically developing peers in areas such as employment and services (Equal Employment Opportunity Commission, 1990). These laws clearly affirm that people with disabilities need protection of their rights in everyday activities just as the rights of their typically developing peers are protected. Thus, people with disabilities need more attention not only regarding their employment rights but also their right to a safe environment in their workplaces. 
This involves a clear policy and greater awareness of workplace bullying and of how employees are protected. The current study focused on the variables of age, education level, gender, years of work experience, and the employment positions of assistant supervisor, assistant manager, area manager, and co-worker. These variables are important because age, education level, and years of work experience could help explain why some workers with intellectual disabilities have faced bullying and others have not. Education level is also important because workers with intellectual disabilities who have less education might face more bullying. It is likewise important to find out how gender affects levels of bullying in the workplace. I also included the employment position variables (i.e., assistant supervisor, assistant manager, area manager, and co-worker) to determine the extent to which these variables are linked with bullying and, therefore, with limited workplace quality for workers with intellectual disabilities. Løvvik et al. (2022) conducted a study about bullying in the workplace and found that 36% of their participants had experienced workplace bullying. Thus, it is important to study this issue among people with intellectual disabilities to potentially improve the quality of the workplace for these individuals. Vickers (2015) noted that few studies have addressed the issue of bullying for people with disabilities compared to studies of bullying among typically developing people. Bullying is a critical issue for any organization, and it is of even greater concern for people with disabilities in their workplaces, as they might be unable to defend their rights or even recognize bullying when it occurs. There are few studies on bullying against people with intellectual disabilities to help stakeholders improve workplace quality for people with disabilities and learn how to prevent workplace bullying. 
The objective of this study was to determine which groups have had more experience with bullying, based on demographic variables, to assist stakeholders in improving workplace quality by decreasing bullying incidents that might occur against people with intellectual disabilities. This study's hypothesis was that there is an association between the variables identified in this study and workplace bullying of people with intellectual disabilities. This study is essential for the field of disabilities as we strive to support people with disabilities by preventing workplace bullying. --- Workplace Bullying in Related Fields Bullying is an issue that people with disabilities face in the workplace. Jones et al. (2018) studied workplace discrimination and harassment among workers with disabilities and found that 18.4% of the sample reported harassment in their jobs in the past 2 years; workers with disabilities faced higher levels of harassment than workers without disabilities. Also, women with disabilities experienced higher levels of workplace harassment than men, and younger workers faced lower levels of harassment than older workers. Jones and his colleagues found that about 8% of their sample with disabilities had faced discrimination in the workplace in the past 2 years. Workers with disabilities also faced higher levels of discrimination than workers without disabilities; workplace discrimination levels were similar for men and women, and discrimination levels were lower for younger workers than for older ones. These results imply that people with disabilities face more workplace discrimination than their typically developing peers because of their disabilities. Another study by Gardner et al. 
(2016) indicated that 15% of their sample had experienced bullying in New Zealand workplaces and 2.8% had faced cyberbullying at work; women experienced more workplace bullying than men, and women reported worse physical health, more emotional strain, and more destructive leadership and team conflict in the workplace. This study showed that people faced bullying of different types and at different levels across workplaces, that women might experience more workplace bullying than men, and that workplace bullying might occur more frequently for workers with disabilities than for their typically developing peers. Organizational anti-bullying strategies were also rated as less effective. Gardner et al.'s participants self-identified workplace bullying versus cyberbullying: 16.79% indicated they had experienced bullying in either form; 1.7% of the sample said they faced bullying several times a week or even daily; 31% experienced bullying from supervisors, employers, or managers; 48% from their peers; 17% from subordinates; and 17% from clients. Workplace bullying is an issue in any workplace. Etienne (2014) stated that 48% of nurse participants experienced bullying in their workplaces, and the bullying acts they faced most often involved being ignored or excluded. In another study discussing bullying in Saudi Arabia, Basfr et al. (2019) noted that 90.3% of nurses in Saudi Arabia experienced bullying in their workplaces and 57.7% faced physical and verbal abuse; the majority attributed the resulting stress or anxiety to a lack of support in their workplaces. 
Also, Islam and Chaudhary (2022) found that bullying in the workplace was related to emotional exhaustion and to workers' knowledge hiding in the health sector; they also found that friendship in the workplace was key to reducing bullying and knowledge hiding. Workplaces should attempt to prevent bullying by improving their organizational systems and training their staff members (Etienne, 2014; Gardner et al., 2016). Similarly, Ekici and Beder (2014) studied workplace bullying among nurses and found that 82% of nurses and 74% of physicians had faced workplace bullying at least once in the past year, and 12% of nurses and 11% of physicians had experienced intentional bullying at least once in the past year; the most common type of bullying among them was aggression related to their professional positions and personalities. Islam and his colleagues (2021) studied the impact of workplace bullying among health care workers and found that its negative impact caused burnout in nurses, and that passive avoidant leadership was one of the variables that reinforced workplace bullying and the resulting burnout. These findings show that bullying occurs in many workplaces at different levels. Based on these findings, we need more interventions to establish healthier and more stable work environments for people with disabilities. In addition, Sveinsdóttir et al. (2018) indicated that 66% of their participants experienced bullying, 39% faced violence, and 53% cited psychological distress as a common health issue. Women had more mental and physical health issues than men. Bullying is a serious issue everywhere, especially in workplaces, because it may cause serious health issues over time. 
Lindsay and McPherson (2012) studied bullying and exclusion among students with disabilities, and their results indicated that teachers' attitudes influenced social exclusion and that the social exclusion of, and bullying toward, students with disabilities was both verbal and physical. Marraccini et al. (2015) also studied bullying at the college level and found that 51% of their participants had witnessed other students being bullied by staff members at least once; 18% of their sample had experienced bullying by staff members at least once; 44% had experienced bullying in elementary, middle, or high school; most of the sample (64%) had witnessed bullying by their peers in college at least once; and 33% had faced bullying by their peers in college. Marraccini et al. also reported that 47% of their female participants and 34% of their male participants indicated that they had been bullied by teachers before entering college; 21% of the female students and 9% of the male students had experienced bullying by staff members at least once; 75% of students with disabilities indicated that they had faced bullying by teachers before entering college, compared to 42% of students without disabilities; and 50% of students with disabilities were bullied by staff members in college, compared to 16% of students without disabilities. The studies discussed here clearly show that bullying occurred verbally and physically, that females were more likely to experience bullying than males, and that students with disabilities faced more bullying than their typically developing peers did. Bullying might occur more often against individuals with disabilities because of a lack of awareness and effective policy in the workplace. Robert (2018), by contrast, found that bullying in the workplace had no impact on job stress and job performance. 
Nevertheless, bullying might cause serious physical and health issues among workers who are affected by it (Robert, 2018; Sveinsdóttir et al., 2018). Khubchandani and Price (2015) studied harassment and morbidity in the workplace among U.S. adults, and their results indicated that 8.1% of participants had experienced harassment in the workplace in the past year; women reported higher levels of harassment, especially women who were divorced or separated compared to their nondivorced or nonseparated peers. Khubchandani and Price also reported that workers who worked for local government, worked night shifts, or were paid by the hour were more likely to face harassment in the workplace than other workers. In addition, individuals who reported harassment had more health issues, slept less, had more asthma attacks, and were more likely to smoke every day. Fattori and his colleagues (2015) also studied workplace bullying, and their results showed that 16.3% of participants were victims of bullying in the workplace and that older participants had more experience of bullying. Also, 30% of their participants mentioned that they had experienced depression after the bullying occurred, and there was a strong relation between sick leave and workplace bullying. Fattori et al. also indicated that worse health-related quality of life was linked with workplace bullying and that those who already had medical conditions were more adversely affected by it. Workers who have experienced bullying in the workplace may develop health issues and other medical conditions (Fattori et al., 2015; Khubchandani & Price, 2015; Robert, 2018; Sveinsdóttir et al., 2018). 
As noted, previous studies have indicated that bullying may cause physical or health issues or depression, which underscores the importance of the current study: without studying bullying, which might cause other health issues, we cannot develop interventions and policies that could prevent workplace bullying and thus enhance the work environment. Ahmad and his colleagues (2023) offered a new perspective on how to limit bullying in the workplace, finding that perceived servant leadership helps reduce the number of workers experiencing bullying by supporting them with compassion. Chaudhary and Islam (2022) studied how despotic leadership affects workers' psychological suffering, with bullying as a mediating mechanism; they found that despotic leadership (with bullying behavior) might increase workers' psychological suffering. These studies emphasize the notion that some leadership styles might contribute to bullying, as some leaders do not take on their role of reducing bullying or of establishing effective policies to manage and prevent bullying behavior in their workplaces. --- Workplace Bullying Toward People With Disabilities People with disabilities face bullying in the workplace, which often causes them to quit their jobs. Chiu and Chan (2007) found discriminatory behavior against people with mental illness in the health care, employment, and family domains. Thus, discrimination and bullying may occur, intentionally or unintentionally, in any workplace toward people with health issues or disabilities. Gunderson and Lee (2016) found that people with disabilities were paid 10% less than their peer workers without disabilities. 
These results imply that even if physical or verbal bullying does not occur in a workplace, it might be perpetrated by administrators who use policies and other authority to withhold the pay or workplace rights of employees with disabilities that workers without disabilities receive. Mann and Wittenburg (2015) discussed efforts to improve the employability and wages of people with disabilities. Thus, workplaces need to be inclusive of people with disabilities, and decisionmakers need to be aware of bullying that might occur in the workplace and find ways to improve the workplace for all workers with disabilities. Mitra and Kruse (2016) found that people with disabilities of both genders in the US were 75% to 89% more likely to be replaced than people without disabilities, and they were more likely to lose their jobs involuntarily. These findings show how interventions can be implemented through laws to enhance and improve the workplace for employees with disabilities, enabling them to exercise their rights as others without disabilities exercise theirs. Fevre et al. (2013) found that workers with disabilities and other long-term health conditions were the most likely to experience and suffer from ill treatment in the workplace. Their results also showed that workers with disabilities often cited their disability when explaining why they believed the ill treatment happened. This means that workplace bullying might target people with disabilities and other health issues because these individuals may feel too vulnerable to speak up about their rights as workers. They may be afraid of losing their jobs if they speak out, and they need to work to live. Also, their managers might exert more control over them, seeing them as workers with disabilities who are weak and lack power. 
In another study, Maroto and Pettinicchio (2014) found that people with disabilities faced occupational segregation that limited their earning capacity; workers with disabilities also tended to work in jobs that required fewer skills and offered fewer chances to access the education and experience needed to improve their skills. In other words, these people with disabilities were neglected because those in charge did not give them the chance, through training and education, to build the skills needed for other jobs suited to their abilities, as their typically developing peers could. Also, Snyder et al. (2010) stated that workers with disabilities experienced higher levels of both overt and subtle discrimination, which was associated with lower job satisfaction. In summary, employees with disabilities experience more workplace bullying, through injustice and ill treatment, than their typically developing peers (Fevre et al., 2013; Mitra & Kruse, 2016; Snyder et al., 2010). --- Statement of the Problem I have witnessed bullying during my work in the field as a researcher working with people with disabilities to rehabilitate them for work in keeping with their abilities and needs. This experience prompted me to conduct research about the bullying issue in my country. Few studies have addressed the issue of bullying toward people with mild intellectual disabilities. Bullying is a pervasive issue that workers with disabilities face, and its effects might cause serious health issues (Fattori et al., 2015; Khubchandani & Price, 2015; Robert, 2018; Sveinsdóttir et al., 2018). This study addresses this issue and supports decisionmakers in increasing the quality of workplaces for individuals with disabilities, as they need more attention not only regarding their employment rights but also their right to safe work environments and to clear, effective policies that build awareness of workplace bullying and of how to be protected from it. 
There are not many existing studies about bullying against people with intellectual disabilities that can help stakeholders improve the quality of workplaces for people with disabilities and learn how to prevent workplace bullying. This study is also important for the workplaces that employ people with disabilities, as employers need to be aware that bullying might be occurring, intentionally or unintentionally, toward people with disabilities. This study's results may also prompt decisionmakers to improve workplaces for people with disabilities by preventing bullying, and may bring attention to the phenomenon of bullying so that people with disabilities can be protected from it. www.richtmann.org Vol 13 No 5 September 2023 --- Method --- Sample and Procedure Employees with mild intellectual disabilities in Saudi Arabia comprised this study's sample and were believed to be appropriate survey respondents to share their opinions about workplace bullying. Several factors informed my selection of this group as respondents to the Workplace Psychologically Violent Behaviors (WPVB; Dilek & Aytolan, 2008) instrument, sharing their views and opinions to contribute to this study (Creswell, 2012). This population of employees with disabilities could help policymakers at the government level, company level, or other workplaces to improve workplace conditions for all employees with disabilities. The ethics committee at Qassim University approved the study (number 22-09-01). I based sample selection on the eligibility criteria of having a mild intellectual disability and at least 1 year of work experience, and I used random sampling to give all eligible workers an equal chance of selection. I obtained the email addresses of companies that had workers with mild intellectual disabilities and then sent the survey link through the companies' email. 
The employers then sent my invitation to participate in this study to about 350 workers with mild intellectual disabilities. The invitation included an informed consent letter with an explanation of participants' rights and an assurance that participants would remain anonymous. Participants had 2 weeks to complete the survey. The response rate was roughly 40%. --- Measures This study was designed to determine the relation between the independent variables (IVs) and the dependent variable using a quantitative research approach with multiple regression (Mertler & Reinhart, 2017). I also used descriptive statistics for each dimension of the WPVB, collecting means, standard deviations, skewness, and kurtosis to assess the normality of the distribution. The IVs were age, education level, gender, and years of work experience. The dependent variable was workplace bullying toward people with mild intellectual disabilities in Saudi Arabia across four dimensions (i.e., attack on personality, attack on professional status, isolation, and direct negative behaviors), used to determine which of the IVs might predict bullying against workers with mild intellectual disabilities in their workplaces. I also used the work positions of assistant supervisor, assistant manager, area manager, and co-worker as independent variables, with the WPVB dimensions as dependent variables, to determine which of these positions might predict bullying. I conducted multiple regression analysis, dummy coding the categorical variables, to identify which set of two or more independent variables best predicted the dependent variables. 
Specifically, this study identifies factors associated with workplace bullying toward people with mild intellectual disabilities in Saudi Arabia to address the research question that guided this study: What work factors are associated with workplace bullying toward people with mild intellectual disabilities in Saudi Arabia? With permission from the authors, I used the WPVB (Dilek & Aytolan, 2008) to collect data from the participating workers with disabilities. The first part of the data collection tool was a researcher-developed demographic questionnaire designed to gather information about the sample on the variables of age, education level, gender, years of work experience, and the positions of assistant supervisor, assistant manager, area manager, and co-worker. The second part was the WPVB instrument, used to collect data from the participants with mild intellectual disabilities who worked currently or had worked previously. The WPVB includes 33 items in four categories: attack on personality (9 items), attack on professional status (9 items), isolation (11 items), and direct negative behaviors (4 items). The WPVB uses a six-point rating scale: I have never faced this, I have faced this once, I face this sometimes, I have faced this several times, I frequently face this, I constantly face this. Dilek and Aytolan (2008) reported that the WPVB has high reliability, with a Cronbach's alpha internal consistency value of 0.93. The Cronbach's alpha of the Arabic version of the WPVB (.97) also indicated high reliability. --- Pilot Study The pilot for this study was conducted in two phases. First, I translated the WPVB survey from English to Arabic after obtaining permission from the original authors, and then asked a colleague in the field of special education who holds a PhD to backtranslate the survey from Arabic to English to ensure accuracy. 
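The Cronbach's alpha values reported above (0.93 for the original WPVB, .97 for the Arabic version) quantify internal consistency across scale items. A minimal sketch of the standard formula (Python, illustrative toy data only, not the study's responses):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                            # number of items
    item_vars = items.var(axis=0, ddof=1)         # per-item variance
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Perfectly consistent items (each respondent gives the same rating to
# every item) yield alpha = 1.0.
scores = np.array([[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]])
print(cronbach_alpha(scores))  # 1.0
```

Values near 1 indicate that the items move together, which is the sense in which the WPVB's reported .93 and .97 signal high reliability.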
Next, I asked 10 faculty members to review the survey and provide comments and feedback to ensure the survey was ready for collecting data from the target sample. --- Results This part of the study report presents descriptive statistics as well as multiple regression analysis results relevant to workplace bullying toward workers with mild intellectual disabilities in Saudi Arabia. --- Demographic Characteristics of the Sample The demographic characteristics of this study's sample include gender, education level, years of work experience, and age. See Table 1. --- Multiple Linear Regression Results This study used multiple linear regression analysis to predict bullying toward workers with mild intellectual disabilities in Saudi Arabian workplaces from the IVs of age, education level, gender, and years of work experience. Results of the linear regression analysis include a model summary of coefficients for each independent variable and each of the four dimensions of the WPVB (i.e., attack on personality, attack on professional status, isolation, and direct negative behaviors), as presented in Tables 3-6. Additionally, with the WPVB dimensions as the dependent variables, I used the work positions of assistant supervisor, assistant manager, area manager, and co-worker to determine which of these variables might predict bullying against workers with mild intellectual disabilities in their workplaces. Results of this analysis are presented in Table 7. The regression analyses show that the model for the isolation dimension explained 26% of the variance (F = 3.615; p < 0.001), and the best predictors were High school (β = -15.785, t = 6.758; p < .05), Diploma (β = -12.150, t = 5.978; p < .05), Years of work experience from 11 to 15 (β = 15.907, t = -6.659; p < .05), Age from 18 to 25 (β = 17.379, t = -7.475; p < .05), and Age from 26 to 33 (β = 22.123, t = -7.594; p < .05). 
The coefficient of the High school variable was -15.7, indicating that workers with mild intellectual disabilities in Saudi Arabia who held high school diplomas scored lower on the isolation dimension of workplace bullying by 15.7 points. Also, the coefficient of the Diploma variable was -12.1, indicating that workers who held diplomas scored lower on the isolation dimension by 12.1 points. The coefficients of the Years of work experience from 11 to 15, Age from 18 to 25, and Age from 26 to 33 variables were 15.9, 17.3, and 22.1, respectively, indicating that workers with 11-15 years of work experience or aged 18-25 or 26-33 scored higher on the isolation dimension by 15.9, 17.3, and 22.1 points, respectively. See Table 3 for a summary of the model. The regression analyses also show that the model for the attack on professional status dimension explained 29% of the variance (F = 4.034; p < 0.001), and the best predictors were High school (β = -17.222, t = 5.490; p < .05), 11-15 years of work experience (β = 12.156, t = 5.388; p < .05), Age from 26 to 33 (β = 12.746, t = 6.144; p < .05), and Age from 34 to 43 (β = 9.900, t = 4.904; p < .05). The coefficient of the High school variable was -17.2, indicating that workers who had completed high school scored lower on the attack on professional status dimension by 17.2 points. 
Also, the coefficients of the 11-15 years of work experience, Age from 26 to 33, and Age from 34 to 43 variables were 12.1, 12.7, and 9.9, respectively, indicating that workers with 11-15 years of work experience, aged 26 to 33, or aged 34 to 43 scored higher on the attack on professional status dimension by 12.1, 12.7, and 9.9 points, respectively. See Table 4 for a summary of the model. The regression analyses show that the model for the attack on personality dimension explained 26% of the variance (F = 3.552; p < 0.001), and the best predictors were High school (β = -11.360, t = 5.087; p < .05) and 11-15 years of work experience (β = 10.740, t = 4.992; p < .05). The coefficient of the High school variable was -11.3, indicating that workers who had completed high school scored lower on the attack on personality dimension by 11.3 points. Also, workers with 11-15 years of work experience scored higher on the attack on personality dimension by 10.7 points. See Table 5 for a summary of the model. The model for the direct negative behaviors dimension explained 7.1% of the variance (F = 1.545; p = 0.146), and the best predictor was Age from 26 to 33 (β = 6.413, t = 3.143; p < .05). The coefficient of this variable was 6.4, indicating that workers aged 26-33 scored higher on the direct negative behaviors dimension by 6.4 points. See Table 6 for a summary of the model. 
The regression analyses show that the work-position model for the overall WPVB explained 47% of the variance (F = 16.765; p < 0.001), and the best predictors of workplace bullying were the positions of Assistant manager (β = 23.224, t = 9.232; p < .05), Area manager (β = 35.568, t = 10.795; p < .05), and Co-worker (β = 34.179, t = 8.487; p < .05). The coefficients of the Assistant manager, Area manager, and Co-worker variables were 23.2, 35.5, and 34.1, respectively, indicating that workplace bullying toward workers with mild intellectual disabilities in Saudi Arabia occurred more frequently from assistant managers, area managers, and co-workers, by 23.2, 35.5, and 34.1 points, respectively. See Table 7 for a summary of the model. --- Discussion and Interpretation This study focused on predicting the relation between a group of independent variables and bullying toward workers with mild intellectual disabilities in Saudi Arabian workplaces. Results showed an association between high school completion and three dimensions of the WPVB (isolation, attack on professional status, and attack on personality), meaning participants who had completed high school reported lower levels of bullying on those three dimensions. This might be because these workers, who had only completed high school and had mild intellectual disabilities, might not have known the meaning of bullying or been able to recognize its occurrence in their workplaces, because their mild intellectual disabilities may have limited their ability to recognize workplace bullying; thus they reported low levels of bullying experience. This finding opposes that of Marraccini et al. (2015), who found that 51% of their participating students had witnessed other students being bullied by staff members, and that 18% of their sample had experienced bullying by staff members at least once. 
Also, the current study revealed that holding a diploma was associated only with the isolation dimension, with lower levels of bullying. This result differs from Fattori and colleagues' (2015) finding that 16.3% of their participants were victims of workplace bullying and that older participants had more experience of bullying. Thus, the current study's finding of low bullying levels among workers with intellectual disabilities who had finished high school or held diplomas might be due to their unwillingness to admit to experiencing bullying, so as not to negatively affect their work, and to fear that their managers might fire them. Islam and Chaudhary (2022) found that workplace bullying was related to emotional exhaustion and knowledge hiding among workers in the health sector. Thus, workplace bullying occurs, and workers with intellectual disabilities might not be aware of the resulting emotional exhaustion and knowledge hiding and might not report the workplace bullying that is happening to them. The current study found that the variable of 11 to 15 years of work experience was associated with high levels of bullying on three dimensions: isolation, attack on professional status, and attack on personality. This result is consistent with Sveinsdóttir et al. (2018), who indicated that 66% of their participants experienced bullying, 39% faced violence, and 53% cited psychological distress as a common health issue. Etienne's (2014) results were also similar to the current findings, as Etienne reported that 48% of nurse participants experienced bullying in their workplaces and that the bullying acts they faced most often involved being ignored or excluded. 
In the current study, workers with mild intellectual disabilities experienced bullying in the workplace from other workers on the dimensions of isolation, attack on professional status, and attack on personality. This finding is supported by Maroto and Pettinicchio (2014), who found that people with disabilities faced occupational segregation that limited their earning capacity; workers with disabilities also worked in jobs that required fewer skills and afforded fewer chances to access the education and experience needed to improve their skills. In the current study, workers with mild intellectual disabilities who had more than 10 years of work experience faced higher levels of bullying in the workplace, perhaps because they had become more familiar with bullying and could recognize when it happened to them. Another study supporting the current results was conducted by Løvvik et al. (2022), who found that 36% of their participants experienced bullying in their workplaces. The current study found that higher levels of workplace bullying toward workers with mild intellectual disabilities in Saudi Arabia were related to ages between 18 and 43 years across three dimensions: isolation, attack on professional status, and direct negative behaviors. This finding partly aligns with that of Jones and colleagues (2018), who found that younger workers experienced lower rates of discrimination than older workers. Thus, it might be that workers with mild intellectual disabilities experience higher levels or different types of bullying as they get older and become more aware that bullying might occur against them in the workplace. Another study (Fattori et al., 2015) supported the current results, finding that 16.3% of participants were victims of workplace bullying and that older participants had more experience of bullying. Thus, as workers get older, they may become more aware that bullying can happen in their workplaces. 
Also, Islam and colleagues (2021) found that workplace bullying had a negative impact on nurses, causing burnout in their workplaces. Likewise, the current study's workers with mild intellectual disabilities might experience the negative impact of bullying in their workplaces, with potential resulting burnout. The current study also found that bullying toward workers with mild intellectual disabilities in the workplace is associated with various work positions (i.e., assistant supervisor, assistant manager, area manager, and co-worker). Three positions (i.e., assistant manager, area manager, and co-worker) were related to higher levels of workplace bullying against workers with mild intellectual disabilities. This result is supported by Gardner et al. (2016), who found that 31% of their participants experienced bullying by their supervisors, employers, or managers; 48% by their typically developing peers; and 17% by subordinates. Thus, workers with mild intellectual disabilities may face workplace bullying by their managers and typically developing peer workers; this type of bullying might target individuals with disabilities because the bullies assume they cannot defend themselves. This finding is supported by Snyder et al. (2010), who stated that workers with disabilities experienced higher levels of discrimination, both overt and subtle. Chaudhary and Islam (2022) found that despotic leadership might contribute to workers' psychological suffering through bullying behavior. Thus, leadership style might negatively affect workers when they face bullying behavior without their managers preventing or reducing it in the workplace.
The current study revealed that workers with mild intellectual disabilities faced bullying by their managers, which may imply that leadership style plays a major role in increasing or decreasing workplace bullying. --- Implications and Recommendations Based on this study's results, I recommend involving more disability specialists when hiring people with disabilities in any workplace, in order to determine appropriate jobs for them based on their needs and skills. Moreover, I recommend that each workplace employing workers with disabilities have a clear policy on bullying and explicit procedures for reporting it. I also recommend more workshops and training sessions about disabilities as an intervention for managers and co-workers, to teach them how to support their employees and peers with disabilities. Also, people with disabilities should attend workshops and training sessions with their families, as an effective intervention in their first week of work, so that they know their rights in the workplace, understand the meaning and types of bullying behaviors that might occur, and are aware of the employer's policy on bullying and the procedures for reporting it. Lastly, I recommend that each workplace encourage its human resources department to improve its policy on bullying prevention, and to hire staff members with degrees in the field of disabilities to assist employers in supporting people with disabilities in all aspects of the workplace, enhancing and improving the workplace environment. --- Conclusion Employees with disabilities face workplace bullying, and this study examined the relations among factors (i.e., age, education level, gender, years of work experience, and the positions of assistant supervisor, assistant manager, area manager, and co-worker) that might predict bullying on specific dimensions of the WPVB tool.
This study found an association between workers with mild intellectual disabilities who had completed high school and lower levels of workplace bullying across three dimensions: isolation, attack on professional status, and attack on personality; workers who held diplomas were associated with lower levels of workplace bullying only on the isolation dimension. The study found that 11 to 15 years of work experience was associated with higher levels of workplace bullying toward people with mild intellectual disabilities across three dimensions: isolation, attack on professional status, and attack on personality. Age between 18 and 43 was associated with higher levels of workplace bullying toward people with mild intellectual disabilities across three dimensions: isolation, attack on professional status, and direct negative behaviors. The study also found that three work positions (i.e., assistant manager, area manager, and co-worker) were related to higher levels of workplace bullying against workers with mild intellectual disabilities. This finding is supported by several studies concluding that workers with disabilities face more bullying and unjust treatment compared to their typically developing peers (Fevre et al., 2013; Maroto & Pettinicchio, 2014; Mitra & Kruse, 2016; Snyder et al., 2010). A limitation of this study was that some participants had problems understanding some questions because of their mild intellectual disabilities and therefore required help from family members, who explained the questions so that they could accurately answer based on their experience. Future research might consider other variables that could also influence bullying behaviors: the size of the company, cultural background, training and mentoring assistance in the workplace, family support, cyberbullying, and types of co-workers, such as local and international workers.
The purpose of this study was to examine the association between workplace bullying toward people with mild intellectual disabilities in Saudi Arabia and demographic factors (i.e., age, education level, gender, years of work experience, and the employment positions of assistant supervisor, assistant manager, area manager, and co-worker). This study utilized the Workplace Psychologically Violent Behaviors (WPVB) tool and multiple regression analysis. Results showed a significant relation between completing high school and lower bullying levels across three dimensions: isolation, attack on professional status, and attack on personality. Also, the study found that 11 to 15 years of work experience was associated with high levels of bullying across three dimensions: isolation, attack on professional status, and attack on personality. Age (18 to 43) was associated with a high level of bullying across three dimensions: isolation, attack on professional status, and direct negative behaviors in the workplace toward people with intellectual disabilities. In addition, findings showed that three employment positions (i.e., assistant manager, area manager, and co-worker) were related to bullying against workers with intellectual disabilities. These findings prompt the recommendation that human resources personnel pay attention to work policies on bullying prevention, and that every workplace hire specialists to assist companies in supporting workers with disabilities.
Understanding the pattern of human activities has received growing attention due to its important practical applications, from traffic management to epidemic control [1-4]. Several mechanisms of individual activity have been discovered from statistics on huge amounts of human-behavior data, such as queueing theory and adaptive interest [5-7]. However, the mechanisms behind human activities with interacting individuals are far from well understood because of complex population structures, which can be described by complex networks [8-10]. Apart from the statistical characteristics of human dynamics in space and time, abundant research has focused on understanding human activities in social networks, such as making friends, where people in the same class with similar features are more likely to become friends. There are also common phenomena of seeking social partners across two classes in bipartite populations 11,12, such as mate choice between men and women and commercial trading between buyers and sellers. These seeking processes, which may underlie the building of many social relationships, can be described by a matching model 13. Individuals are generally divided into two classes according to their natural status; they observe the features of others belonging to the other class, and finally decide whether to select an individual as a social partner. Although the characters of individuals are too complex to be quantitatively described in bidirectional selection systems, personal quality and economic status can be viewed as the main characters of individuals 14,15. Zhang and his collaborators solved the bipartite matching problem in the framework of economic markets, finding that partial information and bounded rationality contribute to satisfied and stable matches 16,17. Besides the characters of individuals, matching processes are also affected by the structure of social networks [18-20].
Social networks exhibit some common characteristics, such as small-world phenomena and scale-free properties with power-law degree distributions 21,22. Questions naturally arise as to how the properties of social networks affect matching processes, and what kind of property improves the matching performance of networks. To answer these questions, a bipartite network is reconstructed from the original network [23-25], where only connected nodes satisfying the conditions for successful matching, and their links, are reserved. This allows us to investigate bidirectional matching processes with mathematical analysis and computer simulation. In this paper, we study the matching problem of two classes in the framework of complex networks. An analytical solution for the successful matching rate is presented, which is consistent with our simulation results of matching processes on social networks. It is observed that the properties of networks greatly impact their matching performance, and that the small-world effect improves the successful matching rate more than scale-free properties do. In addition, the small-world effect on matching performance was quantitatively investigated for different rewiring rates in the small-world network. --- Results For a given network, M nodes belong to class A and N nodes belong to class B (see Methods). After the characteristic state of each node is determined, only those neighbors of a node that can successfully match with it are valuable to the node. Thus the original network is reconstructed as a bipartite network in which only connected nodes satisfying the conditions for successful matching, and their links, are reserved, as shown in the top panels of Fig. 1(a). In this way, we get a new bipartite network with m (m ≤ M) nodes belonging to class A and n (n ≤ N) nodes belonging to class B, where any two connected nodes satisfy the conditions for successful matching.
In the new bipartite network, k_i is the degree of the ith node in class A, and k_h denotes the degree of the hth neighbor of node i in class B. To obtain the probability that node i is successfully matched, we first calculate the probability that it is not. Because the degree of the hth node in class B is k_h, the probability that the hth node successfully matches with node i is 1/k_h, and the probability that it does not is 1 − 1/k_h. So the probability that the ith node in class A fails to match with all of its neighbors is

∏_{h=1}^{k_i} (1 − 1/k_h),   (1)

and the probability that the ith node in class A is successfully matched is

1 − ∏_{h=1}^{k_i} (1 − 1/k_h).   (2)

If k_i denotes the degree of node i in class B and k_h the degree of the hth neighbor of node i in class A, the equations above also describe nodes in class B. As shown in Fig. 1(b), the probability that a node cannot match with any of its neighbors is 0.0 for node A1 of class A, 0.5 for node B1 of class B, and 0.5 for node B2 of class B (the three nodes are shown in the bipartite network in the top panels of Fig. 1). If there are m nodes in class A, the expected total number of successfully matched nodes in class A is

E(m, n, μ)_A = m − Σ_{i=1}^{m} ∏_{h=1}^{k_i} (1 − 1/k_h),   (3)

where μ denotes the number of types of node characters. Similarly, the expected total number of successfully matched nodes in class B is

E(m, n, μ)_B = n − Σ_{j=1}^{n} ∏_{h=1}^{k_j} (1 − 1/k_h).   (4)

Because the matching between the two classes is one-to-one, E(m, n, μ)_A = E(m, n, μ)_B.
Therefore, for a given network with M, N and μ, the expected total number of successfully matched nodes is

E(M, N, μ) = m − Σ_{i=1}^{m} ∏_{h=1}^{k_i} (1 − 1/k_h) = n − Σ_{j=1}^{n} ∏_{h=1}^{k_j} (1 − 1/k_h).   (5)

According to (5),

2E(M, N, μ) = [m − Σ_{i=1}^{m} ∏_{h=1}^{k_i} (1 − 1/k_h)] + [n − Σ_{j=1}^{n} ∏_{h=1}^{k_j} (1 − 1/k_h)].   (6)

Further, from equation (6) we define the average successful matching rate of a network as

L = 2E(M, N, μ) / (M + N),   (7)

where M + N is the total population of the two classes in the network, and L ranges from 0 to 1. Therefore L quantifies the matching performance of a network, with large values of L reflecting high matching performance. To investigate the effects of population structure on the matching rates of networks, analytical results are obtained from the above equations, as shown in the top panels of Fig. 2. Without loss of generality, μ is fixed at 2 in the analysis, and four types of networks are used to model differently structured populations (see Methods). One finds that there exists an optimal value of α (α_o ≈ 0.5) that maximises matching performance, revealing that a balanced population between classes A and B plays an important role in the matching performance of all four networks. In addition, the average degree K of the network strongly affects L near α_o, and a larger average degree induces better matching performance. To confirm the analytical results, we performed simulations of the matching process on regular networks, small-world networks, random networks and scale-free networks respectively, as shown in the bottom panels of Fig. 2.
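Equations (2) and (7) can be evaluated directly on any reconstructed bipartite network. The following minimal Python sketch (function names and the toy edge list are ours; the example mirrors the A1-B1/B2 configuration of Fig. 1(b)) computes the per-node success probability and the rate L:

```python
from math import prod

def success_prob(neighbor_degrees):
    # Eq. (2): P(node matched) = 1 - prod_h (1 - 1/k_h), where k_h runs over
    # the degrees of the node's neighbors in the reconstructed bipartite net.
    return 1.0 - prod(1.0 - 1.0 / k for k in neighbor_degrees)

def matching_rate(edges, total_population):
    # Eq. (7): L = 2 * E(M, N, mu) / (M + N), i.e. the expected number of
    # matched nodes divided by the total population of the original network.
    # `edges` lists pairs of the reconstructed bipartite network.
    neighbors = {}
    for u, v in edges:
        neighbors.setdefault(u, []).append(v)
        neighbors.setdefault(v, []).append(u)
    degree = {node: len(nbrs) for node, nbrs in neighbors.items()}
    expected_matched = sum(
        success_prob([degree[v] for v in nbrs]) for nbrs in neighbors.values()
    )
    return expected_matched / total_population

# Fig. 1(b) toy example: A1 linked to B1 and B2.
# success_prob is 1.0 for A1 and 0.5 for each of B1 and B2,
# so the expected number of matched nodes is 2.0.
L = matching_rate([("A1", "B1"), ("A1", "B2")], total_population=3)
print(L)  # 2/3
```

Note that the two class-wise expectations are computed in one pass here, since equation (6) sums the same per-node success probabilities over both classes.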
In the simulation, first of all, nodes are assigned to class A with probability α (0 ≤ α ≤ 1) and to class B with probability 1 − α. Then the state of each node is randomly assigned a character from the set V following the uniform distribution, and μ is also fixed at 2 in the simulation. For example, the characters of node i are labeled (c_Ai, s_Ai), where c_Ai is node i's own character and s_Ai represents the character node i attempts to select. Our simulation results are consistent with the analysis, and both show peaks of L appearing at around α = 0.5, where the peak of L on the scale-free network is the lowest. We therefore focus on the matching performance at different average degrees K at α = 0.5. Fig. 3 indicates that the successful matching rate increases with the average degree of the network, and that the L value of the scale-free network is the minimum among the networks at the same average degree K. On the other hand, the successful matching rate L of small-world networks is the maximum, reflecting that small-world structures are more conducive to the matching process than scale-free structures. Since the structure of small-world networks greatly enhances matching performance, we focused on the matching process on small-world networks with different rewiring probabilities β, as shown in Fig. 4. It is found that L monotonically decreases with increasing β when α = 0.1, where individuals in class B greatly outnumber those in class A. When a balanced population between class A and class B is achieved, i.e.
when the value of α is near 0.5, there exists an optimal rewiring rate β that induces the highest successful matching rate for small-world networks. To study the effect of population size on the successful matching rate, we conducted simulations of matching processes on random networks with average degree K = 1, K = 2, K = 3, and K = 4 respectively, as shown in Fig. 5. In the case of K = 1 with m = n = 1, the probability of a successful match between two connected nodes belonging to the two classes is about 0.125, which is consistent with the mathematical analysis. For two connected nodes to match successfully, they must simultaneously satisfy two conditions: first, the two nodes belong to different classes A and B; second, c_Ai = s_Bj and s_Ai = c_Bj, where i and j denote the two connected nodes. The probability of satisfying the first condition is 1/2 and the probability of satisfying the second condition is 1/4, so the probability of a successful match between the two connected nodes is 1/8. Limited by the average degree of the networks, the total population in the four cases K = 1, K = 2, K = 3, and K = 4 starts from 2, 3, 4, and 5 respectively. One finds that the successful matching rate L decreases with increasing M + N when the total population is below 10, while L tends to a stable value for M + N > 10. --- Discussion Although our results are obtained from mathematical analysis and computer simulation, some human-subject experiments support our conclusions. For example, in experiments on matching behavior 20, human subjects were connected on virtual complex networks through a computer interface, including preferential-attachment and small-world networks. Different from our model, where all individuals are divided into two classes, participants in those experiments belonged to a single class and matched as single pairs.
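Returning to the K = 1 estimate above, the 1/8 value can also be verified numerically. A short Monte Carlo sketch (the trial count and seed are arbitrary choices of ours, not from the paper):

```python
import random

def match_prob_single_edge(trials=200_000, alpha=0.5, mu=2, seed=1):
    # Two connected nodes match iff (i) they fall into different classes
    # and (ii) c_i = s_j and s_i = c_j. With alpha = 0.5 and mu = 2 the
    # analytic probability is (1/2) * (1/4) = 1/8.
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        class_i = rng.random() < alpha
        class_j = rng.random() < alpha
        c_i, s_i = rng.randrange(mu), rng.randrange(mu)
        c_j, s_j = rng.randrange(mu), rng.randrange(mu)
        if class_i != class_j and c_i == s_j and s_i == c_j:
            hits += 1
    return hits / trials

print(match_prob_single_edge())  # close to 0.125
```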
In particular, subjects in the human experiments were able to propose to match with a neighbor and to accept a proposal from a neighbor, which is similar to the matching process of our model. The experimental results show that the matching performance of small-world networks is better than that of preferential-attachment networks, consistent with our conclusions. In addition, a similar observation is obtained from the experimental data of the coloring games performed by Kearns et al. 18, where preferential-attachment networks led to worse performance than small-world networks. It is worth mentioning that our approach is also suitable for fully connected networks, where the average degree depends on the size of the network. In this case, the matching performance is determined by network size, and larger networks lead to higher successful matching rates, which is consistent with the result of real data 13. Compared to the previous work 13, the current model and analytical solutions can be used to solve the matching problem in complex networks, thus extending the treatment of matching processes to more general situations. In particular, the matching process on small-world networks with different rewiring probabilities was studied in detail, because the structure of small-world networks markedly enhances the matching performance of networks. Summarizing, we have studied the bidirectional selection system on complex networks where nodes are occupied by individuals of two classes. The average matching rate is proposed to evaluate the successful matching performance of networks. It is found that a high average degree and a balanced population between the two classes enhance the matching performance of networks, and our analysis is consistent with the simulation results. We also observed that small-world networks perform better than scale-free networks at a given average degree.
Our approach of reconstructing the bipartite network may also be applied to spreading dynamics of information and diseases in bipartite populations [26-28], where some social partners would be matched successfully but others not. There are also future applications of our research in the co-evolution of matching dynamics and social network structures 29,30. --- Methods The matching model in structured populations. To model the structure of the population, regular networks, small-world networks, random networks and scale-free networks are constructed as follows: 1) Regular networks: Starting from a regular ring lattice with M + N vertices and K edges per vertex, each vertex is connected to its K nearest neighbors by undirected links 21. --- 2) Small-world networks: Starting from a regular network with degree K, we randomly choose a vertex and one of its edges, then rewire the link to a randomly selected node with probability β, until each edge in the original regular network has been considered once 21. --- 3) Random networks: Starting from M + N nodes, we connect any two nodes with probability K/(M + N − 1), where K is the average degree of the network 31. --- 4) Scale-free networks: First, a globally coupled network with K + 1 nodes is built, where K is the average degree of the network 22. Then the network grows by a preferential attachment process in which the probability that a new node is connected to node i is proportional to the degree of node i. The network keeps growing until its size reaches M + N. Simulations. For a given network, in each simulation trial of the model, nodes belong to class A with probability α and to class B with probability 1 − α. The numbers of nodes in class A and class B are M and N respectively, so there are M + N nodes in the network. For the characters of each node, c_Ai, s_Ai, c_Bj, s_Bj are randomly assigned a character from the set V of μ types of characters.
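Constructions 1) and 2) above can be sketched with the standard library alone. The following minimal Python (helper names are ours; K is assumed even) builds the regular ring lattice and then applies Watts-Strogatz-style rewiring:

```python
import random

def ring_lattice(n, k):
    # Construction 1): n vertices on a ring, each linked to its k nearest
    # neighbours (k even); edges stored as frozensets, i.e. undirected.
    edges = set()
    for v in range(n):
        for d in range(1, k // 2 + 1):
            edges.add(frozenset((v, (v + d) % n)))
    return edges

def small_world(n, k, beta, seed=0):
    # Construction 2): each ring-lattice edge is rewired with probability
    # beta to a randomly chosen endpoint; self-loops and duplicate edges
    # are rejected (the original edge is then kept), so the edge count
    # and hence the average degree k are preserved.
    rng = random.Random(seed)
    edges = set(ring_lattice(n, k))
    for e in list(ring_lattice(n, k)):
        if rng.random() < beta:
            u, _v = tuple(e)
            new = frozenset((u, rng.randrange(n)))
            if len(new) == 2 and new not in edges:
                edges.discard(e)
                edges.add(new)
    return edges
```

With beta = 0 this reduces to the regular network, and intermediate beta reproduces the small-world regime whose effect on L is studied in Fig. 4.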
A bipartite network is reconstructed from the given network, in which only the matched nodes and their links are reserved. This yields a new bipartite network with m (m ≤ M) nodes belonging to class A and n (n ≤ N) nodes belonging to class B. In the new bipartite network, a node and one of its neighboring nodes are randomly chosen and determined as a pair, meaning the two nodes are matched successfully. Each node can be chosen at most once. The matching process is repeated until no further pair can be determined. In this way, we can count how many nodes are matched successfully and obtain the successful matching rate for the whole population. --- Author contributions --- Additional information Competing financial interests: The authors declare no competing financial interests.
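A full simulation trial, as described in Methods, can be sketched as follows (a hypothetical minimal implementation of ours; the network is passed as an edge list over integer node labels, and the pairing is a greedy random matching):

```python
import random

def simulate_matching(edges, n_nodes, alpha=0.5, mu=2, seed=0):
    # One trial: assign classes and characters, reconstruct the bipartite
    # network of mutually compatible pairs, then pair nodes at random until
    # no compatible edge remains. Returns the realised matching rate.
    rng = random.Random(seed)
    in_a = [rng.random() < alpha for _ in range(n_nodes)]  # class A membership
    c = [rng.randrange(mu) for _ in range(n_nodes)]        # own character
    s = [rng.randrange(mu) for _ in range(n_nodes)]        # sought character
    # Reconstructed bipartite network: keep only matchable pairs.
    compatible = [(u, v) for u, v in edges
                  if in_a[u] != in_a[v] and c[u] == s[v] and s[u] == c[v]]
    rng.shuffle(compatible)
    matched = set()
    for u, v in compatible:            # each node is paired at most once
        if u not in matched and v not in matched:
            matched.update((u, v))
    return len(matched) / n_nodes
```

Averaging this realised rate over many trials and network realisations gives the simulated L that is compared against equation (7) in Fig. 2.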
Bidirectional selection between two classes widely emerges in various aspects of social life, such as commercial trading and mate choice. Until now, discussions of bidirectional selection in structured human society have been quite limited. We demonstrate theoretically that the successful matching rate is greatly affected by individuals' neighborhoods in social networks, regardless of the type of network. Furthermore, it is found that a high average degree of the network contributes to increasing the rate of successful matches. The matching performance of different types of networks has been quantitatively investigated, revealing that small-world networks reinforce the matching rate more than scale-free networks at a given average degree. In addition, our analysis is consistent with the modeling results, which provides a theoretical understanding of the underlying mechanisms of matching in complex networks.
Introduction Non-communicable diseases (NCDs) are the leading cause of mortality worldwide, with nearly three quarters of NCD-related deaths occurring in low- and middle-income countries (LMICs) [1]. In addition, many LMICs are experiencing a double burden of disease, with high prevalence of both NCDs and infectious diseases that stretch the priorities and funding of limited health systems [2]. These challenges are compounded when considering refugee populations. Humanitarian crises can disrupt previously available health services, further weaken fragile health systems, and divert resources away from chronic disease management [2]. As a result, research has shown an increase in NCD complications in conflict settings [2,3]. As the prevalence of NCDs continues to rise and humanitarian crises persist, countries and organizations responding to humanitarian crises have an obligation to address long-term management of NCDs [4]. In the Middle East, gaps in NCD treatment in refugee settings tend to mirror overall challenges and weaknesses within national health systems [5]. Data on NCD care in refugee settlements in sub-Saharan Africa are much sparser. A national prevalence survey in Uganda estimated that 26.5% of adults had hypertension [6] and 1.4% had diabetes [7], and the majority were unaware of their underlying medical condition [6,7], highlighting the importance of expanding access to NCD screening. Only 80% of Ugandan health facilities offered blood glucose testing and 34% offered diabetes management in 2013 [8]. Rural clinics in Uganda continue to face challenges in training health workers and providing continuous support for diabetes care [9].
Additionally, the Ugandan essential medicines list includes medications for hypertension and diabetes management [10,11], but the availability of essential medicines throughout Uganda is limited, with less access at public hospitals than at private for-profit hospitals [5]. In refugee settlements in Uganda, the burden of disease and the degree to which medical needs are unmet are unclear. Improved understanding could help identify unmet medical needs, inform public health policies and goals, and inform medical resource allocation for this vulnerable population. To assess the NCD burden within Nakivale Refugee Settlement in southwestern Uganda, we leveraged existing infrastructure for patients presenting to health centers and screened them for hypertension and diabetes. --- Methods --- Setting This research was conducted in Nakivale Refugee Settlement in southwestern Uganda. Over 100,000 refugees live in the settlement; the majority of refugees are from the Democratic Republic of the Congo (DRC), Rwanda, Somalia, and Burundi, and a small minority are from other nearby sub-Saharan African countries. This research was conducted at three health centers in the settlement: Nakivale Health Center, Kibengo Health Center, and Juru Health Center. There is a fourth health center in the settlement, but it was not an enrollment site given its remote location and difficult access compared to the other sites. Refugees and Ugandan nationals can access clinical services free of charge at health centers in Nakivale, including free prescription medications for diabetes, hypertension, and HIV when indicated. --- Study population and procedures These NCD data were collected as a part of a larger study on linkage to HIV care in Nakivale Refugee Settlement (PI: O'Laughlin, K23MH108440).
Prior to initiation of the NCD component of this work, our research team met with the local implementing partner leadership team to ensure there was sufficient capacity to accommodate people newly diagnosed with hypertension and diabetes. We worked with these partners to create a referral protocol that specified who to refer (e.g. based on classification of blood pressure or diabetes diagnostic criteria) and the expected timeline for medical follow-up. Multilingual research assistants were trained in blood pressure and glucose measurement techniques prior to study initiation. Adults presenting for HIV testing were recruited from the outpatient department waiting areas at Nakivale, Juru, and Kibengo Health Centers. Inclusion criteria were 1) 18 years of age or older, 2) willingness to participate in routine clinic-based HIV testing, 3) not previously diagnosed with HIV, 4) no prior participation in the study in the preceding 3 months, and 5) able to understand the consent process and study procedures in Kiswahili, Kinyarwanda, Runyankore, or English. After giving written consent, participants verbally completed a questionnaire which was read to them by a research assistant who directly entered information into an electronic database. Data collected included sociodemographic information, medical history, and ongoing diabetes and hypertension treatment. Research assistants then measured participants' height, weight, blood glucose using the FreeStyle Optium Neo Blood Glucose and Ketone Monitoring System, and blood pressure prior to conducting HIV testing. For blood pressure measurements, participants were seated for 1 min before the measurement was taken using a stethoscope for auscultation and a Veridian Healthcare Pro Kit sphygmomanometer. For those with an elevated blood pressure, the measurement was repeated two additional times 5 min apart for each additional measurement according to Ministry of Health guidelines [12]. 
Research assistants then conducted point-of-care blood glucose and HIV testing. All participants included in these analyses were enrolled from January 16, 2019 through January 13, 2020. --- Definition of endpoints Endpoints were established using Uganda Ministry of Health guidelines [12]. We defined diabetes as a random blood glucose (RBG) ≥ 11.1 mmol/L with self-reported frequent urination or thirst, or a fasting blood glucose (FBG) ≥ 7.0 mmol/L regardless of symptoms [12]. We used the lowest systolic and diastolic blood pressures to ascertain hypertension and defined it as both a binary and a categorical outcome (including pre-hypertension), according to local guidelines (Table 1) [12]. In contrast to World Health Organization guidelines, all measurements were taken on the same day. We used standard definitions of body mass index (BMI): BMI < 18.5 kg/m² for underweight, 18.5 ≤ BMI < 25.0 kg/m² for normal weight, and BMI ≥ 25.0 kg/m² for overweight/obese [13]. --- Statistical analyses We tested for differences in descriptive statistics and study outcomes by refugee status using chi-square or Fisher's exact tests for categorical variables and Student's t-test for continuous variables. We used the Agresti-Coull method to calculate 95% confidence intervals (CI) for the prevalence of diabetes and hypertension [14]. We also estimated the period prevalence of diabetes by including participants who met the criteria for diabetes or reported a prior diabetes diagnosis in the numerator. We estimated the number needed to screen as 1/prevalence. We used log-binomial regression models, or Poisson regression with robust standard errors if the model failed to converge [15,16], to estimate the associations of immigration status and country of origin, respectively, with hypertension and diabetes while controlling for age, sex, education level, and BMI. We performed statistical analyses using SAS version 9.4 (Cary, NC).
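For reference, the Agresti-Coull interval and the number-needed-to-screen calculation used here are straightforward to compute. A minimal Python sketch of ours (not the study's SAS code), reproducing the diabetes prevalence interval reported in the Results:

```python
from math import sqrt

def agresti_coull_ci(successes, n, z=1.96):
    # Agresti-Coull interval: add z^2/2 pseudo-successes and z^2/2
    # pseudo-failures, then apply the usual Wald formula to the
    # adjusted proportion.
    n_adj = n + z ** 2
    p_adj = (successes + z ** 2 / 2) / n_adj
    half = z * sqrt(p_adj * (1 - p_adj) / n_adj)
    return p_adj - half, p_adj + half

def number_needed_to_screen(prevalence):
    # NNS = 1 / prevalence.
    return 1.0 / prevalence

# 32 of 2127 participants met diabetes criteria:
lo, hi = agresti_coull_ci(32, 2127)
print(f"{lo:.1%}-{hi:.1%}")  # 1.1%-2.1%
```

With the 27 newly identified diabetes cases, number_needed_to_screen(27/2127) gives about 78.8, matching the reported 78.7 up to rounding.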
--- Results --- Study population Of the 2137 participants enrolled since NCD testing was introduced, 2127 (99.5%) received blood glucose testing and blood pressure measurement. Among these, 1379 (65%) were refugees or asylum seekers and 748 (35%) were Ugandan nationals. Ugandan nationals were more likely to be female (60% vs 54%, p = 0.005) and older (32.8 ± 11.2 vs 31.1 ± 11.0 years, p < 0.001) compared to refugees and asylum seekers (Table 2). After Uganda, the most commonly reported countries of origin were the DRC (n = 481, 23%), Kenya (n = 462, 22%), and Burundi (n = 366, 17%). Somalia (n = 3, 0.1%), Sudan (n = 2, 0.1%), South Sudan (n = 1, 0.1%), and other countries (n = 22, 1%) were also reported. --- Hypertension Overall, 1067 (50%, 95% CI 48.0-52.2%) participants met criteria for pre-hypertension at the time of their clinic visit and 187 (9%, 95% CI 7.7-10.1%) met criteria for hypertension. The number needed to screen to identify one new instance of hypertension was 15.3 people and did not vary substantially by refugee status or country of origin. Among those with hypertension, 129 were stage 1, 48 were stage 2, and 9 were stage 2 severe. At the time of screening, 112 (5%) participants reported a prior diagnosis of hypertension. Among these 112 participants with a previous hypertension diagnosis, 48 (43%) were hypertensive at the time of screening, reflecting uncontrolled hypertension, and this did not differ by refugee status (Ugandan nationals 35/84, 42%; refugees and asylum seekers 13/28, 46%). Sustained anti-hypertensive treatment was uncommon, with 31 (28%) reporting use of prescribed antihypertensive drugs in the past 2 weeks and 15 (13%) using traditional remedies. Use of a prescribed antihypertensive drug among those with a prior diagnosis did not vary by the presence of hypertension at the time of screening (p = 0.761). --- Diabetes Overall, 32 participants met the criteria for diabetes (1.5%, 95% CI 1.1-2.1%).
The number needed to screen to identify one new case of diabetes was 78.7 persons. The large majority (n = 27, 87%) did not report a prior diabetes diagnosis, and this did not vary substantially by refugee status (Ugandan nationals 12/15, 80%; refugees and asylum seekers 15/17, 88%). There were no significant differences in previously reported or new diagnoses of diabetes by refugee status or country of origin (Table 3). Overweight and obesity were more common among participants with diabetes (n = 10, 32%) than among those without (n = 409, 20%), though this difference was not statistically significant (p = 0.08). The period prevalence was 2.3% (n = 48, 95% CI 1.7-3.0%). Among the 21 (1%) participants who reported a prior diagnosis of diabetes, 15 (71%) reported taking prescribed diabetes drugs within the prior 2 weeks, 10 (48%) reported ever visiting a traditional healer for diabetes, and 9 (43%) reported current use of herbal or traditional remedies for diabetes. Five individuals reported a prior diabetes diagnosis and met the criteria for diabetes at the time of screening, all of whom reported recent medication use. --- Multi-morbidity A total of 116 participants tested positive for HIV infection. Few participants had multi-morbidity (Fig. 1). In multivariable models, diabetes and hypertension were associated with age but not refugee status (Table 4) or country of origin (data not shown). --- Discussion Among 2127 adults presenting for routine HIV testing at health clinics in Nakivale Refugee Settlement, the prevalence of pre-hypertension and hypertension was high, while the prevalence of diabetes was low. The burden of hypertension and diabetes was similar across refugee status and country of origin. The majority of participants did not suffer from multi-morbidity. Studies of NCD prevalence and interventions among refugee populations have overwhelmingly focused geographically on the Middle East [2,4,8,10,17].
There, health services are provided to those in refugee settlements through the existing urban health infrastructure of the host country. Our data show that the opposite is happening in Nakivale Refugee Settlement: 35% of participants were Ugandan nationals integrated into the refugee health system. Inadequate treatment for those with known conditions has also been observed previously [3,17]. Among Syrian refugees in Jordan, hypertension and diabetes were the most prevalent NCDs and were observed at similar frequencies as in the general Jordanian population [17,18]. Similarly, we found no significant difference in the burden of disease between refugees and Ugandan nationals. The period prevalence of 2.3% is higher than a 2014 national prevalence estimate of 1.4%, possibly due to higher diabetes prevalence in the countries of origin [7]. Interestingly, the prevalence of hypertension at these refugee clinics (8.8%) is considerably lower than the national prevalence estimate of 26.5% [6]. Blood pressure screening for pre-hypertension or hypertension in this population could be feasible and would have a high yield. A blood pressure test is noninvasive, low cost, and requires minimal training and infrastructure to administer. There is a strong association between hypertension and cardiac disease, as well as mortality [19]. Although the diagnosis of hypertension technically requires two or three high blood pressure measures at least one week apart, the ability to screen patients in a single visit and direct them to further care could have a large impact in diagnosing hypertension earlier, enabling intervention and decreasing disease complications. Immigration status and country of origin were not significant predictors of hypertension, indicating that screening could be broadly implemented at all health centers in the refugee settlement.
Outreach to better understand and address barriers to care faced by vulnerable and underserved populations, such as Somali nationals who are underrepresented in these data, could increase the impact of the program. Increasing screening will inevitably place a greater burden on the health system to provide medications and clinic visits to more individuals. This will put additional stress on an already under-performing system [8,9]. However, leveraging pre-existing HIV infrastructure, as was done in this study, has already been shown to be feasible and cost-effective when tailored to the appropriate population [20,21]. Task shifting by using community health workers or peers can also be a cost-effective way to provide hypertension and diabetes care [22][23][24]. Community health workers can provide a variety of services, including questionnaire-based screenings and referrals for testing, education around non-pharmaceutical disease management, and developing client self-efficacy for medication adherence and home-based disease management (e.g., glucose or blood pressure monitoring) [22][23][24][25]. Such a program would need to be well-managed, provide ongoing training and salary support for the community health workers, and be developed in the context of broader health systems strengthening programs [25,26]. Additionally, Médecins Sans Frontières demonstrated the feasibility and effectiveness of providing hypertension and diabetes treatment for refugees and vulnerable host communities within a camp by conducting a clinical consult, lab draws, and drug delivery (including a three-month supply for stable patients) at the same visit [3]. Diabetes screening in this setting will need to be carefully considered given the large number needed to screen and the considerable resources required. Diabetes testing is complex, requiring a multi-step testing process and advanced laboratory capabilities.
Additionally, established screening methods may not be as effective in this population. For example, thirst is a well-established symptom of diabetes, but may not perform well in this highly resource-constrained setting. Additional research is needed to identify sub-populations most at risk of diabetes and possibly develop modified screening guidelines in order to target diabetes testing for those most likely to benefit from treatment. Although not statistically significant in our data, there was a higher prevalence of diabetes among overweight or obese individuals, which could be an appropriate sub-population to screen. Approximately three quarters of patients previously diagnosed with diabetes had access to prescription diabetes drugs and did not have elevated blood glucose at the time of their clinic visit, suggesting successful pharmacologic disease management for the majority of cases. Notably, all five previously diagnosed participants with continued elevated blood glucose at the time of their clinic visit also reported recent medication use, indicating a role for more intensive diabetes management in select cases. It may be that refugees with complex medical needs should be considered for more urgent resettlement so they can better care for their health needs. Our study has several strengths. We are among the first to estimate the burden of hypertension and diabetes in a vulnerable refugee population in a settlement. We have a large sample size. We also have good ascertainment of hypertension, using three consecutive measurements. Our study also has several limitations. We had a small number of diabetes cases, despite our large sample size, which likely limited our ability to detect an association between BMI and diabetes in multivariable analysis. 
Additionally, diabetes is difficult to test for, and we could not confirm hyperglycemia with a repeat glucose test or follow-up HbA1c testing, as recommended [12], so the true prevalence is likely lower than estimated. Conversely, relying on blood pressure measurements all taken on the same day, instead of the recommended 2 days, may overestimate hypertension prevalence [27]. Utilizing local criteria rather than the more stringent WHO criteria makes generalizing these findings to other contexts difficult. Furthermore, adults presenting for HIV testing at an outpatient clinic may not be a representative sample of the local population. Those presenting for care may be either sicker or more health-conscious, and therefore more or less likely than non-attendees to screen positive for hypertension or diabetes. To the extent that HIV infection and antiretroviral therapies increase the risk of diabetes and that HIV is prevalent in the community, our estimate of diabetes, which excludes people with a prior HIV diagnosis, may underestimate the true overall prevalence [27][28][29][30][31]. Lastly, given stigma around HIV testing and the fact that this study was nested in an HIV study, some of the population was not screened and hence underrepresented. For instance, only three Somalis were involved in our study, while the population of Somalis in Nakivale was 13,397, demonstrating the lack of true representation for this specific group of refugees. Individual or focus group discussions with community leaders and members could help identify barriers to care in this setting as a first step towards designing targeted interventions. --- Conclusions At health centers in Nakivale Refugee Settlement in Uganda attended by refugees and Ugandan nationals, elevated blood pressure was common and frequently unknown or uncontrolled.
Testing could be incorporated into the clinic visit flow and, if sustained monitoring and treatment are provided, could improve long-term health outcomes. Diabetes prevalence was low. Given the challenges associated with diabetes screening and the high frequency of severe outcomes associated with this disease, focused screening of higher-risk individuals should be considered in this setting. --- Availability of data and materials The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request. --- Competing interests The authors declare that they have no competing interests. --- Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Background: Diabetes and hypertension are increasingly prevalent in low- and middle-income countries, but they are not well documented in refugee settlements in these settings. We sought to estimate the prevalence and associated characteristics of diabetes and hypertension among adults presenting for clinic-based HIV testing in Nakivale Refugee Settlement in Uganda. Methods: HIV-negative adults presenting to outpatient clinics for HIV testing at three health centers in Nakivale Refugee Settlement were enrolled from January 2019 through January 2020. Multilingual research assistants administered questionnaires aloud to ascertain medical history and sociodemographic information. The research assistants used standardized procedures to measure participants' blood pressure to detect hypertension (systolic blood pressure ≥ 140 mmHg or diastolic blood pressure ≥ 90 mmHg) and conduct a point-of-care blood glucose test for diabetes (random blood glucose ≥11.1 mmol/L with self-reported frequent urination or thirst, or fasting blood glucose ≥7.0 mmol/L regardless of symptoms), as per Uganda Ministry of Health guidelines. We used chi-square or Fisher's exact tests to test for differences in disease prevalence by refugee status, and log-binomial or Poisson regression models to estimate associations of immigration status and country of origin, respectively, with hypertension and diabetes while controlling for age, sex, education level, and body mass index. Results: Among 2127 participants, 1379 (65%) were refugees or asylum seekers and 748 (35%) were Ugandan nationals. Overall, 32 participants met criteria for diabetes (1.5%, 95% CI 1.1-2.1%) and the period prevalence was 2.3% (95% CI 1.7-3.0%). There were 1067 (50%, 95% CI 48.0-52.2%) who met the criteria for pre-hypertension and 189 (9%, 95% CI 7.7-10.1%) for hypertension. These proportions did not vary by immigration status or country of origin in univariate tests or multivariable regression models.
Introduction The COVID-19 pandemic has proven how social media can quickly create a parallel infodemic, impacting the health and wellbeing of global citizens and posing a challenge for the delivery of public health services, worldwide [1]. The World Health Organization (WHO) defines the term 'infodemic' as 'an overabundance of information', some of which is accurate and some not, which makes it difficult for citizens to identify trustworthy sources of information and reliable guidance [2]. At the start of the pandemic, citizens' consumption of news increased by 62% [3], with many being exposed to mass amounts of misinformation and fake news as they searched for information relating to COVID-19 [4,5]. An infodemic typically includes the dissemination of unclear and unreliable messages, rumors, and fake news, which affects the penetration of public health communication and causes mass anxiety and social panic, ultimately impeding effective crisis management [6][7][8]. In China, in the early stage of the pandemic, false information spread rapidly on social media [9], causing serious difficulties in managing the disease [10]. Combating COVID-19 requires the combined efforts of multiple stakeholders who disseminate accurate and authoritative information through different media channels in a timely manner [11,12]. For example, governments and public health agencies should provide up-to-date reliable information on COVID-19 and emotional support to citizens in order to reduce public anxiety and uncertainty [13]. There is a societal need for accurate information to be corroborated quickly to prevent the spread of misinformation resulting in mass panic [14]. 
Official social media accounts, such as those managed by governments, serve as an ideal medium for facilitating communication between official sources and citizens during public health crises [15], but their strict control in validating and disseminating information may inhibit fast and effective dissemination and lead to public distrust towards such organizations [16]. Many scholars have noted the widespread adoption of the term 'infodemic' by the research community [17]. However, the term still requires clarification, especially in terms of how it is measured, which is still not fully understood. To address this shortcoming, this study provides a measurement for current and future infodemics. Previous infodemic studies have focused predominantly on how to control them, such as minimizing the spread of fake news, misinformation, and rumors, as well as controlling their impact on citizens' psychological health [18,19]. However, a critical question remains: How do official social media accounts affect the infodemic? The aim of this study, therefore, is twofold. First, we aim to create a conceptual framework and provide practical implications for reducing the severity of an infodemic and, second, we aim to explore the possible relationship between official social media accounts and the infodemic, in the context of public health crises. By doing so, this study contributes to both the processing of health information on official social media accounts and the understanding of how to respond to an infodemic. --- Literature Review and Hypotheses --- Theoretical Basis The Social-Mediated Crisis Communication (SMCC) model is widely used to study how and why citizens communicate about crises, especially in terms of how different sources and forms of initial crisis information are exposed and affect follow-up crisis communication.
Similarly, it describes the relationships between organizations, citizens, social media, and traditional media, during and after crises have occurred [20]. Some researchers have used the SMCC model to explore how citizens cope with risk information disseminated by governments, such as their information processing behaviors, changes in emotions, and protective behaviors [21]. The SMCC model focuses on the format and sources of information, and social media effectiveness to improve social resilience. Further, it suggests that citizens use social media to meet their untapped social needs, such as to vent, socialize with friends and family, seek information, and to obtain emotional support [22]. The emotional support provided to citizens through different media sources can directly affect their feelings and their responses [23], and one important factor of socialization is communication through social media [24]. The SMCC model categorizes information sources into either official (i.e., from public organizations, such as governments, which share crisis information with citizens) or a third party (i.e., members of the public or groups of citizens that share unverified crisis information with other citizens) [25]. Information posted by official sources that has been verified is key to establishing credibility and trust among citizens [26], but the COVID-19 infodemic resulted in the frequent sharing of misleading information and false claims, such as the sharing of pseudo-scientific therapies, and discussions about the origin and spread of the disease [27]; these activities can undermine public trust in governments. A previous study in India found that focusing on Twitter sentiment was an important crisis management strategy [28]. Therefore, to reduce the harm caused by public health crises, government agencies and public health organizations can use social media to help deal with the dissemination of crisis information [29]. 
Extant research shows that government social media accounts are an important information source for promoting citizen engagement during COVID-19 [30]. We posit, therefore, that official social media accounts are a key facilitator of successful communication with citizens, acting as an important information source and provider of emotional support. During the first wave of the COVID-19 pandemic in China, citizens knew very little about the disease, causing mass panic and anxiety. If an information source is unofficial and its information quality is low, it can affect citizens' emotions and fuel an infodemic; official social media, social support, and the infodemic therefore appear as key variables in our theoretical model. However, the SMCC model does not address information cascades. When citizens know little about a crisis, information cascades can easily occur. Because citizens knew very little about COVID-19 during the initial stage of the pandemic, we introduced information cascades as an additional variable into our theoretical model, which is otherwise based on the SMCC model. --- Social Media in Public Health Crises (Official and Private) and the Infodemic During the COVID-19 pandemic, governments imposed frequent lockdowns with the aim of controlling the spread of the COVID-19 disease. During these times, citizens used social media more frequently than usual [31,32], becoming compulsive and often demonstrating addictive behavior [33]. The information posted on social media acted as a double-edged sword. On the one hand, verified information relieved citizens' panic and anxiety and motivated them in the fight against COVID-19 [34]. On the other hand, as the amount of information available grew, citizens became unsure about whether the information they were viewing was, in fact, true [35].
Throughout the lockdowns, social media and the Internet acted as a main source of information for citizens [36,37], with social media use rapidly increasing during the crisis [38]. Communication during the pandemic was characterized by knowledge communities, organized into hierarchies of subgroups with clear geopolitical and ideological characteristics [39]. Citizens used social media to obtain health-related information, such as to learn about necessary control measures, disseminate the latest information about the pandemic, and listen to critical announcements [40]. However, content posted to social media was not always censored in the way state-controlled media were [41]; this ultimately fueled the spread of anxiety and fear among citizens [42]. As the pandemic evolved, information related to COVID-19 received far greater attention than non-COVID-19 information on commercial social media platforms [43]. The infodemic began to show repeated fluctuations [44,45], resulting in vast amounts of misinformation and fake news comprising different types of reconfigured and fabricated content and dubious ideas [46]. The emergence of new mobile platforms heightened the infodemic during the pandemic [47]. Media coverage also affected citizens' psychological state [48], while information exposure affected citizens' trust in governments, especially in relation to their experiences of lockdown measures [49]. During the initial stage of the pandemic, official social media accounts played an important role in disseminating authoritative information about COVID-19, which resulted in a reduction in uncertainty. Based on this, we propose the following hypothesis: Hypothesis 1 (H1). The information quality of official social media accounts has a significant negative effect on the infodemic. --- Information Cascades and the Infodemic As rumors and false information started to spread on social media, citizens' imitation behaviors began to influence information diffusion.
Similarly, it triggered uncertainty and fluctuation [50], resulting in information cascades. Individuals with limited official information become reliant on the collective opinions of others as a reference for making their own decisions [19]. Information dissemination therefore quickly becomes a dynamic process in which one group imposes its ideas on another group and maintains them, stereotyping the negative characteristics of the group and thus covering up its other characteristics. When negative messages, conveyed by earlier rejection, begin to cascade downward, a person may become stigmatized [51]. Disturbances experienced during the initial stage of anxiety-related information processing may lead to subsequent cascades of processing biases [52]. Peer rejection and information processing problems may also interact, which can lead to intentions to spread rumors [53]. Among investors with a public profile, information cascades increase the offer's appeal to early-stage investors who, in turn, attract later-stage investors [54]. Internet users usually imitate other users' behaviors online, regardless of their own information [55], which is what happened during the initial stage of the pandemic. Based on this, we propose the following hypotheses: Hypothesis 2 (H2). Information cascades have a significant positive effect on the infodemic. Hypothesis 3 (H3). Information cascades have a mediation effect on the relationship between IQ and the infodemic. --- Social Support and the Infodemic The level of social support experienced by citizens affects their mental health far more than the actual structure of their personal networks [56]. Social support refers to the feeling of being valued and cared for by a network [57], and is described as the support an individual receives through social connections with other individuals, groups, and the larger community [58], which, in turn, reduces anxiety and panic [59].
Adolescents suffering from severe mental health problems often experience low to medium levels of social support [60]. During the COVID-19 pandemic, citizens who self-isolated experienced significantly higher rates of loneliness and depression than those who did not, with some studies finding that low social support is significantly associated with poorer sleep quality and an increased risk of depression [61]. During the outbreak of COVID-19, citizens experienced severe lockdown measures, which limited their social contact with others. As a result, rates of loneliness, stress, worry, and anxiety grew rapidly [62], which necessitated increased social support. This resulted in some citizens sharing fake news online for different reasons; for example, to seek social support to reduce anxiety [63]. Based on this, we propose the following hypotheses: Hypothesis 4 (H4). Social support has a significant negative effect on the infodemic. Hypothesis 5 (H5). Social support has a mediation effect on the relationship between IQ and the infodemic. --- Mediation and Moderation Variables and the Infodemic Social media use, low e-health literacy, and rapid publishing processes are cited as major contributors to the COVID-19 infodemic [1]. Citizens who frequently use social media can experience information overload, which has a significant effect on their mental health [18,64]. Excessive use of social media to seek COVID-19 information may also lead to depression and anxiety [59]. Some people experience difficulties in finding and evaluating information [36], and these difficulties become more serious during public health crises. Ultimately, the COVID-19 infodemic highlighted the poor health literacy of global citizens; health literacy is defined as people's cognitive capacity to access, understand, and use health information. During the pandemic, health literacy was perceived as important for preventing COVID-19, with governments investing heavily in education and improved communication [65].
The perceived threat of COVID-19, lower levels of digital health literacy, and rejection of official government social media led to higher levels of COVID-19 misinformation [66]. In this regard, it is understood that health literacy and private social media use may indirectly affect the infodemic. Based on this, we propose the following hypotheses: Hypothesis 6 (H6). Private social media use moderates the relationship between IQ and the infodemic. Hypothesis 7 (H7). Health literacy moderates the relationship between IQ and the infodemic. Based on these hypotheses, this study aims to examine the effect of official social media accounts on the infodemic during the first wave of COVID-19 in China. Figure 1 presents our theoretical model. Specifically, we aim to understand how information cascades and social support mediate the relationship between official social media accounts and the infodemic. Moreover, how do health literacy and private social media use moderate the relationship between official social media accounts and the infodemic? --- Methods --- Questionnaire and Samples A questionnaire was written in Chinese and composed of three sections. The first section explained that the survey was to be completed anonymously and that the data collected would be used for scientific research purposes only. The second section collected participants' perceptions of the variables adapted from prior references, including information quality, social support, information cascades, and the infodemic. The measurement items are shown in the Measures section, and the modified items of this section were reviewed by a panel of experts, including a professor who studies government social media, a public health expert, and a data analyst. The third section collected sociodemographic information about participants, their social media use frequency, and their health literacy level, such as gender, age, education level, and household income, as shown in Table 1. As most citizens were isolated at home during the COVID-19 pandemic, survey invitations were sent electronically, with responses solicited online. The survey was carried out from March to April 2020, with 4152 citizens over the age of 18 randomly invited, including those with different levels of education and income, as shown in Table 1. Responses were collected anonymously using WeChat and Tencent QQ, both leading Chinese social media platforms. A random sampling strategy, focused on recruiting residents in the COVID-19 outbreak regions of Mainland China, was used. First, a pilot study was conducted to test the reliability and validity of the constructs; the Cronbach's α and KMO values showed good reliability and validity in the preliminary study (0.883 and 0.847, respectively). We then sent the questionnaire to all invited citizens; 1515 citizens completed it, of which 117 responses were considered invalid. In total, 1398 valid responses were received, covering all provinces that experienced the first wave of the COVID-19 outbreak.
--- Measures This study investigated the effects of the information quality of posts published by official social media accounts, information cascades, and social support on the infodemic during the first wave of the COVID-19 pandemic in China. All scale items were measured using a 5-point Likert-type scale, where 1 = strongly disagree and 5 = strongly agree. --- Information Quality (IQ) of Official Social Media Content Information quality is the degree to which information satisfies users based on their perception [67]. In the context of COVID-19, the information quality of COVID-19-related content was assessed based on characteristics such as its usability and reliability [68]. In this study, we propose an Information Quality Evaluation Index for Official Social Media (IQEI-OSM) based on users' subjective cognition, covering the level of information expression, information content, and information utility. The information quality of COVID-19-related content on social media should be comprehensive (i.e., not omitting important information) and authoritative [69], up-to-date (timeliness) [70], and easy to access and read (accessibility) [65]. IQ was measured using a 5-point Likert scale on which participants rated their perceptions of the information quality of COVID-19 content posted by official social media accounts. Five statements were used to measure participants' agreement with the following characteristics: (1) authoritativeness; (2) timeliness; (3) comprehensiveness; (4) accessibility; and (5) usefulness [71][72][73][74]. The IQEI-OSM had high internal consistency, as shown in Table 2; higher scores indicate higher quality of information posted by official social media.
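The internal-consistency figures reported for these indices (Cronbach's α) can be reproduced from raw item scores. The sketch below is a generic illustration in plain Python, not the authors' analysis code, and the Likert responses in it are made up for demonstration.

```python
from statistics import variance

def cronbach_alpha(rows):
    """Cronbach's alpha for a scale: rows is a list of respondents,
    each a list of k item scores on the same Likert scale.
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)."""
    k = len(rows[0])
    item_vars = sum(variance(col) for col in zip(*rows))  # per-item variances
    total_var = variance([sum(r) for r in rows])          # variance of sum scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical responses to a 2-item scale from four respondents
responses = [[1, 2], [2, 1], [3, 3], [4, 4]]
alpha = cronbach_alpha(responses)  # ~0.889 for this toy data
```

In practice, values around 0.7 or above are usually read as acceptable internal consistency, which is consistent with the pilot-study α of 0.883 reported above.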
--- Social Support Social support is defined as the support accessible to individuals through their social ties with other individuals, groups, and the wider community [58], and it affects their preventive health behavior and mental health [57]. The Multidimensional Scale of Perceived Social Support (MSPSS), proposed by Zimet et al. [61], is a 12-item measure of the perceived adequacy of social support from three sources: friends, family, and a significant other. Based on prior research, this study measures social support as a multidimensional concept, comprising the informational, emotional, and peer support that citizens receive when obtaining COVID-19-related health information from official social media accounts. Seven items were used to measure social support [75,76]: (1) I would rather visit official social media accounts for COVID-19-related information than ask someone in person (prefer official); (2) on official social media accounts, I have obtained information about preventing COVID-19 that I never found anywhere else (study knowledge); (3) I used official social media accounts to deal with stress caused by the COVID-19 pandemic (manage press); (4) while visiting official social media accounts, I felt I had fewer concerns (reduce worry); (5) the health information posted on official social media accounts alleviates my feeling of loneliness (alleviate loneliness); (6) I used official social media accounts to understand others' experiences during the initial stage of COVID-19 (read experience); and (7) I shared the practical advice and suggestions about preventing COVID-19 found on official social media accounts with my friends and family (share advice). The social support index had high internal consistency, as shown in Table 2; higher scores indicate more social support. 
--- Information Cascades Information cascades occur when individuals observe and act on the behavior of others, disregarding their own information; they imitate the behavior of preceding individuals in the belief that doing so is optimal. In this scenario, cascades might cause individuals to make wrong decisions [77]. Because attention is zero-sum, the volume of information found on private social media accounts draws users' attention away from official social media accounts. Some information becomes prevalent while the rest is ignored, the typical 'long-tail' phenomenon on social media [78]. Information cascades are measured in the form of relational cascades and structural cascades [50,79]. Four items were used to measure information cascades: (1) I relied on the opinions of others to process information related to COVID-19 (relation cascades1); (2) I relied on the opinions of others to make preventative decisions about COVID-19 (relation cascades2); (3) I relied on social norms to process information about COVID-19 (structural cascades1); and (4) I relied on social norms to make preventative decisions about COVID-19 (structural cascades2). The information cascades index had high internal consistency, as shown in Table 2; higher scores indicate higher levels of information cascades. --- The COVID-19 Infodemic The vicious circle of psychological problems and the spread of rumors were the main features of the COVID-19 infodemic [1]. The measurement of the infodemic was mainly derived from previous studies. 
Five items were used to measure the COVID-19 infodemic [34,80], namely: (1) during the COVID-19 pandemic, the information I received exceeded my capacity to cope with it (exceeded); (2) during the COVID-19 pandemic, I felt panicky when I saw the amount of information about COVID-19 from different sources (panicky); (3) during the COVID-19 pandemic, I constantly sought information about COVID-19 (excessive seek); (4) because of my excessive information seeking on different media channels, I often forgot to respond to other important messages (forgotten); and (5) during the COVID-19 pandemic, I found it difficult to obtain reliable information when I needed help (difficult). The infodemic index had high internal consistency, as shown in Table 2; higher scores indicate higher levels of the infodemic. --- Partial Least Squares Structural Equation Modeling (PLS-SEM) PLS-SEM is used to estimate complex models with many constructs, indicator variables, and structural paths, without making distributional assumptions about the data, which makes it useful for exploratory research examining a developing or less developed theory [81]. It can also handle multicollinearity problems. We used PLS-SEM to examine the effects of the information quality of official social media, information cascades, and social support on the infodemic using Smart-PLS 3.3.7 software (www.smartpls.com, accessed on 1 April 2022), for two reasons. First, based on the SMCC theory, we added two variables (i.e., information cascades and social support) to explain the underlying mechanisms of the relationship between official social media accounts and the infodemic, which can be seen as a less developed theory. Second, PLS-SEM can report the R² values of each endogenous latent variable. --- Results --- The Measurement Model The internal consistency tests were conducted using Smart-PLS 3.3.7 statistical software. 
Composite Reliability (CR) values were calculated to test the reliability and internal consistency of the scale. Cronbach's α is another measure of internal consistency reliability, although a less precise one than CR; rho_A lies between Cronbach's α and CR and may represent a good compromise [82]. Average Variance Extracted (AVE) is used to assess the convergent validity of each construct's measure. The Cronbach's α of each subscale was >0.6, indicating acceptable reliability of the survey data. The outer loadings ranged from 0.637 to 0.812, as shown in Figure 2, exceeding the minimum value of 0.60. The Cronbach's α values ranged from 0.732 to 0.880, showing a satisfactory level of internal consistency. The CR values ranged from 0.833 to 0.907, within the 'satisfactory to good' range, indicating that the instrument had good internal consistency. The AVE values were higher than 0.50, indicating that the four constructs explain more than 50% of the variance of their own items [83]. Further, the Variance Inflation Factor (VIF) values ranged from 1.280 to 2.068, as shown in Table 2, all less than 5, showing no significant multicollinearity risk. Discriminant validity was assessed in Smart-PLS using the Fornell-Larcker criterion [84]. Table 3 shows that the square root values of the AVE along the diagonal were higher than the correlation values between the latent constructs in each column, which meets the criterion and indicates that the measurement model had an acceptable level of discriminant validity. The results show that the PLS-SEM model had an adequate fit: d_ULS (0.592) and d_G (0.223) were less than 0.95, SRMR (0.051) was less than 0.08, and the NFI index was 0.893. The R² values of the Infodemic, Information Cascades, and Social Support were 0.753, 0.611, and 0.741, respectively, indicating moderate to substantial model fit [85]. 
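With standardized outer loadings in hand, CR and AVE reduce to simple formulas: CR = (Σλ)² / [(Σλ)² + Σ(1 − λ²)] and AVE = Σλ² / n. A minimal sketch, using hypothetical loadings rather than the study's actual values:

```python
# Composite Reliability (CR) and Average Variance Extracted (AVE)
# from standardized outer loadings (illustrative values only).
def composite_reliability(loadings):
    s = sum(loadings)                          # sum of loadings
    err = sum(1 - l ** 2 for l in loadings)    # error variances (1 - lambda^2)
    return s ** 2 / (s ** 2 + err)

def ave(loadings):
    # mean squared loading: share of item variance explained by the construct
    return sum(l ** 2 for l in loadings) / len(loadings)

iq_loadings = [0.70, 0.75, 0.72, 0.68, 0.74]   # hypothetical five-item construct
print(round(composite_reliability(iq_loadings), 3))  # 0.842
print(round(ave(iq_loadings), 3))                    # 0.516 (> 0.50 threshold)
```

An AVE above 0.50, as in this illustration, is exactly the convergent-validity criterion the paragraph above applies to all four constructs.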
--- The Structural Model Our results show that the information quality of content posted by official social media accounts (β = -0.294, p < 0.001) had a significant negative effect on the infodemic, while information cascades (β = 0.242, p < 0.001) had a significant positive effect on the infodemic. Social support (β = -0.387, p < 0.001) had a significant negative effect on the infodemic. Thus, hypotheses H1, H2, and H4 are supported. Approximately 75.3% of the variance in the COVID-19 infodemic (see Figure 2) was explained by the significant influence of information quality, information cascades, and social support. The path coefficients, t-statistics, and p-values of the hypotheses are shown in Figure 2 and Table 4, respectively. All path coefficients are standardized, enabling us to compare their absolute values. The absolute value for social support (0.387) is higher than those for information quality (0.294) and information cascades (0.242), indicating that social support has the greatest negative direct effect on the infodemic: the higher the level of social support received, the lower the level of the infodemic. The absolute value for information quality is the second largest, indicating the next-greatest negative direct effect on the infodemic: the higher the level of information quality, the lower the level of the infodemic. Information cascades, by contrast, have a positive effect on the infodemic: the higher the level of information cascades, the higher the level of the infodemic. --- Mediation Analysis The model contains two mediation paths, namely: Information quality → Social support → Infodemic, and Information quality → Information cascades → Infodemic. To examine the significance of the indirect effects, the bootstrapping method was used, which is valid regardless of whether the data are normally distributed [86]. 
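The percentile-bootstrap logic behind such indirect-effect confidence intervals can be sketched in a few lines. This is a minimal illustration on synthetic data using plain OLS slopes for the a (X → M) and b (M → Y) paths, not the Smart-PLS implementation; all data and coefficients are hypothetical.

```python
import random
import statistics

def ols_slope(x, y):
    """Bivariate OLS slope of y on x."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return num / sum((xi - mx) ** 2 for xi in x)

def bootstrap_indirect(x, m, y, reps=5000, seed=1):
    """95% percentile CI for the indirect effect a*b in X -> M -> Y."""
    rng = random.Random(seed)
    n = len(x)
    effects = []
    for _ in range(reps):
        idx = [rng.randrange(n) for _ in range(n)]  # resample with replacement
        a = ols_slope([x[i] for i in idx], [m[i] for i in idx])  # X -> M
        b = ols_slope([m[i] for i in idx], [y[i] for i in idx])  # M -> Y (not partialling out X, for brevity)
        effects.append(a * b)
    effects.sort()
    return effects[int(0.025 * reps)], effects[int(0.975 * reps)]

# Synthetic data with a true negative indirect effect (0.6 * -0.5 = -0.3):
rng = random.Random(0)
x = [rng.gauss(0, 1) for _ in range(200)]
m = [0.6 * xi + rng.gauss(0, 1) for xi in x]
y = [-0.5 * mi + rng.gauss(0, 1) for mi in m]
lo, hi = bootstrap_indirect(x, m, y, reps=1000)
# A CI that excludes zero indicates a significant mediation effect.
```

A full mediation analysis would estimate the b path while controlling for X; the simplification here only keeps the bootstrap-percentile mechanics visible.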
Since the results show that official social media accounts were the key factor in controlling the infodemic, the underlying mechanisms of this factor were further explored through mediation analysis using Smart-PLS bootstrapping with 5000 subsamples. The bootstrap method examines the hypothesized relationships and the sampling distribution as a measure of accuracy, using random resampling to ensure consistency of results [81]. Mediation analysis was conducted to better understand the relationship between the information quality of official social media accounts and the infodemic, and the mediation effects of information cascades and social support are reported with 95% Confidence Intervals (CI). Our results revealed that, at the 95% CI level, when information cascades were taken as the mediator variable, the mediation effect was significant (β = -0.189, Boot CI (-0.227, -0.151)); when social support was taken as the mediator variable, the mediation effect was also significant (β = -0.333, Boot CI (-0.388, -0.280)). In general, the information quality of content posted by official social media accounts has both a direct effect on the infodemic and indirect effects mediated through both information cascades and social support. This indicates that official social media accounts helped contain the COVID-19 infodemic and, therefore, hypotheses H3 and H5 are supported. --- Moderating Analysis A two-factor ANOVA with interaction was used to conduct the moderation/interaction effect analysis with visualization [87]. The results of the moderation effects are shown in Figures 3 and 4. To further examine the effect of information posted by official social media accounts on the infodemic under different conditions, this study coded official social media information quality into two groups (1 = low level, 2 = high level). Similarly, private social media use was coded into two groups (0 = low usage, 1 = high usage), as shown in Figure 3. 
Then, a 2 (two groups of IQ) × 2 (two groups of private social media use) four-group ANOVA was performed, taking the different levels of information quality perceptions and private social media use as independent variables, and the infodemic as the dependent variable. The main effects of both information quality (F = 142.347, p < 0.001) and private social media use (F = 68.177, p < 0.001) were significant. The interaction effect of information quality and private social media use was also significant (F = 85.637, p < 0.001). The lines for high usage and low usage of private social media intersected, and the slope for high usage was greater than that for low usage (see Figure 3). This finding indicates that private social media usage positively moderated the relationship between information quality and the infodemic. Health literacy was likewise coded into two groups (0 = low level, 1 = high level), as shown in Figure 4. 
Then, a 2 (two groups of IQ) × 2 (two groups of health literacy) four-group ANOVA was performed (see Figure 4), taking the different levels of information quality perceptions and health literacy as independent variables, and the infodemic as the dependent variable. The main effect of information quality (F = 1182.015, p < 0.001) was significant. However, the main effect of health literacy was not significant (F = 0.040, p > 0.05). The significance of moderating effects cannot be judged simply from the significance of the product terms; it should be judged comprehensively by whether the lines in the interaction graph intersect. Therefore, the interaction graphs were drawn, as shown in Figure 4, revealing that the lines for the two levels of health literacy intersect, while the slope for the high level is smaller than that for the low level. This indicates that health literacy negatively moderated the relationship between information quality and the infodemic and, therefore, hypotheses H6 and H7 are supported. --- Predict Partial Least Squares (PLS) Model The PLS predict algorithm uses training and holdout samples to generate and evaluate predictions from PLS path model estimations, combining aspects of out-of-sample prediction and in-sample explanatory power [88]. The PLS predict results are shown in Table 5. The Q²_predict values of the PLS-SEM model outperform most LM benchmarks [89]. Except for the read experience indicator, all indicators in the PLS-SEM analysis have lower RMSE and MAE values than the LM benchmark [85], indicating that the structural model has high explanatory and predictive power. --- Discussion This study provides valuable insights into the effects of official social media accounts on the infodemic during the initial stage of COVID-19. 
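The interaction logic of these 2 × 2 analyses can be illustrated as a difference-in-differences of cell means: the moderation effect is the gap between the IQ effect under high versus low private social media use. The cell means below are hypothetical, chosen only to mirror the crossing-lines pattern described for Figure 3.

```python
# Hypothetical infodemic cell means for a 2 (IQ) x 2 (private social media use) design.
cell_means = {
    ("low_IQ", "low_use"): 3.2,
    ("high_IQ", "low_use"): 3.0,   # small drop in infodemic when IQ is high
    ("low_IQ", "high_use"): 3.8,
    ("high_IQ", "high_use"): 2.9,  # much larger drop when IQ is high
}

def interaction_contrast(m):
    """(IQ effect under high use) - (IQ effect under low use)."""
    effect_high_use = m[("high_IQ", "high_use")] - m[("low_IQ", "high_use")]
    effect_low_use = m[("high_IQ", "low_use")] - m[("low_IQ", "low_use")]
    return effect_high_use - effect_low_use

print(round(interaction_contrast(cell_means), 2))  # -0.7
```

A non-zero contrast corresponds to non-parallel (here, intersecting) lines in the interaction plot: the information-quality effect on the infodemic is steeper for heavy private social media users, which is the moderation pattern the ANOVA's significant interaction term captures.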
Recent studies have focused their efforts on examining the effects of the infodemic on citizens' psychological issues and mental health [6,34], and how private social media use has affected the infodemic [1,12,45]. For example, some studies have found that commercial media positively affects psychological anxiety, but that official government media has no effect on psychological anxiety [90]. However, the underlying mechanisms of how official social media accounts affect the infodemic have received little attention. Specifically, during the pandemic, it has not previously been understood how official social media accounts affect the infodemic by the mediation effects of information cascades and social support, and the moderation effects of private social media use and health literacy. Our results show that the information quality of content posted by official social media accounts and the social support provided have a significant negative effect on the infodemic. Information cascades have a significant positive effect on the infodemic. Mediation analyses were conducted to explore the underlying mechanisms of the relationship between IQ and the infodemic with results revealing that both information cascades and social support mediate the relationship between IQ and the infodemic. In addition, moderation analyses were completed to explore the underlying mechanisms of the relationship between IQ and the infodemic, with results indicating that private social media use and health literacy moderate the relationship between IQ and the infodemic. These findings demonstrate the underlying mechanisms of the relationship between official social media accounts and the infodemic. In the context of public health crises, citizens tend to seek information to alleviate uncertainty (e.g., public health, personal and family safety, and recovery efforts) [91]. 
Our findings show that official social media accounts helped control the infodemic and increased the social support provided to citizens; in other words, they alleviated citizens' uncertainty regarding COVID-19. It remains necessary, however, to guide citizens toward official public health organizations' websites and official social media accounts when seeking health information related to COVID-19 [92]. At the same time, the rational use of official social media should be promoted to prevent the dissemination of misinformation. Similarly, social media users should be trained to identify misinformation by relying on official information sources and by cultivating scientific digital health literacy [36]. Information cascades and social support were found to be important mediation variables in explaining how official social media accounts affected the infodemic during the first wave of COVID-19. When citizens were exposed to excessive information related to COVID-19, they tended to choose information that was useful to themselves.
How Official Social Media Affected the Infodemic among Adults during the First Wave of COVID-19 in China.
Meanwhile, they often sought social support (e.g., informational support and emotional support) to alleviate the uncertainty they experienced. When citizens made decisions by relying on the opinions of others or on social norms, information cascades occurred. During the first wave of COVID-19, commercial media circulated an overload of information, pushing epidemic information to users continuously [90]; reliable and authoritative information is therefore important for designing and conducting preventive measures that raise health-protective awareness [14]. 
This study shows that official social media can provide citizens with high-quality epidemic information, which can increase social support and reduce information cascades. This study also confirmed that greater use of social media can lead to more social support [19]. Different information sources were shown to have different effects on the infodemic [90]. Private social media use is a double-edged sword: prior studies found it to be a major source of rumors and misinformation during emergencies, yet it also plays a key role in communicating health information [93]. Our finding that private social media use positively moderates the relationship between IQ and the infodemic indicates that excessive use of private social media increases public anxiety and fuels an infodemic [19]. This finding therefore suggests that citizens should take breaks from private social media and use both official and private social media rationally during public health crises. Health literacy was also found to negatively moderate the relationship between IQ and the infodemic. When citizens face uncertainty, they are in a state of anxiety and depression, and their health literacy provides little help in identifying valid health information. --- Theoretical and Practical Implications This study constructed a theoretical model to uncover the underlying mechanisms of the relationship between official social media accounts and the infodemic during the first wave of COVID-19 in Mainland China. The proposed model contributes to previous studies by integrating two further variables (i.e., information cascades and social support) to investigate the infodemic problem accompanying the COVID-19 pandemic. 
We treated official social media accounts as a key factor in controlling the infodemic, with information cascades and social support as mediation variables, a mechanism that, to our knowledge, had not been examined in prior research. Similarly, we considered the moderation effects of private social media use and health literacy. Thus, this paper calls for more research into the underlying mechanisms and determinants of the infodemic. Policy implications can also be derived from this study for developing strategies to control future infodemics during public health crises. The outbreak of COVID-19 was accompanied by the mass dissemination of unvalidated information by private social media accounts; when authoritative information about COVID-19 was not published in a timely fashion, false information could spread further. This study provides evidence-based implications for controlling the COVID-19 infodemic. First, the public agencies that manage official social media accounts should improve the usefulness, timeliness, availability, and authoritativeness of the information provided by enhancing the professionalism of practitioners. Public agencies should also establish a release system to control information quality, use social media and the Internet rationally to encourage citizens to interact with public agencies [94], and pay greater attention to credible and authoritative sources and fact-checkers regarding the COVID-19 pandemic [95]. Secondly, official social media accounts should set an example for private social media users and commercial media accounts by forging a user-centered, fact-based, and collaborative response to the pandemic. Official social media accounts can not only alleviate citizens' anxiety and uncertainty through the dissemination of authoritative reports, but also publicize useful preventive measures and touching stories from the epidemic to increase social support, inspiring citizens to fight against COVID-19 together. 
In particular, official social media accounts should establish communication mechanisms for sharing pandemic information resources with influential private social media accounts. Official social media accounts can thus help manage citizens' stress and health risks; they should convey information to citizens with empathy, scientific and rational evidence, and personal experience, and encourage them to share the content with friends, family, and peers, so as to increase social support and reduce potential information cascades. Thirdly, when citizens face large amounts of media and health-related information, local governments should formulate rules to regulate the dissemination of pandemic information by private social media accounts. Health literacy may also help citizens better understand the reasons behind governments' and public health agencies' preventive recommendations and take protective and preventive actions quickly. In the later stages of COVID-19, citizens showed a good level of knowledge about the disease [96], but in the initial stages, citizens knew almost nothing about it. Hence, local governments and public health agencies should popularize common health knowledge, while citizens should enhance their health literacy to enable larger-scale psychological prevention of fake news [97]. Meanwhile, local governments should encourage citizens to take social responsibility and to take the initiative in pandemic prevention. --- Limitations and Future Studies This study has several limitations. First, the survey was administered during the first wave of the COVID-19 pandemic in Mainland China and relied on respondents' self-reported online data. It lacked in-depth interviews with participants, which could have deepened understanding of the importance of official social media accounts and of how they helped control the infodemic during the pandemic. 
Second, information cascades are in most cases calculated using big data, whereas this study measured them using four items. Future research may mine public comments from official social media accounts and analyze forwarding, interaction, and liking behaviors. Similarly, studies can analyze users' emotional tendencies to estimate what support is provided to citizens and to uncover how official social media accounts affect citizens' emotions. Third, only one round of the online survey was conducted. Fourth, some respondents might have been relatively calm and objective when participating in the survey, while others might have felt very anxious and uncomfortable when completing the questionnaire. Fifth, we only examined the effect of the information quality of official social media accounts on the infodemic; future studies could examine the impact of official social media accounts' response strategies on the infodemic. Finally, there have been several subsequent waves of COVID-19 outbreaks in China since the survey was conducted and, therefore, longitudinal and comparative studies can be conducted in the future. --- Conclusions This study provided empirical evidence on the effects of official social media accounts on the COVID-19 infodemic and offered insights into the underlying mechanisms of the infodemic by analyzing the essential role of the information quality of official social media accounts, the mediation effects of information cascades and social support, and the moderation effects of private social media use and health literacy. Our findings provide policy implications for controlling future infodemics and can help the public health agencies that manage official social media accounts improve their information quality, increase social support, and decrease information cascades. --- Data Availability Statement: The raw data presented in this paper are available from the authors, without undue reservation. --- Author Contributions: H.L. 
was responsible for conceptualizing and designing the study, acquisition, analysis, and interpretation of data, and writing and reviewing of the manuscript. Q.C. provided guidance about the research methods and reviewed the manuscript. R.E. reviewed and edited the manuscript. All authors have read and agreed to the published version of the manuscript. Funding: This research was funded by the National Social Science Fund of China (Grant Number 18CXW003). The information presented in this paper is the responsibility of the authors and does not necessarily represent the views of the National Office for Philosophy and Social Sciences. Institutional Review Board Statement: Ethical Approval was received from the Biomedical Ethics Committee of the School of Medicine at Xi'an Jiaotong University (NO. 2022-1278). All participants and/or their legal guardians gave informed consent prior to the collection of data. Informed Consent Statement: Not applicable.
Introduction Obesity is a major public health concern, particularly as it leads to increased risk of premature mortality and chronic diseases, including type 2 diabetes, cardiovascular disease, hypertension, stroke, and some cancers, as well as soaring healthcare costs [1][2][3][4]. Emotional eating, which refers to the tendency to overeat in response to negative emotions, has been studied extensively over the last decades as a risk factor for obesity and an impediment to weight loss [5][6][7][8][9][10]. Studies have employed various research methods to demonstrate how negative emotions, including sadness, anxiety, stress, or anger, are related to the urge to overeat. For example, laboratory studies indicate that priming a negative affect among obese binge eaters via exposure to a sad film induces overeating [11], and a meta-analysis of 36 ecological momentary field studies [12] confirmed an increase in negative emotions prior to binge eating episodes. A seminal study by Kaplan and Kaplan, focusing on the psychosomatic interpretation of obesity, posits that eating in response to negative emotions is a learned behavior that aims to diminish the negative state one is in [13]. Furthermore, research has found a link between emotional eating and weight gain [14] and suggests that enhancing emotional regulation skills, rather than caloric restriction alone, should be the focus of interventions aimed at weight loss [15]. Hence, it is important to describe the prevalence of emotional eating at a national level, the factors predicting it, and its corollaries (e.g., associated health-related behaviors), particularly as emotional eating has been linked to adverse health outcomes [10,16]. In this study, we describe emotional eating among a large U.S. sample of adults by individual and socioeconomic factors, health behaviors (e.g., fast-food intake, physical activity), and a key indicator of self-regulatory performance, namely, temporal discounting [17]. 
Findings help to elucidate factors that are related to emotional eating and might, therefore, inform future intervention programs focused on emotional regulation while eating. --- Materials and Methods The current study cross-sectionally examines the relationship of sociodemographic factors, lifestyle behaviors, and self-regulation (independent variables) with emotional eating (dependent variable). This is explored using data collected in 2011 from the Family Health Habits Survey (FHHS), which is described elsewhere [18]. Briefly, households from the Nielsen/Information Resources Inc. Consumer Panel were asked to participate in an internet-based survey (i.e., FHHS), which aimed to assess obesity and lifestyle behaviors in families [18]. In the present study, we utilize individual-level data on 5863 adults aged 21 years and above from the FHHS with information pertaining to the independent and dependent variables. The current study received ethics approval from the University of Haifa Institutional Review Board (IRB) as well as exempt status from the Morehouse School of Medicine IRB. Individual and socioeconomic variables consist of age (21-39, 40-49, 50-59, ≥60 years), race/ethnicity (non-Hispanic White, non-Hispanic Black, Hispanic, other), annual household income (<$30,000, $30,000-44,999, $45,000-69,999, ≥$70,000), household size (continuous), college education (yes/no), marital status (married: yes/no), and self-reported health status (low, medium, high). In addition, body mass index (BMI) was computed using the standard formula (kg/m²) based on reported weight and height. BMI was then dichotomized based on obesity (BMI ≥ 30): yes/no [4]. Additionally, participants' sex was missing for a large proportion (73.9%) of participants [19]. Consequently, a multiple-imputation approach, in which the covariates along with participants' height are considered, was used to impute the missing sex variable [20]. 
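The BMI computation and obesity dichotomization described above follow the standard formula; a minimal sketch (the example weight and height are arbitrary illustrative values):

```python
# Standard BMI formula (kg/m^2), dichotomized at BMI >= 30 for obesity.
def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

def obese(weight_kg, height_m):
    return bmi(weight_kg, height_m) >= 30

print(round(bmi(95, 1.75), 1))  # 31.0
print(obese(95, 1.75))          # True
print(obese(60, 1.70))          # False (BMI ~20.8)
```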
This approach is consistent with a previous FHHS study [19]. The physical activity measure is described elsewhere [21]. Briefly, this measure is adapted from the International Physical Activity Questionnaire (IPAQ) [22], where the metabolic equivalent for task (MET) minutes per week (min/week) are computed based on the frequency, intensity, and duration of the activity [21]. MET min/week were then dichotomized according to the Health and Human Services Physical Activity guidelines (≥500 MET min/week): yes/no [23]. In addition, the frequency of fast-food consumption (eat-in and take-out) was based on the reported times per week frequenting these establishments [24]. Participants were also queried regarding the frequency of eating at sit-down restaurants. Both variables were categorized into the following three groups for consistency with previous research: 0-1; 2-3; and ≥4 times per week [19]. We used an established proxy of self-regulatory performance, namely, delay discounting [17,25]. Delay discounting measures assess the ability to exert patience, that is, the extent to which one is willing to forego a smaller, more immediate reward for a larger, but later, reward. Thus, delay discounting measures gauge the ability to suppress present-moment impulses in the service of valued longer-term goals, with higher patience indicating higher self-regulatory performance [26,27]. In this study, we utilized a survey question on monetary tradeoffs related to delay discounting. Specifically, participants were asked whether they would prefer to receive $10 in 30 days or larger monetary sums ($12, $15, $18) in 60 days [19]. Based on responses, we calculated delta values, indicative of one's ability to delay immediate gratification, using the standard exponential discount model [28,29].
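The MET min/week construction and the ≥500 cutoff described above can be sketched as follows. This is a minimal illustration rather than the FHHS scoring code: the MET weights (3.3 walking, 4.0 moderate, 8.0 vigorous) follow the common IPAQ scoring convention, and all function and variable names are hypothetical.

```python
# Sketch of an IPAQ-style MET min/week computation (illustrative only).
# MET weights per activity type follow the common IPAQ scoring convention.
MET_WEIGHTS = {"walking": 3.3, "moderate": 4.0, "vigorous": 8.0}

def met_minutes_per_week(activities):
    """Sum MET min/week over (type, days_per_week, minutes_per_day) records."""
    return sum(MET_WEIGHTS[kind] * days * minutes
               for kind, days, minutes in activities)

def meets_guidelines(activities, threshold=500):
    """Dichotomize at >=500 MET min/week, mirroring the HHS guideline cutoff."""
    return met_minutes_per_week(activities) >= threshold

# Example: three 30-minute walks plus two 20-minute vigorous sessions per week.
sample = [("walking", 3, 30), ("vigorous", 2, 20)]
print(round(met_minutes_per_week(sample), 1))  # 617.0 (3.3*90 + 8.0*40)
print(meets_guidelines(sample))                # True
```

The dichotomized flag, not the raw MET total, is what enters the regression models as the physical activity variable.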
As described elsewhere [29], delta values, computed by dividing $10 by the lowest monetary sum one is willing to receive in 60 days, were grouped into three categories: (1) patience (delta = 0.83); (2) medium patience (delta = 0.56-0.67); and (3) impatience (delta < 0.56). Whereas patience served as the reference group, the medium patience and impatience categories referred to varying levels of one's (in)ability to delay gratification. Participants were asked to state the frequency with which they feel the desire to eat when emotionally upset or stressed. This question was adapted from the emotional eating scale of the Dutch Eating Behavior Questionnaire (DEBQ) [30]. Specifically, participants were asked: "When you are emotionally upset or stressed, how often do you feel the desire to eat?". They were then asked to choose one of the following verbal expressions of frequency [31]: "never", "rarely", "sometimes", "often", and "very often". Due to its ordinal nature, this variable was entered into ordered logistic regression models as the dependent variable. The relationship among socioeconomic factors, self-regulation, lifestyle behaviors, and emotional eating was examined utilizing two ordered logistic regression models. The first model includes socioeconomic variables and self-regulation as independent variables and emotional eating as the dependent variable. The second model adjusts for the variables in the first model with the addition of health and lifestyle behavior variables (e.g., obesity, physical activity, frequency of fast-food consumption). In both models, the ordered regression estimates the odds of reporting a higher emotional eating category as a function of the independent variables. Odds ratios (OR) and 95% confidence intervals (CI) were computed. Stata version 15.1 (StataCorp LP, College Station, Texas, USA) was utilized for the analyses, with alpha below 0.05 regarded as statistically significant.
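The delay-discounting delta computation and grouping described above can be sketched as follows (a minimal illustration; function and label names are hypothetical):

```python
# Delay-discounting sketch: respondents choose $10 in 30 days vs. a larger
# sum ($12, $15, or $18) in 60 days. delta = $10 / lowest later sum accepted,
# so a HIGHER delta (accepting a smaller premium) indicates MORE patience.
def delta(lowest_accepted):
    """Exponential-discounting delta for the lowest 60-day sum accepted."""
    return round(10 / lowest_accepted, 2)

def patience_category(lowest_accepted):
    """Group respondents using the paper's cutoffs; None = rejected every offer."""
    if lowest_accepted is None:        # would not wait even for $18
        return "impatience"            # delta < 0.56
    d = delta(lowest_accepted)
    if d >= 0.83:                      # accepted the $12 offer
        return "patience"
    return "medium patience"           # delta in 0.56-0.67 ($15 or $18)

print(delta(12), patience_category(12))    # 0.83 patience
print(delta(18), patience_category(18))    # 0.56 medium patience
print(patience_category(None))             # impatience
```

Patience (delta = 0.83) serves as the reference group in the regression models, so the reported ORs for the other two categories quantify the extra emotional-eating risk associated with weaker delay of gratification.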
--- Results Participants' baseline characteristics are described in Table 1. Briefly, 59.2% of individuals were aged 50 years and older, with the largest (81.6%) racial/ethnic group being non-Hispanic White, followed by non-Hispanic Black (7.3%), and Hispanic (5.2%). Less than half (45.6%) were college educated, and 62.7% earned an annual household salary of below $70,000. Regarding participants' lifestyle variables, 33.6% were obese, 21.5% met physical activity guidelines, and 25.9% frequented fast-food establishments twice a week or more. Moreover, 27.1% were regarded as being impatient; that is, having difficulties in delaying immediate gratification. Finally, 20.5% of participants indicated a tendency for emotional eating often or very often. Figure 1 depicts the relationship between socioeconomic factors and self-regulation to emotional eating. Analysis reveals that being female, non-Hispanic White, and of younger age were all related to a higher likelihood of emotional eating. For example, non-Hispanic Blacks and Hispanics were less likely (OR = 0.58, 95% CI 0.48-0.70; OR = 0.64, 95% CI 0.52-0.79; respectively) to report higher emotional eating rates than their non-Hispanic White counterparts. Further, having a college education was significantly associated with emotional eating (OR = 1.23; 95% CI 1.12-1.36). Additionally, those who were impatient and had medium levels of patience were 19% (95% CI 1.07-1.33) and 18% (95% CI 1.05-1.33), respectively, more likely to have higher emotional eating scores. Marital status and annual household income, however, were not significantly related to emotional eating. Figure 2 presents the association between lifestyle behavior variables and emotional eating while adjusting for co-variables.
Analysis reveals that more frequent fast-food consumption and obesity were each significantly related to emotional eating. For example, those frequenting fast-food establishments 2-3 times a week were approximately 24% (95% CI 1.10-1.40) more likely to have a higher emotional eating score in comparison to those with a fast-food consumption of 0-1 times weekly (reference group). Full-service restaurant consumption and physical activity, as well as self-rated health, were not related to emotional eating. (The model adjusts for age, sex, race/ethnicity, marital status, annual household income, education, household size, and self-regulation.) --- Discussion Obesity is a risk factor for chronic diseases and premature mortality [4]. Emotional eating, the tendency to eat in excess when experiencing negative emotions, is related to weight gain and thus obesity risk [14]. Emotional eating also hinders weight loss and weight maintenance [5][6][7][8][9][10]. In the current study, we seek to describe rates of emotional eating among a national sample of adults, while illuminating potential contributing factors to this phenomenon. Findings suggest that approximately one-fifth of adults reported a tendency for emotional eating often or very often, thereby potentially contributing to the obesity epidemic in the U.S. [1,15].
It should be noted that emotional eating was determined via a single survey item assessing the desire to eat when upset or stressed. While it might have been preferable to utilize the complete 13-item emotional-eating subscale of the Dutch Eating Behavior Questionnaire [30], this information was not available in the dataset. Beyond describing prevalence rates, the present study explores sociodemographic factors related to emotional eating. Specifically, multivariable analysis indicates that younger adults (21-39 years old) were markedly more likely to be emotional eaters. One possible explanation for this finding is that older adults might have a tendency to adhere to routine meal schedules (i.e., breakfast, lunch, dinner) [32], which facilitates meal planning and enhances eating self-efficacy in social situations (e.g., when tempting food is in front of them). Moreover, eating disorders (which are associated with high rates of emotional eating) are more prevalent among younger rather than older adults [33]. Notably, non-Hispanic Blacks and Hispanics reported lower emotional-eating rates than their non-Hispanic White counterparts did. These findings are supported by research suggesting that despite a high prevalence of obesity among African Americans and Hispanics [34], the prevalence of disordered eating behaviors (e.g., emotional eating) among these minority groups is relatively low [35][36][37].
Scant research, however, has specifically examined the underlying mechanisms as to why the prevalence of emotional eating might differ by race/ethnicity. Diggins and colleagues, for example, examined the relationship between stress and emotional eating among African-American female college students [38]. They did not explore, however, how stress might have differentially impacted emotional eating among Whites or Hispanics. It could be plausible that ethnic minorities are more resilient to life stressors [39], and thus less prone to emotional eating in comparison to their ethnic majority counterparts. This supposition, however, warrants future empirical research. In addition, we examined the relationship between emotional eating and lifestyle behaviors, such as fast-food consumption and physical activity. Study findings indicate that unhealthy lifestyle behaviors (e.g., fast-food consumption) are related to emotional eating while more healthful behaviors (e.g., physical activity) are not. Prior evidence suggests that low distress tolerance (inability to cope with negative emotions) is related to emotional eating [40]. Moreover, the link found between fast food and emotional eating is consistent with previous studies showing that emotional eaters often have a preference for energy-dense foods with abundant saturated fat [41,42]. With regard to physical activity, our findings corroborate a study by Koenders among 1562 U.S. adults that observed no significant association between emotional eating and exercise [14]. Thus, while emotional eating and insufficient physical activity are each related to weight gain and maintenance [10,43], they appear not to be directly linked to each other. Furthermore, current study findings underscore the independent and significant relationship between patience time preferences and emotional eating.
That is, those who had difficulties delaying immediate gratification for a larger delayed reward were markedly more likely to eat when emotional than their more patient counterparts were. This finding is consistent with psychological research linking emotional eating behaviors to impulsiveness and self-control [44,45]. These studies, however, measured self-control via a self-report instrument asking participants to rate their ability to resist temptation [46], which might be influenced by conscious or unconscious factors to reinforce self-image [27]. While this approach is widely accepted, eliciting self-control through an objective task, such as in psychological experiments (e.g., crossing out the letter "e" in a text) [47], or through the multiple price list methodology used in economics, will likely yield a more valid assessment [48]. Hence, in the present study, we utilize the latter approach (i.e., multiple price list methodology), which provides a more robust assessment of self-regulation [49]. The current study has several limitations that should be noted. Its design is cross-sectional; therefore, a temporal (and subsequently causal) relationship between the independent variables (e.g., lifestyle behaviors) and the dependent variable (emotional eating) cannot be substantiated. Thus, subsequent longitudinal research is needed to establish a cause-effect relationship. Moreover, study variables such as lifestyle behaviors and emotional eating were self-reported; thus, under- or over-reporting could have occurred due to social desirability [50]. Nonetheless, since standard measures were used to collect information from all participants, any misclassification is likely non-differential, which would bias point estimates toward the null [51]. In addition, the sex variable was missing for a large proportion of the sample; thus, we utilized a multiple-imputation approach to address this limitation. Finally, the data were derived from a U.S.
survey that is not nationally representative, and the racial/ethnic minority composition in this sample is lower than that in the U.S. population at large. --- Conclusions The current study significantly contributes to the literature by determining the prevalence of emotional eating among a national sample of U.S. adults and examining predictive factors of this behavior. Findings reveal that approximately one-fifth of U.S. adults report emotional eating behavior often or very often, and that it is more common among younger adults, non-Hispanic Whites, those with a college degree, and those with difficulty delaying immediate gratification. Furthermore, emotional eaters might have an increased tendency toward obesity and toward eating at fast-food establishments more often. Future longitudinal research among large samples is clearly warranted to determine cause-effect relationships. Moreover, as emotional eating is related to obesity and other unhealthy behaviors, program planners might need to develop targeted interventions aimed at addressing these maladaptive health behaviors (e.g., fast-food intake) alongside improving emotional-regulation skills, with the goal of obesity and chronic disease prevention. --- Data Availability Statement: The data used for this study are not publicly available. For data requests, please contact the Nielsen Consumer Panel. --- Author Contributions: R.E.B. and K.S. led the writing, conceived and designed the study, and contributed to the analytic approach as well as the interpretation of the study findings. Q.L. contributed to the analytic approach and led the statistical analyses. Q.L., R.O., J.D., A.L.Y., B.M.F., and M.H. participated in the study design, interpretation of results, and critical revisions of the manuscript drafts. All authors have read and agreed to the published version of the manuscript. Funding: The current study did not receive funding.
Data collection efforts were supported by grant #69294 from the Robert Wood Johnson Foundation through its Healthy Eating Research program. --- Institutional Review Board Statement: The current study received ethics approval from the University of Haifa IRB and exempt status from the Morehouse School of Medicine IRB. Informed Consent Statement: Study participants are from the Nielsen Consumer Panel, for which participation is voluntary. Individuals opt in to participate in the panel. More detailed information on privacy, data use policy, and consent appears here: https://www.nielsen.com/us/en/legal/privacy-statement/privacy-policy-tv-only/ --- Conflicts of Interest: The authors declare no conflict of interest.
Background: Emotional eating, the tendency to overeat in response to negative emotions, has been linked to weight gain. However, scant evidence exists examining the prevalence and correlates of emotional eating among large samples of adults in the United States (U.S.). Hence, we examine the relationship of individual and socioeconomic factors, health behaviors, and self-regulation with emotional eating patterns among U.S. adults. Methods: Cross-sectional analysis of 5863 Family Health Habits Survey participants. Multivariable ordered logistic regression was employed to examine the relationship between the frequency of the desire to eat when emotionally upset (never, rarely, sometimes, often, and very often) and the independent variables. Results: Analysis reveals that 20.5% of the sample tended to emotionally eat often or very often. Being female, non-Hispanic White, and of younger age were all related to a higher likelihood of emotional eating. Additionally, inability to delay gratification (impatience) was related to an 18% increased likelihood (95% confidence interval (CI) 1.05-1.33) of emotional eating. Finally, emotional eating was significantly related to more frequent fast-food consumption. Conclusions: Program planners might need to develop targeted interventions aimed at enhancing emotional regulation skills while addressing these less healthful behaviors (e.g., fast-food intake) with the goal of obesity and chronic disease prevention.
Introduction Aggression in high school students is a problem in many countries [1][2][3][4][5][6][7][8], and adolescents are especially vulnerable to its consequences [9]. Bullying, victimization, and fighting illustrate different types of involvement in violence during adolescence. Bullying involves negative physical or verbal action that has hostile intent, causes distress to the victim, and includes a power differential between bullies and their victims [6]. According to Olweus, it is also bullying when a person is teased repeatedly in a way he/she does not like. Victimization by bullying occurs when a person is made the recipient of aggressive behavior [10]. Typically, it is someone less powerful than the perpetrator. Fighting is an aggressive behavior in which, in most cases, the people involved are of a similar age and equal strength. Demographic and social factors, academic achievement, and substance use (alcohol drinking, tobacco smoking, drug use) have shown associations with violent behavior in adolescents [11,12]. According to a report released in 2016, the prevalence of fighting among adolescents aged 15 from Europe and North America varies between 22% and 69% in boys and between 9% and 25% in girls [13]. Physical fighting was strongly associated with alcohol consumption and drug use [14][15][16]. The social developmental model states that youth behavior is learned through a continuous process starting in childhood. The social agents that play an important role in behavioral development are families, schools, peers, and communities [17][18][19]. Adolescents who maintain a stronger, healthier relationship with their families and their education are less likely to participate in unacceptable behaviors, such as violence [20,21]. Aspects of the relationship with parents, including poor parental monitoring and low parental support, have also been identified as risk factors for violent behavior among adolescents [22,23].
Physical fighting has also been associated with poor peer relationships [24,25]. Among the most important factors shaping adolescent behavior are the time adolescents spend with their peers and the relationships they establish with them. Numerous studies have shown that adolescents tend to engage in behaviors similar to those of their peers (smoking, drinking, fighting, and/or engaging in sexual behavior) [26][27][28]. Many adolescents have at least one friend who uses substances, but when most of their peers engage in behaviors such as drinking, smoking, or even illegal activities, the risk of them doing the same increases. While engagement in peer group activity is normative for adolescents, substance use is particularly elevated when a person has high support from peers but low support from parents [28]. Besides the immediate effects, bullying, victimization, and fighting have long-term negative consequences for the bullies, victims, fighters, and those who observe the interaction [29,30]. Some studies have shown that children who are bullies tend to still be bullies as adults. Additionally, an interesting observation is that adult bullies with children of their own tend to raise them as bullies [30]. Because it relates to students, school violence has received substantial media, research, and political attention [31]. In Romania, no systematic studies on aggressive behavior among high school students have been published so far. According to the Health Behaviour in School-aged Children (HBSC) study conducted in Romania between 2005 and 2006 [32], 6% of the girls and 24% of the boys aged 15 had been involved in a physical fight at least three times in the last 12 months. Another survey performed by HBSC between 2009 and 2010 [33] found that 4% of the girls and 19% of the boys aged 15 had been involved in a physical fight at least three times in the last 12 months.
However, there is increased concern about violent behaviors among adolescents in the school setting and at the community level; hospital, primary health care, and ambulatory data show increasing numbers of adolescent victims of aggressive behaviors [34,35]. The 2007 [36] and 2011 [37] European School Survey Project on Alcohol and Other Drugs (ESPAD), which examined 35 and 36 European countries, respectively, including Romania, provided an opportunity to study aggressive behavior in a large national sample. The first aim of this paper is to examine patterns of aggressive behavior of 15-16-year-old high school students in Romania and to compare data collected in two nationally representative samples from 2007 and 2011. The second aim of the study is to identify factors (gender, social, behavioral, and school performance) associated with physical fights among adolescents in Romania. We hypothesized that physical fighting is associated with different types of factors, such as demographic (gender), social (relationship with parents and friends, parental control), school performance (grades), problem behavior (truancy), and substance use. --- Materials and Methods --- Population, Sampling Design and Representativeness The ESPAD target population is defined as regular students who turn 16 during the calendar year of the survey and are present in the classroom on the day of the survey [36,37]. This definition includes students who are enrolled in regular, vocational, general or academic studies but excludes those enrolled in either special schools or special classes for students with learning disorders or severe physical handicaps. Part-time and evening students and military high schools were also excluded. Sampling in the ESPAD project is based on the class as the final sampling unit. A total of 104,828 students participated in the 2007 ESPAD study and 103,076 students in the 2011 study. More details about the methodology are available in the ESPAD Reports [36,37].
Among all Romanian inhabitants born in 1991 and 1995, roughly 87% and 94%, respectively, were still enrolled in regular schools. The remaining students were enrolled in either a vocational, theological or military school, or in schools where the teaching language is not Romanian. The Romanian sampling frame included 9th and 10th graders and covered approximately 99% of the ESPAD target population (the remaining students were in the 8th grade). The sampling frame was nationally representative for students from regular schools and covered all 42 counties. A simple random sampling procedure was applied to a list of 1459 schools in 2007 and 1499 schools in 2011, in order to obtain an adequate geographical distribution. Both lists were provided by the Ministry of Education. These lists did not include information about school size, meaning that all schools had the same probability of being sampled. From these schools, one class per grade was randomly selected to participate without class size being considered. The samples are representative for Romanian students born in 1991 and 1995 enrolled in grades 9 and 10 at regular schools. Using the detailed information about school and class size provided by the schools contacted, a weight has been introduced to adjust for school size. --- Organization of the Study Once classes had been selected, the parents received information about the study in order to give their active consent; the schools received a folder with methodological information and the headmasters were asked to make plans for the data-collection procedure. The questionnaires and response envelopes were distributed by ordinary post to the research assistants. Research assistants collected data in the classrooms where the students answered the questionnaires anonymously. They received standard instructions and individual sealable response envelopes to put their questionnaires in. 
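The equal-probability school sampling and post-hoc size adjustment described above can be sketched as follows. The weight formula here is an illustrative assumption (proportional to the number of classes in a school, normalized to a mean of 1), not the exact ESPAD weighting; all names are hypothetical.

```python
import random

# Sketch: equal-probability school sampling with a post-hoc size adjustment.
# Because each school is equally likely to be drawn but contributes only one
# class per grade, students from larger schools are under-represented, so a
# size-proportional weight (assumed form, for illustration) compensates.
def sample_schools(school_ids, n, seed=42):
    """Simple random sample of n schools from the sampling frame."""
    rng = random.Random(seed)
    return rng.sample(school_ids, n)

def design_weight(classes_in_school, mean_classes):
    """Weight proportional to school size, normalized to average 1."""
    return classes_in_school / mean_classes

frame = [f"school_{i}" for i in range(1499)]   # the 2011 frame had 1499 schools
chosen = sample_schools(frame, 10)
print(len(chosen), len(set(chosen)))           # 10 10 (distinct schools)

sizes = {"A": 4, "B": 8, "C": 12}              # hypothetical class counts
mean = sum(sizes.values()) / len(sizes)
print([round(design_weight(s, mean), 2) for s in sizes.values()])  # [0.5, 1.0, 1.5]
```

Analyses on the resulting student records would then use these weights throughout, which is why the paper reports that all statistics were computed on weighted data.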
The completed questionnaires were brought by the research assistants to the county center where the data were entered. This study was approved by the Research Ethics Committee of Victor Babes University of Medicine and Pharmacy (No. 03/2013). --- School and Student Participation Students and schools were informed that participation in the survey was voluntary. The overall response rate was 84% in 2007 and 79% in 2011. Only 2% of the research assistants reported that some of the students found the questionnaire difficult to complete. A total of 2289 Romanian students were included in the final database in 2007 and 2770 in the 2011 database. --- Instrument, Measurement and Data Processing The questionnaire was translated by a team of professional translators, after which it was back-translated and reviewed by a psychiatrist and public-health specialists. The questionnaire was pre-tested at ten schools, which led to some modifications. Cronbach's alpha reliability coefficients were determined for the main parts of the questionnaire investigating social support, substance use, violence, etc. Values ranged from 0.77 to 0.81, indicating that participants responded consistently to questionnaire items. Aggressive behaviors were assessed through the following questions: how many times during the last 12 months he/she had experienced a physical fight, hit one of the teachers, got mixed into a fight at school or at work, took part in a fight where a group of friends were against another group, hurt somebody badly enough to need bandages or a doctor, used any kind of weapon to get something from a person, participated in a group teasing an individual, participated in a group bruising an individual, participated in a group starting a fight with another group, or started a fight with another individual.
Victimization was assessed through questions such as: how many times during the past 12 months have you been individually teased by a whole group of people, bruised by a whole group of people, in a group that was attacked by another group, or individually involved in a fight started by someone else. The use of tobacco, alcohol, and illicit drugs was assessed through questions that aimed to establish whether these substances had ever been used by the participants, the age of first use, and consumption during the past 30 days. Binge drinking was assessed by asking on how many days out of the last 30 the respondent had five or more drinks in a row. Answers to all the questions above were dichotomized as "not at all" versus "once or more times". Relationship with parents and perceived parental behavior were assessed as follows: relationship with parents (satisfied, neither nor, not satisfied, not at all satisfied, there is no such person); family control was assessed by the questions "Do your parent(s) set definite rules about what you can do at home?", "Do your parent(s) set definite rules about what you can do outside the home?", "Do your parent(s) know whom you are spending your evenings with?", "Do your parent(s) know where you are in the evenings?" (almost always, often, sometimes, seldom, almost never), and "Do your parent(s) know where you spend Saturday nights?" (always, quite often, sometimes, usually do not know); emotional support and caring from mother and/or father (almost always, often, sometimes, seldom, almost never). One item analyzed relationship with friends (satisfied, neither nor, not satisfied, not so satisfied, not at all satisfied, there is no such person) and two items assessed emotional support and caring from the best friend (almost always, often, sometimes, seldom, almost never). Students were also asked about their school performance, mainly their grades at the end of the last term, and about absenteeism during the last 30 days.
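The internal-consistency figures reported for the questionnaire (Cronbach's alpha between 0.77 and 0.81) can be reproduced in principle with the standard formula alpha = (k/(k-1)) * (1 - sum of item variances / variance of total scores). The sketch below uses made-up responses, not the survey data.

```python
# Cronbach's alpha from a respondents-by-items score matrix (made-up data).
def variance(xs):
    """Population variance; any variance convention works here if it is
    applied consistently to both the item columns and the total scores."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(rows):
    """rows: one inner list of item scores per respondent."""
    k = len(rows[0])                                    # number of items
    items = [[row[i] for row in rows] for i in range(k)]
    item_var = sum(variance(col) for col in items)      # sum of item variances
    total_var = variance([sum(row) for row in rows])    # variance of totals
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical 4-item scale answered by five respondents.
scores = [
    [1, 1, 2, 1],
    [2, 2, 2, 3],
    [3, 3, 4, 3],
    [4, 4, 4, 5],
    [5, 4, 5, 5],
]
print(round(cronbach_alpha(scores), 2))  # 0.98 (items move together closely)
```

Values in the 0.77-0.81 range, as reported for this questionnaire, are conventionally taken as acceptable-to-good internal consistency for multi-item survey scales.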
The variable "How often during the last 12 months have you experienced a physical fight?" was dichotomized, and the new variable was grouped as follows: never versus one or more physical fights during the past 12 months. This question was introduced for the first time in the 2011 survey. The data were entered manually in each county during a five-week period and then centrally merged by the National School of Public Health, Management and Professional Development, Bucharest, Romania. --- Statistical Analysis All analyses were performed on weighted data. The results are presented as absolute and relative frequencies. All analyses were conducted with Stata 9.2 (StataCorp, College Station, TX, USA) using the svy commands. Descriptive statistics were conducted using frequencies and proportions. Chi-square tests were performed to compare values between 2007 and 2011. A logistic regression analysis was used to estimate factors associated with physical fights experienced during the previous 12 months. A p < 0.05 was considered statistically significant, and odds ratios (OR) with their respective 95% confidence intervals (CI) were calculated. --- Results A total of 2289 students (1009 males [44.08%] and 1280 females [55.92%]) were included in the survey in 2007, and 2770 students (1279 males [46.17%] and 1491 females [53.83%]) in 2011. The present study revealed that 1000 students (35.87%) had experienced a physical fight during the previous 12 months. Univariate analysis showed important differences between students who experienced a physical fight and those who did not (Table 1).
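The univariate machinery used here (chi-square tests and odds ratios with 95% CIs) can be sketched for a single unweighted 2x2 table using only the standard library. The counts below are made up; for a 2x2 table the statistic has one degree of freedom, so the upper-tail p-value is erfc(sqrt(x/2)).

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

def chi2_pvalue_1df(x):
    """Upper-tail p-value for chi-square with 1 df (square of a N(0,1))."""
    return math.erfc(math.sqrt(x / 2))

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR = (a*d)/(b*c) with a Wald 95% CI computed on the log scale."""
    log_or = math.log(a * d / (b * c))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return (math.exp(log_or),
            math.exp(log_or - z * se),
            math.exp(log_or + z * se))

# Hypothetical table: fighting (yes/no) by gender (male/female), made-up counts.
a, b, c, d = 600, 680, 400, 810
x = chi2_2x2(a, b, c, d)
or_, lo, hi = odds_ratio_ci(a, b, c, d)
print(round(x, 1), chi2_pvalue_1df(x) < 0.001)  # large statistic, tiny p
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

The survey itself used Stata's svy commands, which additionally fold in the design weights; this unweighted sketch only illustrates the underlying formulas.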
The following variables were significant factors associated with physical fighting: male gender, poor relationships with mother and father, parent(s) do not know where adolescents spend their Saturday nights, parent(s) do not know where and with whom the adolescents spend their evenings, poor caring and emotional support from parent(s), serious problems with parents, poor caring and emotional support from best friend, serious problems with friends, low school grades, high truancy, marijuana lifetime use, current smoking and binge drinking during the previous 30 days. We did not find any association between physical fight experienced during the last 12 months and definite rules set by parent(s) regarding what adolescents can do at home and outside home. Using stepwise logistic regression, the most parsimonious multivariate logistic model was produced for factors associated with physical fight experienced during the last 12 months (Table 2). The following factors associated with physical fight remained: male gender, parent(s) do not know where and with whom the adolescents spend their evenings, poor parental caring, serious problems with friends, low school grades, high truancy and binge drinking during the previous 30 days. A decrease in almost all aggressive behaviors was noticed in 2011 compared to 2007 (Table 3). Statistically significant differences were observed for: taking part in a fight where a group of friends went against another group, participating in a group teasing an individual, participating in a group starting a fight with another group (only for females), starting a fight with another individual. The only violent behavior that increased in 2011, compared to 2007, was using any kind of weapon to get something from a person, but this was statistically significant only for females. Regarding victimization, we also found a decrease of prevalence in 2011 compared to 2007. 
Statistically significant differences were observed for: being individually teased by a whole group of people, being bruised by a whole group of people (only for males), being in a group that was attacked by another group, and being individually involved in a fight started by someone else. --- Discussion The ESPAD surveys among high school students provided us with an opportunity to study aggressive behavior in two large national samples in 2007 and 2011. The present study revealed that 35.87% of the students had experienced a physical fight during the previous 12 months. This result was higher than the prevalence of physical fighting for the entire European ESPAD database [38], which was 31.25%, the lowest value being recorded in Denmark (19.68%) and the highest in Malta (47.74%) [37]. The 2011 questionnaire included a new variable regarding the physical fighting experienced by the students during the previous 12 months. We investigated different types of factors associated with physical fighting: demographic (gender), social (relationship with parents, parental control, emotional support, and relationship with friends and their support), school performance (grades, skipped classes) and substance use (marijuana use, smoking and binge drinking). The present study demonstrated that the combination of male gender, binge drinking during the previous 30 days, having serious problems with friends, parent(s) who do not know where and with whom the adolescents spend their evenings, poor parental caring, low school grades, and high truancy was predictive of physical fighting in this adolescent population. However, the results do not provide information about a causal relationship. Similar to other studies [16,[39][40][41][42][43], physical fighting was more prevalent in boys than in girls. Boys usually engage in undisguised violence to gain influence, money or power.
Girls resort to relational aggression and may become violent when it comes to emotional situations, such as peer and/or romantic relationships, family arguments or outsiders' instigation [44][45][46]. According to our findings, students with poor school performance (grades between 6 and 6.99) were the most likely to experience physical fighting, compared to those with the highest school performance (grades between 9 and 10). The multivariate analysis showed they were more than twice as likely to engage in a physical fight (OR = 2.16, 95%CI: 1.29-3.62, p = 0.002). This is similar to other studies' findings [47,48]. Absenteeism has also been found to be associated with youth violence [14,49,50]. In our logistic regression model, students who had missed 5-6 days of school in the last 30 days were almost seven times more likely to engage in physical fighting (OR = 6.71, 95%CI: 2.20-20.44, p < 0.001). The development of bullying and victimization might be influenced by individual and family factors. Aggressive behavior research has shown that children's socialization experiences within the family play a major role in the development of aggressive behaviors [36]. The following family influences on the development of aggression have been studied: family demographics (income, family type), parenting techniques (punitive and inconsistent discipline), and relationships between parent and child (positive and negative interactions) [37]. We found that the adolescents who are more likely to experience physical fights (OR = 2.22, 95%CI: 1.47-3.36, p < 0.001) usually have parents who rarely know with whom their children spend their evenings. Poor parental care and serious problems with friends were another two important predictors in our model. Certainly, a wide variety of factors contribute to today's adolescents' exposure to violent behaviors, including family structure, social environment and peer behavior.
Two of the most common correlates of violent behavior are alcohol and drug use [51,52]. For example, alcohol may suppress inhibitions against violent behavior or may affect the brain in such a way as to produce aggressive behaviors [53,54]. A competing theory proposes the reverse causal relationship, i.e., people who plan on being violent may drink to give themselves courage or an excuse for the violence [55][56][57]. Finally, a third theory states that drug use, alcohol use and violence are all outcomes of an unobserved third factor, for example, a risk-taking personality [39,51,58]. Risk taking is frequent during adolescence and is associated with adverse outcomes including substance use. It is likely to be influenced by an individual's cognitive development, social development, and experiences with dangerous situations [59]. The inability to recognize warning signs in dangerous situations can make drinkers easy targets for perpetrators [52]. Our study showed that alcohol consumption and drug use were significant predictors of developing aggressive behavior. We observed declines in almost all aggressive behaviors in 2011 compared to 2007. This trend is also confirmed by the two HBSC studies that took place in the same periods in Romania [32,33]. After Romania's accession to the EU in January 2007, new legislation was enacted and previous rules were reinforced in areas related to youth violence. For instance, the Romanian Parliament adopted two laws on improving safety in schools: Law No. 35 [60,61]. This new legislation may have played a role in reducing violence, but there is no proof of causality. --- Strengths and Limitations of the Study There are certain limitations of this study that must be considered when interpreting the results.
First, the findings reported here are only relevant to high school students from Romania who turn 16 in the calendar year of the survey and may not be generalizable to other adolescents of the same age group who are not enrolled in scholastic institutions. Second, survey methods are frequently criticized because they rely on the validity of self-reports of sensitive and highly stigmatized behavior; thus, error based on self-reported behavior might have been introduced. Third, adolescents who were not available to complete the questionnaire due to truancy or dropout are likely to be at higher risk for aggressive behavior and other risk behaviors. Despite the limitations mentioned above, the study has strengths. We used a standardized questionnaire employed in other European countries in similar settings. The prevalence estimates we obtained are likely to closely represent the prevalence of aggressive behavior amongst adolescents attending school, as we used probability methods for selecting the sample. --- Conclusions Physical fighting amongst the young adolescents that we evaluated was higher than the prevalence of physical fighting for the entire European ESPAD database [38], and was associated with several factors. A combination of male gender, binge drinking, problematic relationships with friends and family members, low school grades, and absenteeism was found to be associated with the violent behaviors of adolescents. The development of a theoretical model which separates problem behaviors from adolescent experimental or risk-taking behaviors might be useful for future evaluations. The novelty of this study lies in analyzing patterns of associations, using a large sample with national representation. These findings may be useful to support and guide policy makers regarding the improvement and implementation of strategies to further prevent aggressive behaviors in teenagers.
As in other European countries, Romania has managed to reduce aggressive behaviors among high school students. New legislation may have played a role in reducing violence, but there is no proof of causality. The Ministry of Education encouraged the development of partnerships between representatives of the County School Inspectorates and the County Police Inspectorates to fight violence in schools. In addition, the increase in alcohol excise duties played an important role, especially for children with limited access to their parents' funds; this was coupled with the banning of alcohol advertising and clear rules for TV content aimed at children and youth. Concomitantly, various guides on violence prevention in schools were published. A school intervention strategy must provide a detailed presentation of the objectives pursued, including the expected results, the activities to be carried out, the actors involved and their responsibilities, the time horizon, the necessary resources, and the monitoring and evaluation modalities. These interventions should provide students and teachers with information about violence, change the way adolescents feel and think about it, and teach non-violent skills for resolving disputes. Skill enhancement training with parents could be an important factor in controlling violence and creating a stronger family bond. Parent-skill and family-relationship approaches, providing caregivers with support and teaching communication skills, might offer problem-solving techniques and behavior-management skills. Additionally, school psychologists should provide therapy sessions for students in order to strengthen their problem-solving skills and resistance to negative peer influence. --- Conflicts of Interest: The authors declare no conflict of interest.
The aim of this paper is to examine aggressive behaviors among Romanian high school students between 15 and 16 years old, to compare data in two national representative samples and to identify factors associated with physical fighting. This study investigates the association of selected factors (social, school performance and substance use) with physical fighting. A total of 2289 Romanian students were included in the 2007 database and 2770 in the 2011 database. This study revealed that 35.87% of the teenagers have taken part in a physical fight during the previous 12 months, as compared with the European average of 31.5%. Romania has the highest prevalence of violent behavior by participating in a group bruising of an individual in both surveys, 2007 and 2011. A logistic regression analysis performed for the 2011 study revealed the following factors associated with physical fighting: binge drinking during the previous 30 days, male gender, serious problems with friends, parent(s) who do not know where and with whom the adolescents spend their evenings, poor parental caring, low school grades, and high truancy. A decrease in almost all aggressive behaviors was noticed in 2011, compared to 2007. These findings may be useful to support and guide policy makers regarding improvement and implementation of strategies to further prevent aggressive behaviors in teenagers.
Introduction My first experience with a tattoo artist was back in 1995. I was a teenager, and tattoo studios or parlors were rare and expensive in Mexico City. In my childhood neighborhood of Villa Coapa, located in the south, however, I had a friend who claimed to be a tattoo artist and inked people at his home. His nickname was Lua; he was a punk rocker, a self-taught tattoo artist who learned his trade from magazines and from daily practice with his clients. He may not have been the best artist in town, but he was dedicated and offered an option to individuals who did not have the resources to get a "proper" tattoo in the expensive parlors. I got my first two tattoos with him, paying a symbolic fee. Most of my friends followed suit and got their tattoos with him, too. Suffice it to say that over time, my tattoos did not look great, and finally, in 2010 and 2014, I got them covered. I lost contact with Lua in the early 2000s. However, when I walk through my old neighborhood on a hot spring day, I can spot some people my age or older still showing tattoos made by Lua in the 1990s, which have resisted the passing of time. I tell this story because the situation today in Mexico, but also in many cities of the world, is very different. Tattoo parlors and artists abound and compete in a saturated market. In Mexico City, new studios open their doors every year, and some close or merge with others. The tattoo culture in Mexico has grown exponentially in the last decade, and it has turned into a hierarchical community marked by status, elitism, and exclusion, mirroring the stratified configuration of Mexican society. On the one hand, there are famous artists with peculiar nicknames like "Dr.
Lakra"1 (Jerónimo López Ramírez, son of the late Oaxacan artist Francisco Toledo), "Piraña", "Chanok", and Wilson Posada, who were members of the collective Dermafilia in the middle of the 1990s and who have a long trajectory and influence in the Mexican tattoo scene, even if some of them, like Posada, do not live permanently in Mexico. On the other hand, some artists have their studios in their homes in middle-class and lower-class neighborhoods, and many more do not work full-time as tattoo artists or have only recently started tattooing. It is difficult to give a detailed account of the tattoo community in Mexico, and even the concept of community is debatable, as the limits of what constitutes it are not clear. It involves artists, tattooed persons, and owners of magazines and supply businesses; all are part of this subculture, but it is by no means homogeneous. However, despite the differences and divisions, a sense of solidarity and communion exists between artists and consumers of tattoos. Still, the history and anthropology of tattoos in Mexico are a work in progress. In this article, I focus mainly on the apprenticeship process of tattoo artists in Mexico. My interest centers on how they learned their art, acquired their knowledge about tattooing, and perfected their craft. In essence, this article addresses the question of how a person becomes an expert through practice and how they transmit and share their knowledge with aspiring tattoo artists. Basic anthropological inquiries appear here, like the formation of a tradition, the accumulation of knowledge and techniques, the visual and sensual appeal of the body, the experience of pain, and the mystical and spiritual connotations of tattoos. My theoretical approach is based on the anthropology of ritual, experience, and performance of embodied practices, popularized by authors like Victor Turner (1985, 1991), Paul Stoller (1989), Thomas Csordas (1997, 2002), and David Le Breton (2013).
The framework of what could be called an anthropology of skills and embodied knowledge also benefits from the insights developed by Tim Ingold. In this sense, experience, skills, and embodied practices appear under a phenomenological perspective, which attempts to show the importance and meaning of tattoos in culture beyond a simplified sociological explanation. The methodology implemented in this article draws on my long-term involvement in the tattoo scene in Mexico for more than twelve years. It is based on my participation in tattoo conventions, multiple tattoo sessions with artists as a client, and following the lives of tattoo artists for more than ten years. Long-term fieldwork in urban settings of one's own culture allows the development of a different kind of awareness of cultural practices, a slow-knowledge approach that takes years to assimilate and understand, as Paul Stoller has argued recently (Stoller 2020), but which provides a more in-depth approach to the development of anthropological insights about embodied practices. The perspective of the artists presented here is by no means representative of the entire tattoo culture in Mexico; it offers only a window into what anthropologists like Clifford Geertz (1974) have referred to as the "native's point of view". This methodological strategy is inspired by the works of Victor Turner (1991) on ritual (whose main informant was a "local" ritual expert, Muchona), Paul Stoller (1989) (who became a sorcerer's apprentice), and the embodied study of charismatic healers by Thomas Csordas (1997). It is through practice, participant observation, and commitment to the artist's perspective that it is possible to draw an ethnographically driven theoretical approach from the bottom up and not the other way around. It is important to mention that anthropological works on tattoos tend to focus on the client's perspective.
Sociological and communication science works have followed this trend by trying to delve into the causes, meanings, and symbolism of certain tattoos that people have. Although the client's perspective is relevant, I argue that the view of the artist is also essential to understanding the tattoo culture today. In this sense, the artist-client relationship becomes an indissoluble pair that signals the process of ephemeral interactions that culminate in intimate, fluid body biographies2. I argue that these inked biographies become a roadmap for the self. The first section of the article describes general aspects of the anthropology of tattooing, the history of this art, and the development of the tattoo culture in Mexico. The second section deals with the process of apprenticeship in tattooing as part of an embodied experiential practice, where tattoo sessions represent performative public acts and painful rites of passage for clients and moments of innovation for the artists. The third section connects the process of apprenticeship of tattoo artists with their experiences with their clients and how a bodily biography emerges from this relationship. The fourth and final section focuses on the commercial side of the tattoo culture and the fierce competition artists face in a saturated market. --- The Art of Tattooing The practice of tattooing, according to Aaron Deter-Wolf and Lars Krutak, appears in numerous archaeological records. They argue that tattooing as a form of decoration dates back to the fourth millennium BCE (Krutak and Deter-Wolf 2017: 3). Therefore, it is an ancient practice that has existed all over the planet. It is not exclusive to a region or a particular culture, and there is no single place of origin. Archaeological evidence is based on skin preserved on mummified human remains, on graphically documented records, and on the tools discovered for making tattoos (Krutak and Deter-Wolf 2017: 5-6).
There is always the question of why cultures practice tattooing as decoration or as a religious or political symbol of power and social identification. There is no simple explanation and no such thing as a unilinear historical or evolutionary development of tattoos. In the social imaginary of modernity, we could portray a development from ancient cultures, passing through indigenous and tribal people, and finally arriving in the tattoo practices in the West today. This is a fictional narrative that does not correspond with reality. There is no such thing as "a history of tattoos"; multiple overlapping histories exist. Some of the narratives about tattoos are told to justify a new tradition for a community. For instance, in her work Bodies of Inscription (2000), Margo DeMello argues that the development of specific tattoo representations in the culture of the United States operates under the narrative of an evolutionary process that goes from tribal non-Western societies to the appropriation of tattoos in America in the early Twentieth Century. She says: In the case of non-native American tattooing, the tradition first came from the islands of Polynesia within the context of colonialism, then was adapted by various subcultures within the working class, and was once more reinvented in the 1980s, primarily by middle-class artists and wearers. Through each step of this evolution and re-invention, the participants must rework the tradition to make it fit the sensibilities of the new community. (DeMello 2000: 11). For DeMello, the narratives of tattoo artists and clients correspond to recent developments of a cultural change that made tattoos more accessible and fashionable among the American middle-class since the 1980s. However, the evolutionary view of tattoos differs in Europe. Gemma Angel (2017) mentions that the European gaze on non-Western tattoo practices unavoidably confronts its prejudices about otherness (Angel 2017: 107). 
The tattoo collections in different museums, like the Wellcome Collection in London, one of the most important in the world, contain the material culture of tattoos, meaning the preserved skin of tattooed people and artifacts for the inking process. However, Gemma Angel points out the lack of archival information about tattoos, at least in the Nineteenth Century (Angel 2017: 108). Just recently, academics have been trying to overcome the exoticism of otherness in the analysis of tattoos and body decoration and have moved away from a straightforward, unilinear historical narrative about this art. Today, if it is possible to portray an "evolution" of tattoos, this relates to techniques and tools (utensils, machines, pigments, needles, and aftercare treatment). This imaginary trajectory traces a line that goes from the use of single hand-poke techniques popular in Japan and other Asian countries, passing through coil machines (commonly used during the Twentieth Century), and finally arriving in complex lightweight rotary systems like the Stigma Rotary machines invented in the first decade of the Twenty-First Century3. Techniques bring refinement and help tattoo artists; however, this doesn't mean that artists depend exclusively on advanced tools to perfect their craft. The wide range of techniques used to create a particular tattoo style (an artist's "signature") relate to multiple factors, like prestige, more "authentic" and "ancestral" methods and procedures, the demands of clients, and individual preferences. In Mexico, academic tattoo research focuses less on its indigenous pre-Hispanic origins and more on its urban development in the Nineteenth and Twentieth Centuries. 
Although there is archaeological evidence of the use of body paint and tattoos in human remains found in Oaxaca and the Yucatán Peninsula, archaeologists like Enrique Vela argue that the evidence is scant and more work needs to be done, particularly in the analysis of ceramic humanoid figures depicting lines and decoration, in mural painting, and in stone engraving (Vela 2010). Anthropological and historical studies about tattoos in Mexico haven't attracted many specialists. Research interest lies with art historians, journalists, activists, dermatologists, doctors, and tattoo artists themselves. Experts in criminalistics and anatomy provided the first known works about tattoos in Mexico, according to Álvaro Rodríguez Luévano (2016). The pioneering work of Francisco Martínez Baca (1899) is, perhaps, the best known. He was a military physician who did research on prisons in the state of Puebla and wrote the first book about tattoos in Mexico, published in 1899 and called Tattoos: A Psychological and Medico-Legal Study of Criminals and the Military (see Rodríguez Luévano 2016: 112; and Martínez Baca 1899). Martínez Baca kept up correspondence with the famous Italian criminologist Cesare Lombroso, who was the leading figure in tattoo research at the time. According to Rodríguez Luévano (2016: 115), Martínez Baca thought that through the analysis of tattoos in penitentiaries, it was possible to deduce the degree of moral degradation of individuals. For him, tattoos belonged exclusively to the lower criminal classes, and by keeping a registry of tattooed people, it was possible to guess their origins and association with gangs. His theory ran into methodological difficulties, as tattoos also proliferated in the military. There, soldiers used tattoos as a form of identification, and by no means did this translate into moral degradation.
Martínez Baca ended up creating spurious classifications of tattoos in both contexts, the military and prisons, related to the psychological character of individuals. Due to his prejudices, he also did not delve into the tattoo artists themselves; for him, they were just rudimentary technicians with no skills, prisoners who tattooed people just to kill time. Despite his shortcomings, his classification also helped him to distinguish motives, figures, religious adscriptions, and ethnic belonging through tattoos. Rodríguez Luévano mentions that most of the tattoos registered by Martínez Baca were rudimentary, with poor aesthetics, and made with low-quality materials (Rodríguez Luévano 2016: 123-124). Martínez Baca's book set a precedent for tattoo research in Mexico. However, it inevitably contributed to the stigmatization and discrimination of tattooed persons. This stigma hasn't completely disappeared and still exists in certain contexts of Mexican society. Approximately twenty-five years ago, more investigations carried out in Mexico by anthropologists, psychologists, science communication experts, and sociologists appeared, focused on contemporary urban tattoo culture. At the same time, more tattoo artists began to express their own views on tattoos in magazines, narratives, and interviews. The interest of some of these works revolves around tattooed people, their styles, the use of tattoos as a form of social identity, the psychological reasons for having inked art on their bodies, and the impact of tattoos on society (Perdigón Castañeda and Robles Aguirre 2019; Priego Díaz 2022; Rojas Bolaños 2009). For instance, Samira Rojas Bolaños focuses on the notion of stigma and the psychological impact it has on tattooed people.
Her research demonstrated that in certain professional occupations in Mexico, such as doctors, lawyers, social workers, and psychologists, tattooed persons are still linked to criminality, even more so if the tattoos lack quality (Rojas Bolaños 2009: 72-73). Melissa Priego Díaz, on the other hand, focuses on the gradual acceptance of tattoos and how artists have contributed to such acceptance. She describes how tattoos become commodified as art in social media, and how interventions on bodies can carry a certain altruism, like tattoos offered for free to women who have experienced breast cancer, to cover their scars (Priego Díaz 2022: 7). Finally, Katia Perdigón and Bernardo Robles analyze the tattoos related to the cult of the Santa Muerte in Mexico City. Their research focuses on the different reasons people give for having tattoos of the Santa Muerte on visible parts of their bodies. It shows that tattoos are part of a religious devotion that provides direct protection to individuals. Here, tattooed images are seen as part of a global phenomenon of societal acceptance of the Santa Muerte cult, which exists not only in Mexico but also in many parts of the United States. Among tattoo artists who write about their profession in Mexico, topics range from the popularization of tattoos today to consumerism, cultural appropriation, and their own personal histories. Famous artists like "Dr. Lakra" often give interviews in newspapers and have exhibited their artistic works as painters in different national and international galleries. Tattoo artists have become the main producers of discourses about tattoos today, although these narratives do not derive strictly from academic publications.
--- How to Learn a Craft: Tattoos as Embodied Contemporary Rituals Although there is no evidence to support an "evolutionary" approach to tattoos, the division between the study of non-Western and "contemporary" tattoos mirrors the classic distinction between "primitive" and "modern" perceptions of the body. On the one hand, tattoos in indigenous settings like those studied by Alfred Gell or evoked by Claude Lévi-Strauss on the Māori (but also in his analysis of face painting in the Amazon) belong to cultures where tattoos or body decoration and modification are part of a collective enterprise, not a matter of individual choice; there is a ritual and even religious component that signifies identity, social belonging, and change of status, as David Le Breton has argued recently (Le Breton 2013: 13). The often-quoted remark of Lévi-Strauss concerning the Māori stresses the value and power of the collective over the individual. He says: "The purpose of Maori tattooing is not only to imprint a drawing onto the flesh but also to stamp onto the mind all the traditions and philosophy of the group" (Lévi-Strauss 1963: 257). On the other hand, tattoos in contemporary societies appear as a matter of personal choice, often dislocated from a territory or place of origin, and sometimes imprinted as a form of reappropriation of a lost tradition. David Le Breton argues that the tattoo artist hovers above culture and becomes a shaman of modern times, as he can connect through his art different settings and cultural traditions (Le Breton 2013: 41). Modifying the quote of Lévi-Strauss mentioned above, we could argue that a contemporary tattoo artist stamps in the mind of the tattooed a set of hybrid traditions selected from deterritorialized references. Pain is a quality inherent in tattoos. It is inescapable and represents a temporary inner sacrifice. It is localized singularly in the person; therefore, it is a highly individualized act.
In contemporary urban settings like Mexico City, the experience of pain is a choice that the tattooed accepts, and it signifies a rite of passage. Pain is what makes the process of getting a tattoo a contemporary ritual art form. As Le Breton mentions, pain anticipates change, a metamorphosis that the new tattoo brings to a person (Le Breton 2013: 33). The artist understands pain as he or she has tattoos, too. It is very rare to find a tattooist who does not have tattoos. Therefore, there is a shared body experience between the artist and his clients. This momentary bond, which lasts until the piece is done, stays in the memory of the person getting the tattoo, and for the artists, it becomes a further development in their craft and proof of their expertise. The process of getting a tattoo turns into a liminal experience intensified by localized pain. As Victor Turner mentions, liminality is an in-between state marked by ambiguity, social invisibility, lack of status, and camaraderie in the ritual process (Turner 1991). Getting involved in a tattoo session means subjecting oneself to a liminal, intense body experience where pain is predominant. For some hours, the subject is left in the hands of the artist, so there must be mutual trust between the parties. The artist, on the other hand, needs to be careful not to make mistakes, so concentration is paramount. If modern tattoos constitute personal neo-rituals, then the tattoo artist symbolizes the role of the religious leader in initiation rituals. He or she is the conveyor of knowledge, transformation, and permanence inscribed into others. His role is like the charismatic healer described by Thomas Csordas, where such a healer is a channel that controls the body experience of participants (Csordas 1997). The cultural phenomenology of healing in religious settings has other kinds of connotations and motivations. 
However, tattoo artists sometimes heal other persons through tattoos, covering scars, signifying important moments of their lives, and marking, sometimes forever, a memory in time through pain. Therefore, there is a legitimate anthropological question about the cultural importance of tattoo artists, how they have become skilled professionals whose craft has changed drastically in the last thirty years, and whose work remains in high demand. In Mexico City, the most famous tattooists have a background in arts, graphic design, or visual arts. Some are self-taught, like "Dr. Lakra", born in 1972, who says, "I started just drawing. At school, I liked to draw a lot. I did comics, and then I entered the Friday Workshop at Gabriel Orozco's house, where he taught painting. At the same time, I started tattooing. A friend started tattooing me, and I saw that the machine was very easy to make. Then, in the late 80s and early 90s, I started tattooing other friends. So, since everything went a little hand in hand, I was doing painting, drawing, comics, tattoo, I don't have it so separated" (López Ramírez 2018). Wilson Posada, for instance, began tattooing in 1994 when he joined the collective Dermafilia. He learned his craft with the people of the collective, and his uncle also taught him jewelry making. In 2007, he moved to San Francisco, California. Apart from being a tattoo artist, he is also a musician and won the legal battle to use the name "Dermafilia" for his art collective (Valencia 2014). "Chanok", another of the most famous artists in Mexico City, who started tattooing in the middle of the 1990s, is also self-taught. He said in an interview for the website Tattoo Life: When I started, I didn't know anything about tattoos. I had seen it done and thought I could do it too since I knew how to draw... A lot of the Punk iconography that we used came from the blue Tattoo Time (Music & Sea Tattoos).
We copied designs out of the few magazines we could get, gangster stuff too "street style", images of the Virgin, skulls, etc... Getting books, colors, or even just some good advice was very difficult back then, but conventions started happening in Mexico, and that began to open things up. That's when we could meet other artists, watch them work, get tattooed, and buy supplies.4 Interviews with other tattoo artists offer similar responses. In the case of Marco "Panké" Nicolat, he transitioned from skateboarding to tattoos in an almost natural way. He was part of a family of artists; his late father was a famous painter who lived in Oaxaca for many years, and his two brothers were also artists interested in tattoos and mural painting. Marco told me he began learning about tattoos with his friend Iván, who had been living in Berlin since the late nineties. Marco decided in 2007 that he wanted to learn the art of tattooing seriously so he could make a living from it, as he was growing older and piling up injuries from skateboarding. He tried his luck and decided to make Berlin his home. Although he did not start his learning process in Mexico, he regularly traveled to his home country, where he got to know many tattoo artists in places like Mexico City, Queretaro, and Guadalajara. He owes his expertise to the help of Iván, who began tattooing in the late 1990s in Mexico City. In the case of Alfredo Chavarria, he studied visual arts at the ENAP (Escuela Nacional de Artes Plásticas5) in the late 1990s. He was always interested in drawing, the culture of comics (he is a big fan), and design. In an interview he gave for the YouTube channel La Casa del Tatuador (Tattoo Artist House) in March 2023, he mentions that he learned about tattoos from his friend Rodrigo López de Lara (Roy)6. Through Roy's influence, Alfredo became involved and interested in tattoos from a creative perspective. Roy gave Alfredo his first inked piece in 1998, a tribal design.
Alfredo tried his luck in different graphic design firms where he worked as a freelance illustrator until that work almost dried up and he had to do something else, so he began working as a sales assistant and tattoo apprentice in Tatuajes México in the trendy Colonia Roma neighborhood, where his friend Roy worked at the time. Tatuajes México was a studio and a shop that sold materials, tools, and designs, and it was a place where artists went to buy products. Through this job, Alfredo slowly got to know the thriving community of tattoo artists, and this studio allowed him to try out his first works as an apprentice. These brief sample stories of tattoo artists who began tattooing in the middle of the 1990s and early 2000s show a cultural tradition in the making, almost starting from scratch. There was not much cultural background to rely on beyond the tattoo magazines and the trips that emerging artists with resources could make to the United States. This is important for anthropology, as the transmission of skills normally assumes that there is a corpus of ancestral knowledge on which initiates depend. For instance, when Victor Turner asked about the meaning of symbols in Ndembu rituals, there were experts like Muchona who gave him detailed explanations about ritual procedures, uses of colors, rules, and taboos. Even in so-called "invented traditions", there was a manipulation of cultural references used for political reasons (Hobsbawm and Ranger 2000) and creative responses that depended on a set of cosmological innovations (Sahlins 1999). For tattoo artists in Mexico in the 1990s and early 2000s, their craft developed as a form of experimentation where their only solid ground was their artistic background in graphic design, painting, architecture, or simple curiosity.
However, as tattoo artists say, working on human skin is very different; it has a depth that challenges any drawing project, it meets the resistance of another human being, and it tests their endurance, as tattoo sessions tend to be long and both artists and clients must remain seated in uncomfortable positions. Learning a craft for these emerging artists was not a case of an "invention of tradition" without any "authentic" cultural reference. Here, it is important to remember the words of Marshall Sahlins about the importance of the specificities of cultural formations: "From what I know about culture, then, traditions are invented in the specific terms of the people who construct them" (Sahlins 1999: 409). Emerging tattoo artists in Mexico City invented their tradition and craft by giving it motifs, themes, images, symbols, and artistic techniques that came from their traditional Mexican backgrounds (skulls from Día de Muertos, Pre-Hispanic imagery, religious motifs, wrestlers, masks, names, and tribal styles). In this way, their development was quite unique. They did not have a master tattoo artist to rely on, and they worked by hybridizing techniques, experimenting with tools, and even learning from some self-taught tattoo artists in jail. They would become experts years later, and they would lead a new generation of emerging tattoo artists in the 2010s and the years to come. It was at the beginning of the 2010s that the figure of the tattoo apprentice gained traction in studios and parlors all over Mexico. --- Body Biographies in Mexico City Today, individuals learn about tattoos by working as apprentices with more experienced artists or by transitioning from other graphic or visual arts to tattooing. What is certain is that there is no institutionalized apprenticeship at a university or a specialized college where one can get a formal education in tattooing.
An artist becomes one by practicing and learning from others via the oral transmission of knowledge. Though advertisements for learning to tattoo online have grown in the last five years, most artists are reluctant to use this form of learning. Today, self-teaching has its limits. Many reject the idea or even the possibility of learning to tattoo online. To become an artist, you must practice and be close to the studios and the people who know the craft and are more experienced. Besides, famous artists are unwilling to share their trade secrets with people who know nothing about tattoos, even less for free. However, these artists may offer seminars and tutorials for more experienced fellow artists, sometimes online or during tattoo conventions. To be a tattoo artist means to work with ink lines inscribed into the skin. Like many other arts, such as drawing, sculpture, calligraphy, pottery, and painting, tattooing forms lines, and the artist's creativity springs from the techniques of applying such lines to the body. Here, it becomes inevitable to relate tattoos to the reflection on lines developed by Tim Ingold (2007). For Ingold, lines are essential to showing human creativity and movement through history. In his work, he describes the processes of writing by hand and drawing as experiments in innovation. He highlights the importance of human movement as leaving linear traces on a surface, as "tantamount to a way of life" (Ingold 2007: 80). For him, writing goes beyond the modern association with inscription provided by typography on a computer or a mechanical device, and to understand its value we need to appreciate it as a form of drawing lines, which follows a trajectory traced by the artist's hand on a surface (Ingold 2007: 128). Surprisingly, Ingold does not make any direct reference to tattoos in his book.
I find this omission puzzling, considering that other anthropologists like Alfred Gell have dedicated substantial analysis to indigenous tattoos as art forms (Gell 1998). We can only take some of Ingold's references to lines and adapt them to the context of tattoos, like the technological implementation of tools and the creative display of artists as they develop their art as an unpredictable unfolding process (Ingold 2007: 142-143). In the case of Gell, indigenous tattoos among the Māori form part of a particular artistic style, which relates tattoo patterns to other forms of cultural representation inscribed in artifacts, like ceramics, shields, canoe carving, and face painting. However, tattoos are manufactured objects that convey agency. Gell mentions that: Manufactured objects are 'caused' by their makers, just as smoke is caused by fire; hence manufactured objects are indexes of their makers. The index, as a manufactured object, is in the 'patient' position in a social relationship with its maker, who is an agent, and without whose agency it would not exist. (Gell 2007: 23). Gell warns the reader that not all cultural objects appear to be manufactured by a human artist; some are believed to be the creations of divinities. Sometimes, the origin of such artifacts dissipates or is forgotten by people. Traditional Indigenous tattoo art had religious and ritual motives, and the tattoo artist did not always have to be remembered. What was of relevance was that the tattoo offered protection to a person in the afterlife; maze tattoos, for instance, protected women's bodies when they died and guided them in the afterworld, as documented in many parts of India (Gell 2007: 90). However, there are exceptions. In other cultural contexts, the importance of the tattoo artist was recognized and praised by people.
Among the Māori, an artist gained fame and prestige through his art, where his skills and techniques were highly appreciated, though his artistry was judged by how faithfully he could reproduce a particular cultural style, not by his individual creativity (Gell 2007: 158). For Gell, the limits on creativity imposed by what he called "tradition" differentiated tattoo art in indigenous settings from the West, where tattoo art focuses on individual innovation. Although there are standardized styles in contemporary tattoo production, these do not necessarily relate to the artist's cultural milieu and are often borrowed or taken out of context from other cultures. Thus, a Mexican artist may become an expert in Japanese imagery, as is the case of "Dr. Lakra", whose work is inspired by Asian cultures, or a German artist may specialize solely in the realism of Hollywood horror movies. The freedom of individual expression makes these types of cultural hybridization possible. Not all Mexican tattoo artists work within a single stylistic framework; the majority use diverse techniques and patterns depending on the client's demand. The question is how they develop their individuality through tattoos and how people assess their quality. To begin with, it is worth mentioning that most, if not all, tattoo artists have tattoos themselves, made by a varied array of fellow artists and friends. Their bodies become biographies, and like their clients, they transform their bodies into vast canvases for experimentation, where reciprocal links are constructed in what they describe as the karma they pay for being tattoo artists themselves. They ink others, but they need to be inked and to experience the pain, too. The reciprocity of tattooing each other is a form of solidarity among artists. It shows commitment, trust, and respect for others.
Alfredo Chavarria, for instance, has mentioned that artists and clients may become collectors of tattoos, displaying pieces of art from a wide diversity of creators. In the tattoo world, collected tattoos also add to a person's status. Showing a tattoo made by a famous artist enhances a person's bodily representation and the admiration of others. For instance, a tattoo made by "Dr. Lakra" or "Chanok" gives prestige to the tattooed person who wears it. However, it is rare to find a tattoo artist who has tattoos from only one colleague. The body of the artist is a body biography that traces a trajectory in the tattoo environment and symbolizes a life story. It also affects how clients perceive them. A heavily tattooed artist may inspire more confidence in people deciding to get their first tattoo, as it shows passion and commitment to the art and the seriousness of the profession. A committed artist also inspires apprentices to learn the intricacies of the trade. Artists continue learning after they master their trade. In the case of the artists mentioned in the previous section, the complexity of tattoos, their history, specialization, and the development of techniques demand ongoing updating. One never stops learning and experimenting. Artists sometimes foray into other arts like painting and photography to develop new skills. Learning new techniques involves getting acquainted with new technologies, like social media, particularly Instagram. It is through this last platform that tattoo artists advertise their work to the world. I will describe the impact of social media in the next section. To keep their artistry current, tattoo artists invest substantial time in sourcing good-quality products and updating their knowledge of ink brands and cartridges, machines, needles, equipment in general, and hygiene. --- Competition in a Saturated Market Early tattoo artists in Mexico initially did not know they could make a living tattooing people.
In the middle of the 1990s and early 2000s, there was not much information available in Mexico about tattoos: only a couple of magazines existed, mainly in English; there was no widespread Internet access; just a few tattoo parlors were around; and few people got tattooed. With time, the tattoo culture in Mexico expanded. New companies like Tatuajes Mexico emerged, offering good-quality products for the Mexican market at scale and at accessible prices. New magazines in Spanish began circulating, and tattoo studios opened their doors all over the country. In Mexico City, the tattoo scene grew because of the increasing
demand and the social acceptability of tattoos in society. More people in Mexico are getting inked today than ever before. Many companies in the public and private sectors employ workers with tattoos. In theory, no employment agency should discriminate by advertising jobs that explicitly ban tattoos. Nevertheless, there is still a stigma in some sectors of society about tattoos, particularly those that look poor in quality or are placed on the face, head, or hands. However, because of their growing acceptability, tattoos have acquired high demand and have become a symbol of status. "Dr.
Lakra" mentions that the popularity of tattoos today is mainly due to the hegemony of visual culture over written culture, the desire to create stronger bonds between people, and the act of preserving social and individual memory (López Ramírez 2022)7. Nowadays, more people are trying to make a living from tattoos, not only as artists but also as business partners, distributors, and sellers of equipment, in advertising, magazines, consultancy, and on social media as influencers. Therefore, there is much competition, and the tattoo scene in Mexico today has become a saturated market with a hierarchical structure, where old-school tattoo artists compete with a new generation who may have other interests and perspectives on tattoos. There is a generational gap, and to survive, many tattoo artists organize themselves into collectives. For instance, "Dr. Lakra", together with other artists from his generation, opened the "Sigue Sigue Sputnik" (SSS) collective in 2016. Located initially in the Colonia Guerrero, and since September 2023 in the trendy Colonia Roma in the center of Mexico City, SSS is a tattoo studio and art shop that also serves as an art gallery. In a saturated market, many anthropological issues arise. How do we distinguish a true artist from a fake one? Price, location, and trends might not reflect an artist's quality. Therefore, there are multiple factors to consider. One is the history of the artists themselves, particularly who taught and mentored them. Second, how long have they worked in the trade, and in which studios? Third, the quality of their portfolio. Fourth, the recommendations they get from other artists or clients. Even so, it remains difficult to distinguish real professionals from emerging artists and amateurs. In this saturated market, the creation of a persona (character) on Instagram and Facebook is as important as the work of the artist itself.
How many followers artists have, how they interact with their virtual audience, and how often they are in demand and on tour in other cities or countries are essential factors to consider in the world of tattoos. The virtual persona adds a layer of legitimation to a tattoo artist. Being part of an influencer culture plays an important role, too. If a famous influencer talks about a particular artist or gets a tattoo from him or her, it elevates the artist's prestige and demand. Therefore, today, not only in Mexico City but also in other cities, tattoo artists depend on social media to get work and to advertise their products. There is no escape from that, and if they want to make a living from tattoos, they need to have an active online presence, at least on Instagram. Artists like Alfredo Chavarria mentioned that due to the saturated market in Mexico, established artists do not take apprentices too often. Either because of previous bad experiences, because they do not want more competition, or simply because they do not have the time, these artists don't share their knowledge with beginners. Even less common is for artists to share their knowledge without remuneration. They may give courses, seminars, and tutorials for a fee, but they will think twice before mentoring the uninitiated unless they are family members or people they genuinely trust. As far as I know, Marco, Alfredo, "Chanok" and "Dr. Lakra" do not have apprentices today, mainly because they do not have the time to teach. Other tattoo artists from a younger generation whom I have met in Mexico City and San Luis Potosí were more willing to have apprentices, but in these cases, the apprentices were their romantic partners or people they had known for many years. The reluctance to share knowledge may impact the tattoo culture in Mexico City and beyond.
Because tattooing is an art passed on through oral transmission and practice, the lack of proper guidance may have a negative influence on the quality of an artist. New artists are impatient, too: often very eager to start making a living from tattoos, they rush the process without taking care of the proper development of their artistry. A saturated market also impacts the remuneration an artist gets. As more artists and studios become available, some lower their prices in order to get clients quickly. However, Alfredo and Marco argue that professionals should always charge a fair fee, depending on the characteristics of the tattoo. Still, they should neither overcharge nor sell their art cheaply. Normally, low prices in a studio should be reserved for apprentices or people who are beginning their careers. What is certain is that getting inked by artists from the old guard like "Chanok" and "Dr. Lakra" is usually expensive, and they have a waiting list of many months to arrange a booking. There is no census or statistics on how many tattoo studios exist in Mexico City, and even less information about how many professional artists make a living from tattoos. The closest one can get to gauging the dimensions of the tattoo scene is to attend one of the many tattoo conventions in Mexico City and other cities. It is at the conventions that artists get together to showcase their work. Studios use this opportunity to promote their business by handing out cards, merchandise, and stickers. The conventions include tattoo exhibitions, contests, live music, and quick tattoo sessions. They are places for artists to get to know people, make connections, and meet artists and suppliers from other regions. Although they show the competitiveness of the trade, the conventions also help promote tattoos as an art available to everybody, making them more accessible and acceptable to society in general8.
--- Conclusion The tattoo scene in Mexico has changed substantially since I was first introduced to it in the middle of the 1990s. Today, there are more professional tattoo artists than ever before in Mexico. What makes an artist a professional? This question lingers throughout the text, and it has no simple answers. Since there are no formal institutions that legitimate professional tattoo artists nationally or internationally (or, where such institutions exist, they are not universally recognized), artists legitimate themselves through their work, experience, and time spent performing tattoos. To know their trade means to learn from somebody willing to teach them. The lucky ones, like my friend Alfredo, learned when the tattoo scene was not so overcrowded, and he had the support of his friend Roy, who introduced him to the craft and kindly taught him the basics of tattooing. Other artists were self-taught, like Marco and "Dr. Lakra", who learned from different sources or got into tattoos by transferring skills learned in other arts, like painting and graphic design. For those who learned as apprentices, the road to mastering the trade involves constant training, being associated with studios, and getting to know more established artists who may recommend the apprentice's work. The success of a career depends on the skills artists gain, their gift, and, in some cases, the development of a unique style (their "signature"), self-discipline, a business-oriented approach, and their own personalities. In the world of tattoos, creativity is important, and in Mexico, the creative skills of an artist may define his or her popularity. The quality of the final product is also a determinant. Still, the life paths of tattoo artists have not been easy, and they currently face fierce competition in a saturated market. Despite the commodification of tattoos, for many, the experience of having one is still a rite of passage.
Pain is unavoidable, and tattoo artists inflict this pain for artistic purposes. As artists see their craft as technical prowess, sometimes, if they are not too excited about the design a person wants, if it is too repetitive or something they have tattooed many times, their craft becomes mechanical, a mastery that does not involve their full interest. When artists develop their own project, or a tattoo that they consider aesthetically pleasing, they become more engaged and interact more with the client. Here, the intensity of the experience involves both the artist and the tattooed. The analysis of the apprenticeship experience of tattoo artists and of the ritual of getting a tattoo are important aspects of the anthropology of embodied practices. Artists, apprentices, and clients form the core of the social relations that exist in a tattoo performance (performance in the sense of a social practice between at least two persons in a public sphere). The body as a canvas for the experimentation of lines, as Tim Ingold would say, allows the artist to innovate and leave a mark. As "Dr. Lakra" mentions: "Tattooing another person's skin and leaving a mark that will stay for the rest of his life is special; it is not done in solitude, but in coexistence, and that's fun" (López Ramírez 2022). In this article, I have focused mainly on the artists' perspective. The focus on the artists forms part of an anthropological strategy that considers the perspective of the experts as a vital component of human creativity. As in ritual, performance, dance, and a variety of social practices, where the views of the experts form the core of the transmission of knowledge, in tattoo contexts the artist embodies the craft of an art form. By using lines, as Ingold rightly says, an artist leaves an indelible impression on the bodies of others, either for a short period or for the rest of one's life.
Based on the author's long-term fieldwork experience in Mexico, this article describes the apprenticeship experience of tattoo artists. It deals with the learning process of a craft, how artists develop skills and techniques, and how they share their knowledge with others. The text argues that the solidarity that the tattoo community creates passes not only through the relationship between artists and clients but also through the exchange and reciprocity between professionals through the mutual inking of their bodies, in what the author calls body biographies. The article also depicts the importance of social media in promoting an artist's work and how a person becomes an expert or a professional. Finally, it analyzes the growing popularity of tattoos in Mexico and the saturated market it creates, where artists compete for clients, prestige, and money.
Background Self-perceived uselessness represents a negative evaluation of one's usefulness or importance to others and a general understanding about the aging process [1][2][3][4][5]. Self-perceived uselessness, or its opposite, usefulness, is a major component of self-perceived aging: for example, it is one of five items of the Attitude Toward Own Aging subscale of the Philadelphia Geriatrics Center Morale Scale [3]. The feeling of uselessness shapes older adults' thoughts and behaviors [1][2][3][4][5][6][7][8][9][10][11][12], which in turn influences psychological and physiological well-being [1,2,13]. Empirical studies in both China and Western societies have consistently reported that self-perceived uselessness, a negative self-perception of aging, is a robust predictor of high mortality risk [2,3,5,11,[13][14][15][16][17][18] and a wide range of poor health indicators such as functional impairment, disability [1-3, 10, 19, 20], chronic conditions [21,22], lower rates of recovery from illness [23], poorer cognitive and mental health function [20,[24][25][26], and lower rates of good self-rated health and life satisfaction [20,[27][28][29][30]. Studies further indicate that older adults who have higher levels of self-reported uselessness tend to have lower levels of social engagement, physical activity, self-efficacy and self-esteem as well as higher levels of depression [1][2][3][4]. Lower levels of self-perceived uselessness with aging are associated with a greater likelihood of survival, better functioning and good life satisfaction [3,5,15,[31][32][33][34]. These studies have improved our understanding about the significant pathways through which self-perceived uselessness is associated with healthy longevity and successful aging [20]. Researchers have proposed several psychological, physiological and behavioral pathways to explain the possible channels through which self-perceived uselessness affects health and mortality at older ages [18,20,[34][35][36]. 
From a psychological perspective, self-perceived uselessness could diminish beliefs about self-control and self-efficacy that could lead to low resilience capacity and depression, thus preventing psychological well-being [1,2]. Self-perceived usefulness, by contrast, could lead to a positive appraisal of one's capacity to deal with adversity or difficulties in daily life [2]. From a physiological perspective, self-perceived uselessness could lead to neuroendocrine and neurohumoral changes, immune alterations, autonomic and cardiovascular dysregulation or central neurotransmitter system dysfunction because of cardiovascular stress [37,38]. All these could contribute to cardiovascular diseases and subsequent symptoms and disabilities in older age [36,39]. From a behavioral perspective, attitudes toward aging have the potential to influence responses to illness or physical experiences [31]; self-perceived uselessness could lead to less optimal healthcare seeking behaviors [40] and less engagement in preventive and health-promoting activities [41], subsequently influencing one's health or leading to more rapid declines in health [35]. On the other hand, positive perceptions of usefulness to families or others would help older adults adapt to age-related changes [42]. One inadequacy of the existing literature is that the majority of research is from Western cultures [20,43,44]. With a couple of exceptions [18,20], quantitative research on self-perceived usefulness or uselessness among older adults in China is almost nonexistent; this is primarily due to lack of data on self-perceived uselessness, despite several studies on self-perception of aging [12,[45][46][47]. It is also unclear whether the risk factors associated with self-perceived uselessness found in Western societies still hold in non-Western nations.
It has been argued that different cultures likely have different social views about aging because of different social norms about the social roles of older adults and their place in family systems, which could alter patterns of self-perceived uselessness [48]. The existing literature on self-perceptions of aging and usefulness is also limited by small datasets with a narrow range of age groups and covariates. With a few exceptions [49-51], it is rare to analyze risk factors for the oldest-old. Numerous empirical studies in other areas of aging have shown that the oldest-old aged 80 or older, including centenarians, are likely to have a better capacity to cope with the adversities encountered in daily life [52-56]. Because those who live to advanced ages have had to adapt to many changes and challenges over time, their self-perception of uselessness may differ from that of the young-old aged 65-79, who have experienced fewer challenges. Comparative data from older adults at different levels of longevity may reveal important implications for achieving healthy longevity and successful aging across older ages [20, 52]. Furthermore, most previous studies included relatively small sample sizes, either from local or non-population-based studies [5, 31, 34], which limits the generalizability of the findings. Finally, almost all existing studies only focus on one or two sets of factors; no studies so far have investigated a wide range of theoretically motivated risk factors from a multidimensional perspective. A more holistic understanding of risk factors would offer a large range of social, demographic, health and behavioral factors to identify older adults who are most likely to need intervention programs to address health problems related to self-perceived uselessness.
Given the power of a single self-rated item like self-perceived uselessness to reflect a wide range of markers related to aging and health, identifying its risk factors may have important implications for public health surveillance and health services research aimed at achieving successful aging and healthy longevity [20]. A growing body of research has investigated factors associated with self-perceived uselessness and aging, as reviewed above, but there are several ways that new research can add to this literature. To extend existing research in healthy longevity, this study aims to investigate which socioeconomic resources, social environments, health statuses, fixed attributes and health behaviors are associated with self-perceived uselessness among older adults in mainland China (hereafter China). Data come from the Chinese Longitudinal Healthy Longevity Survey (CLHLS), the largest ongoing nationally representative sample and the only nationwide survey in China that collects data on self-perceived uselessness in addition to demographics, resources, environmental factors and health status. The focus on Chinese older adults has profound significance. In contemporary China, around 20% of adults aged 65 years or older, more than 25 million older adults, feel useless always or often [20]; about 50-70% of older adults reported feelings of being a family burden, getting older and falling behind social progress [20]. This large population of older adults with a negative perception of usefulness is likely to experience higher mortality [18], higher risk of disability and cognitive impairment [20], and higher prevalence of depression and loneliness [56, 57]. Self-perceived uselessness is becoming a public health challenge for China. A systematic investigation of factors that may be closely linked with self-perceived uselessness at older ages would help to identify risk factors and target appropriate interventions for subpopulations at highest risk.
In the next section, we provide a brief review of risk factors for feeling useless at older ages, organized with a new conceptual framework that guides the present study.
--- Factors associated with uselessness and the REHAB framework
The existing literature on factors associated with self-perceived uselessness is very limited. However, quite a few studies have examined factors associated with self-perception of aging [47, 48, 58]. Because self-perceived uselessness is a key component of self-perceived aging, our review includes both self-perceived uselessness and self-perceived aging [3, 10]. Overall, empirical studies have shown that a number of factors are independently associated with self-perceptions of uselessness or aging [45, 48, 58]. We classified these factors as resources (R), environments (E), health (H), fixed attributes (A) and behaviors (B). Resource factors mainly include socioeconomic status (SES); environmental factors mostly refer to social environments that include family/social supports and cultural factors; health conditions could include various indicators measuring different dimensions of health; fixed attributes mainly include age, gender, ethnicity, predisposition and some biological components; and behavioral or lifestyle factors usually consist of smoking, drinking, involvement in leisure activities and social participation. Accordingly, we propose a conceptual framework named REHAB to systematically examine how these sets of factors are associated with self-perceived uselessness. We follow a conventional approach in the literature and begin with fixed attributes (mainly demographics) (Fig. 1).
--- Fixed attributes (A)
Most studies have revealed that, among older adults in various populations, increasing age is associated with more negative perceptions of aging and uselessness [47, 49, 59-61].
However, several studies have found that age is not associated with self-perception of aging [58, 62], even when health conditions are taken into account [63]. Gender differences are also inconclusive. Some studies have found that men tend to have a more positive perception about their own aging than women [58, 64], while others have found opposite results [65], and still others have found no gender differences [49-52, 59]. Racial differences in self-perception of aging are well-documented, but such differences are largely due to cultural practices and norms [66]. Individual predispositions such as optimism and self-control may help develop good skills to cope with daily challenges and promote social engagement [67]. Both optimism and self-control are associated with positive perceptions of aging and usefulness [64, 68].
--- Resources (R)
One's self-perception of aging is contingent upon the socioeconomic resources available to that person [68]. Studies have shown that a lack of resources could lead to a negative self-perception about aging, while adequate or sufficient resources could lead to positive perceptions about aging [67]. This is because older adults with more resources have more opportunities to be involved in various social connections and feel useful to others. Wealthier people are also likely to feel more excited and hopeful about their lives ahead [69]. However, some studies have found no differences by resources such as education [47, 70]; others have found that higher income and educational attainment are associated with less positive self-perceptions of aging because of relative losses perceived after retirement [47, 59, 70]. Access to other resources such as greater medical care tended to be associated with more positive perceptions about aging [61].
Additional studies have revealed a negative association between neighborhood-level socioeconomic development and self-perception of aging in more advanced societies, due to the increased individual independence and weakened multi-generational family structure that develop with industrialization and modernization [45, 71]. The socioeconomic resources of family members and significant others are also important factors influencing one's own resources, physical health and quality of life [72, 73].
Fig. 1 Conceptual framework for the multidimensional study of self-perceived uselessness, linking resources, environments (family/social support and cultures), health, fixed attributes and behaviors to self-perceived uselessness. Note: The underlined letter of each set of factors was used to name the framework: REHAB. Bold solid arrows represent possible linkages under study, while grey dashed arrows represent possible linkages beyond the scope of this study.
--- Environments (E)
Social environments include family/social support and cultural conditions. The individual assessment of one's usefulness to others at older ages is a social process that reflects the internalization of culturally appropriate attributes [74]. This social process could be influenced by family members who either reinforce or challenge previous perceptions, thus affecting self-perceived aging or usefulness [75].
--- Social support
Social relations with family and friends are a central source of social support in later adulthood [58]. Self-perceptions of aging and usefulness may be influenced by social comparisons with network members (relatives, friends and neighbors) surrounding older adults [46]. The existence of strong social ties and support from others may bolster older individuals' self-esteem, positively influence their self-perception of aging and health [67], and make people aware of positive age-related changes [76].
Older adults who are socially connected generally report more positive feelings about their aging process [77]. The contact hypothesis posits that social contact and interactions could lead to a reduction in negative perceptions of aging and uselessness through improved communication and interaction with members in the network [78]. Studies have shown that fewer social ties and low frequency of interactions are associated with increased perceptions of uselessness [2, 14, 71, 76]. For older men, marriage is an important basis of social support, with spouses both sustaining health behaviors and facilitating physical care, especially when there is a reduction in the network size of family and friends [67]. The socioemotional selectivity theory argues that social network sizes may decline at later ages, but family ties remain important as older adults shift their focus to more emotionally meaningful intimate relationships (i.e., family members and close friends) [1, 79]. However, when social support includes personal care, the receipt of care services from spouses, children, family members or friends could increase negative self-perceptions of aging through intensified feelings of dependence on others, which implies a loss of control and burden [80]. Studies on the association between social services and self-perception of aging are almost nonexistent.
--- Culture
Cultural meanings are essential for self-perception of aging or usefulness [58]. Identity theory emphasizes the influence of society on individuals [78]. Because cultural systems shape one's views about aging [80-82], self-perception of aging is a product of societal beliefs [5] that differ across cultures [58, 64, 82]. Scholars have argued that Eastern cultures emphasize respect for one's elders [50, 76]; for example, societies influenced by Confucian values and the practice of filial piety promote positive views of aging and usefulness in old age [50, 53, 83-85].
In contrast, Western societies hold more negative views about the aging process due to youth-oriented value systems [45, 58, 82, 84, 85]. Consequently, self-perceptions of aging are more positive in Confucian countries like China than in Western cultures [45, 84]. However, the societal attitude toward older adults in China is changing because of industrialization and rapid population aging [48].
--- Behaviors (B)
There is a consensus that healthy behaviors such as frequent participation in leisure activities, exercise and social engagement could lead to positive perceptions of aging, whereas low participation and inactivity may erode feelings of usefulness [47, 48]. This is because activities imply regular commitments, membership, identity and integration [58]. Social engagement may also stimulate multiple body functions (e.g., cognitive, cardiovascular, neuromuscular), protect against cognitive decline [86], bolster active coping strategies and lower the risk of mortality. These activities can thus be important contributors to feelings of meaningfulness, purposefulness and usefulness; in turn, these feelings can reinforce individuals' desires to maintain social connections and engagement [1]. Regular involvement in leisure and physical activities at late ages could buffer against the negative impacts of mishaps, age-related physical changes and life events, and provide opportunities to successfully cope with these challenges and adversities in daily life [34, 58]. Meaningful social roles for older adults could promote the image of older adults at the societal level [58]. On the other hand, a lack of participation in leisure and social activities could increase feelings of loneliness, isolation, abandonment and distress, and foster a negative perception of aging.
--- Health (H)
Health can be considered the most important element in the self-assessment of aging and usefulness [5, 45, 58, 84].
Declines in functioning and health status may prohibit older adults from providing meaningful services to others, and thus negatively impact perceptions about their level of usefulness [2]; better physical health (few chronic conditions, no functional disability) can be associated with more positive feelings about aging [77]. One recent study revealed that the presence of various health problems (in terms of chronic conditions, poor functioning and greater disability) was associated with more negative perceptions of aging or uselessness [67]. Evidence further shows that physical health may play a more central role in self-perceptions of aging than cognitive function [45]. Psychological well-being could reduce disease, disability and mortality through protective behaviors and thus eventually improve positive perceptions of aging [58].
--- Methods
--- Study sample
We pooled four waves of the Chinese Longitudinal Healthy Longevity Survey (CLHLS) in 2005, 2008-2009, 2011-2012 and 2014 to increase the sample size and obtain more reliable results. The pooled datasets were constructed longitudinally, similar to some recent studies [20]. Three waves in 1998, 2000 and 2002 were not included in this analysis because many important variables were not available. The CLHLS is conducted in a randomly selected half of the counties/cities in 22 provinces where Han is the majority ethnicity. Nine predominantly minority provinces were excluded to avoid inaccuracy of age-reporting at very old ages (e.g., ages 90+) among minorities [87]. The total population of these 22 provinces accounted for 82% of the total population of China in 2010. The analytical sample for this study consisted of 26,624 respondents who contributed 48,476 observations from 2005 to 2014. The sampling procedures and assessments of data quality of the CLHLS can be found elsewhere and thus are not detailed here [20, 87].
--- Measurements
--- Self-perceived uselessness
The CLHLS designed a single question to collect data on self-perceived uselessness: "As you age, do you feel more useless?" The wording is almost identical to that of the "As you get older, you are less useful" item in the Attitude Toward Own Aging subscale of the Philadelphia Geriatrics Center Morale Scale [3, 10]. There are six response categories for self-perceived uselessness based on frequency: always, often, sometimes, seldom, almost never/never, and unable to answer. To obtain more reliable results, we reclassified them into three levels of frequency plus one special category: always/often (high frequency), sometimes (moderate frequency), seldom/never (low frequency) and unable to answer. The main purpose of keeping "unable to answer" as a response category was to keep the original information intact and to better reflect true associations with levels of self-perception, including being unable to assess due to poor health. Of the participants who selected "unable to answer," about 90% were unable to answer due to poor health [20].
--- Factors associated with self-perceived uselessness
Based on the REHAB framework proposed above, we modeled five sets of factors to examine whether they are associated with self-perceived uselessness: resources (R), environments (E), health conditions (H), fixed attributes (A) and behaviors (B). The fixed attributes (A) included age, sex (men vs. women), ethnicity (Han vs. non-Han) and two predisposition variables. The variable age (in years) was grouped into 65-79, 80-89, 90-99 and 100+. Optimism was measured by the question "do you look on the bright side of things?" and self-control was measured by the question "do you have control over the things that happen to you?". Both predisposition variables have six response categories: always, often, sometimes, seldom, never, and not able to answer.
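The four-level recoding of the outcome described above can be sketched as a simple lookup. This is an illustrative sketch only: the string labels are paraphrased from the text and are assumptions, not the CLHLS codebook's actual coded values.

```python
# Illustrative sketch of collapsing the six response options into the four
# analytic levels used in this study. Labels are assumptions, not the
# survey's actual codes.
RECODE = {
    "always": "high",
    "often": "high",
    "sometimes": "moderate",
    "seldom": "low",
    "almost never or never": "low",
    "unable to answer": "unable to answer",
}

def recode_uselessness(response: str) -> str:
    """Map one raw response to its analytic frequency level."""
    return RECODE[response.strip().lower()]
```

For example, `recode_uselessness("Often")` yields the high-frequency level, while "seldom" and "almost never or never" both collapse to low frequency.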
We combined always and often into one category (high), and combined sometimes, seldom and never into another category (low). For the respondents who were not able to answer the questions, we imputed them into one of the five categories by assuming that their answers would be the same as those who answered the question if they had the same demographics, resources, family/social support, behaviors and health conditions. Resources (R) were mainly measured by the respondent's socioeconomic status (SES) that included residence (urban vs. rural), years of schooling (0, 1-6 and 7+), lifetime primary occupation (white collar occupation vs. others), economic independence (having a retirement wage/pension and/or own earnings vs. no), and family economic conditions (rich vs. fair/poor). Education of other family members, including years of schooling of spouse (0, 1-6, 7+ and missing/no spouse), coresident children/grandchildren (0, 1-6, 7-9, 10+ and missing/no children/grandchildren), and father (0, 1+ and missing) were also considered as SES factors. Around 15-40% of the respondents did not provide information for educational attainment levels of other family members because they could not remember or the question was not applicable (e.g., no coresident children/grandchildren, never married), so we kept a category of missing to fully reflect the data. Considering urban-rural residence as an SES factor is a common practice in China due to significant rural-urban differences in economic development [88]. Social environmental factors (E) were measured by family/social support and cultural context. The former included marital status (currently married vs. no), most frequently contacted person (family member, friend/relative and nobody), most trusted person (family member, friend/relative and nobody), most helpful person (family member, friend/relative and nobody), availability of community-based care services in the neighborhood (yes vs. 
no), and availability of community-based social activities and entertainment services in the neighborhood (yes vs. no). Proxy factors for culture included coresidence with children (yes vs. no) and the match between expected living arrangements (coresidence with children, living alone or with spouse only, and institutionalization) and actual living arrangements (concordance vs. discordance). Other measures of culturally expected support include receiving financial and instrumental support (money or food) from children (yes vs. no), and giving financial and instrumental support to children (yes vs. no). In the literature on aging and social gerontology, coresidence has been used either as a proxy of social connectedness and social support [89] or as a cultural tradition [90-96]. Many studies argue that the high prevalence of coresidence with adult children among older parents in China and other East Asian countries is mainly due to the long history of Confucianism [97]. In the present study, we considered coresidence as a cultural tradition. Behavioral factors (B) were measured by currently smoking (yes vs. no), currently consuming alcohol (yes vs. no), regularly exercising (yes vs. no), and frequency of leisure activities and social participation. Levels of leisure activities were constructed from the sum of the frequencies of six items: doing housework, gardening, raising domestic animals or poultry, reading books/newspapers, watching TV/listening to radio, and any other personal outdoor activities. Each item was measured on a five-point Likert scale from never to almost daily. The reliability coefficient of these six items is 0.66. Tertiles were used to classify the sample into three equal-sized groups: low, moderate and high levels of leisure activity. Social participation was measured by two questions: "do you participate in social activities?" and "do you play cards/mah-jong?".
We similarly classified the sample into three groups: low level (never involved in these two activities), high level (involved in one of the two activities 1-7 times per week), and moderate level (the rest of the sample). Health conditions (H) included activities of daily living (ADL) disability, instrumental activities of daily living (IADL) disability, cognitive function, chronic disease conditions and subjective wellbeing. ADL disability was measured by self-reported ability to perform six daily activities (bathing, dressing, indoor transferring, toileting, eating and continence). Following the common practice in the field [18], we classified the respondents into two groups: needing assistance in any one of the six tasks (ADL dependent/disabled) versus needing no assistance in any of the six tasks (ADL independent/not-disabled). IADL was measured by self-reported ability to perform eight activities: (a) visiting neighbors, (b) shopping, (c) cooking, (d) washing clothes, (e) walking one kilometer, (f) lifting 5 kg, (g) crouching and standing up three times, and (h) taking public transportation. In a similar vein, we dichotomized the respondents into two groups: needing help in performing any of these eight IADL items (IADL disabled/dependent) versus needing no help in performing any of the eight activities (IADL not-disabled/independent). Cognitive function was measured by a validated Chinese version of the Mini-mental State Examination (MMSE), which included six domains of cognition (orientation, reaction, calculation, short memory, naming and language) with a total score of 30 [87]. We dichotomized the respondents into impaired (scores < 24) and unimpaired (scores 24-30) based on the cut-point commonly used in aging research [87]. An alternative cut-point score (18) was also examined and yielded very similar results.
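A minimal sketch of how the dichotomized health indicators and the leisure-activity tertiles described above could be derived. The function names and the 1 = needs help / 0 = independent item coding are illustrative assumptions, not the authors' actual code (the study's analyses were run in Stata).

```python
# Hedged sketch of the variable constructions described above; all names
# and codings are assumptions for illustration.

def any_dependence(items):
    """ADL/IADL disabled (1) if help is needed on any item, else 0."""
    return int(any(items))

def mmse_impaired(score, cutpoint=24):
    """Cognitively impaired (1) if the MMSE score falls below the cut-point."""
    return int(score < cutpoint)

def tertile_level(score, all_scores):
    """Assign 'low'/'moderate'/'high' by the score's tertile in the sample."""
    ranked = sorted(all_scores)
    n = len(ranked)
    t1, t2 = ranked[n // 3], ranked[2 * n // 3]
    if score <= t1:
        return "low"
    if score <= t2:
        return "moderate"
    return "high"
```

Note how the ADL/IADL rule mirrors the paper's "needing assistance in any one task" definition, and how lowering the MMSE cut-point to 18 (the alternative the authors examined) only requires changing the `cutpoint` argument.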
Chronic disease condition was dichotomized into whether the respondent reported any disease at the time of survey from a list of more than twenty conditions (hypertension, heart diseases, stroke, diabetes, cancer, etc.) versus none. Fewer than 5% of the respondents had 2+ conditions, and the prevalence of disease conditions was comparable to that found in other nationwide surveys [87]. Subjective (psychological) wellbeing was measured by two variables: "do you feel lonely?" (loneliness) and "do you feel as happy as you did when you were younger?" (joyfulness). Scoring for these variables is identical to that of optimism and self-control, the two predisposition variables (high vs. low).
--- Analytical strategy
Because the outcome variable of self-perceived uselessness included four categories (high frequency, moderate frequency, low frequency and unable to answer), multinomial logistic regression models were employed to examine which factors were associated with frequency of self-perceived uselessness compared to the low level (reference group). The results were reported as relative risk ratios (RRR) [98]. Results for "unable to answer" are not presented, to better focus on the research objectives. In order to obtain more robust and reliable results, we pooled all four waves of data together and adjusted for intrapersonal correlation across waves. Seven different models were analyzed: six models for the individual sets of factors (two models for environmental factors) and one full model that included all sets of study factors. Because fixed attributes include demographics that are the most basic characteristics of respondents, and because there are substantial differences in health and resources between demographic groups [87], fixed attributes were included in all seven models. A variable reflecting survey year was also included in all models to account for possible trends over time.
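As a reminder of how the reported effect sizes read, an RRR from a multinomial logit is the exponentiated coefficient for a given outcome level versus the reference category. The arithmetic can be sketched as follows; the coefficients implied here are invented for illustration, not estimates from this study.

```python
import math

# Illustrative arithmetic only: RRR = exp(beta) compares an outcome level
# (e.g., high frequency of self-perceived uselessness) against the
# reference level (low frequency).

def rrr(beta):
    """Exponentiate a multinomial-logit coefficient into an RRR."""
    return math.exp(beta)

def as_percent_change(rrr_value):
    """Express an RRR as the percent change in relative risk it implies."""
    return (rrr_value - 1.0) * 100.0
```

For instance, an RRR of 1.69 corresponds to a 69% higher relative risk and an RRR of 0.84 to a 16% lower one, which is how the percentage statements in the results are phrased.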
With few exceptions that we noted above (i.e., educational attainments of spouse and father, two fixed attributes and two subjective wellbeing variables), the proportions of missing values for the other variables under study were less than 2%. To minimize biases, we used multiple imputation techniques to impute these missing values, although imputing with the mode of each categorical variable produced very similar results. Sampling weights were not applied in the regression analysis because the CLHLS weight variable does not reflect the national population distributions with respect to variables other than age, sex and urban or rural residence [99]. Weighted regressions could unnecessarily enlarge standard errors [100], so we chose to present unweighted regression models, which produce unbiased coefficients when variables related to sample selection (i.e., age, sex and urbanicity) are included [101]. We found that multicollinearity among variables was not a problem, with all variance inflation factors less than 3 [102, 103]. All analyses were performed using Stata version 13.1 [98].
--- Results
--- Prevalence of self-perceived uselessness
The distributions reported here were derived from all observations, although they were similar when based on the number of respondents. In the sample, low frequency of self-perceived uselessness was most prevalent (33.0%), followed by moderate frequency (31.2%) and high frequency (23.0%). About 12.8% were not able to answer the question. The weighted distribution of self-perceived uselessness was 19.2% for high frequency, 34.0% for moderate frequency, 43.8% for low frequency, and 3.0% for unable to answer (not shown). These weighted estimates suggest that about one-fifth of older adults in contemporary China often or always feel useless. The weighted percentage for high frequency was 22% for women and 16% for men.
--- Factors associated with self-perceived uselessness
Tables 2 and 3 present relative risk ratios (RRR) from multinomial logistic regression models of REHAB factors associated with high frequency and moderate frequency of self-perceived uselessness relative to low frequency. We summarize several major findings below.
--- Fixed attributes were strongly and consistently associated with self-perceived uselessness
Model I in Table 2 shows that all fixed attribute factors are associated with risk of high frequency of self-perceived uselessness. Compared to younger ages 65-79, octogenarians (ages 80-89), nonagenarians (ages 90-99) and centenarians (ages 100+) experienced increased risk of high frequency of self-perceived uselessness relative to low frequency by 69, 76 and 76%, respectively. These risk ratios were slightly attenuated in Models II through IV when resources and environmental factors were taken into account. However, when behavioral factors were considered (Model V), these risk ratios were substantially reduced and non-significant for the centenarian group. Interestingly, when health conditions were considered in the analysis (Model VI), octogenarians and centenarians tended to have 8 and 22% lower RRR for high frequency of self-perceived uselessness, respectively; these results were even more pronounced in the full model, with reduced risks of 20% for nonagenarians and 35% for centenarians compared to young-old adults aged 65-79 (Model VII). The reduced risk at the oldest ages, independent of health statuses and health behaviors, was similar but weaker for moderate frequency versus low frequency (Table 3). Male gender was associated with 18-30% lower RRR for high frequency of self-perceived uselessness relative to low frequency, compared to women, when each set of factors was added individually (Models I to VI).
However, no gender difference was found when all sets of factors were simultaneously included in the model (Model VII). Results for moderate frequency versus low frequency in Table 3 were similar despite reduced RRRs. Participants of Han ethnicity tended to have 38-54% greater RRR for high frequency of self-perceived uselessness relative to low frequency, compared to participants of minority ethnicity (Table 2); no ethnic difference was found for moderate frequency versus low frequency (Table 3). High levels of optimism and self-control were associated with 48-66% and 11-29% lower RRR, respectively, for high frequency relative to low frequency of self-perceived uselessness (Table 2), although their RRRs were reduced when comparing moderate frequency with low frequency (Table 3).
--- People with more resources tend to report low frequency of self-perceived uselessness
Model II in Table 2 shows that more socioeconomic resources were associated with lower RRR for high frequency of self-perceived uselessness relative to low frequency. Specifically, compared to zero years of schooling, 1-6 years and 7+ years of schooling were associated with 16 and 31% lower RRR for high frequency of self-perceived uselessness relative to low frequency, respectively. Such RRRs were only mildly attenuated yet still significant in the full model (Model VII in Table 2). Higher educational levels of spouse and father were also independently associated with reduced RRR for reporting high frequency of self-perceived uselessness relative to low frequency, but such associations were weaker compared to the respondent's own educational level. When predicting risk of moderate frequency of self-perceived uselessness versus low frequency, these RRRs were slightly attenuated (Table 3).
Living in an urban area, a white-collar occupation, economic independence and good family economic condition were associated with 12-37% lower RRR for high frequency of self-perceived uselessness relative to low frequency, compared to counterparts with lower levels of resources. The reduced risk ratios for economic independence and good family economic status were moderately attenuated yet still significant in the full model, while the urban residence and white-collar occupation effects remained stable. This is also the case in Models II and VII of Table 3 when comparing moderate with low frequency of self-perceived uselessness.
--- Risk of self-perceived uselessness was lower in supportive and culturally traditional social environments
Results in Model III in Table 2 reveal that, as a component of the social environment, family/social support factors were significantly associated with self-perceived uselessness. Specifically, married older adults had an 18% lower RRR for high frequency of self-perceived uselessness relative to low frequency compared to unmarried counterparts. Compared to having a family member as the most frequently contacted person, having a friend/relative and having no one to contact were associated with 19 and 80% higher RRR for high frequency of self-perceived uselessness relative to low frequency, respectively. Results for the most trusted person were marginally significant. Compared to having a family member as the most helpful person, having a friend/relative as the most helpful person or having no one to ask for help was associated with 26% or 22% higher RRR for reporting high frequency of self-perceived uselessness relative to low frequency, respectively. Having available community-based services for social activities and entertainment, but not for care, was associated with 24% lower RRR for reporting high frequency of uselessness relative to low frequency.
However, most of these RRRs were not significant when all other sets of factors were simultaneously controlled for in the model (Model VII). The findings in Model III in Table 3 are similar to those in Table 2 except that some of these variables were still significant in Table 3. Results in Model IV represent cultural environmental factors that were associated with self-perceived uselessness. Coresidence with children was associated with 13% lower risk ratio for reporting high frequency of self-perceived uselessness relative to low frequency, compared to non-coresidence with children. Concordant coresidence (respondent wants to live with children and does live with children) was associated with 11% lower RRR for high frequency of self-perceived uselessness relative to low frequency, compared to those who did not fulfill their expectation of coresidence or were institutionalized (discordance). Giving financial and instrumental support to children was associated with 38% lower RRR for high frequency of self-perceived uselessness relative to low frequency, compared to those who did not provide for children. Interestingly, receiving financial and instrumental support from children was associated with greater RRR for high frequency of self-perceived uselessness relative to low frequency in Model IV, but this upward financial transfer was not significant in the full model. The RRRs of moderate frequency relative to low frequency in Table 3 were similar to those for high frequency relative
to low frequency. --- Good behaviors were associated with reduced risk of self-perceived uselessness Good health behaviors were associated with lower risk of high or moderate frequency of self-perceived uselessness (Model V in Tables 2 and 3), independent of all other study factors (Model VII in Tables 2 and 3).
Specifically, current consumption of alcohol, regular exercise, participation in leisure activities and social participation were associated with 18-54% lower risk ratios for reporting high frequency of self-perceived uselessness relative to low frequency (Model V in Table 2), while smoking was associated with a 10% higher risk ratio for high frequency versus low frequency; with the exception of current smoking, these RRRs were still significant in the full model despite attenuated associations. Slightly weaker associations were found for these health behaviors in the case of moderate frequency versus low frequency. --- Health conditions were most strongly related to self-perceived uselessness Health conditions were all significantly associated with self-perceived uselessness (Model VI in Tables 2 and 3), even when controlling for all other factors in the REHAB model (Model VII in both tables). ADL and IADL disability, cognitive impairment and having 1+ chronic disease conditions were associated with a 37-120% increased RRR for high frequency of self-perceived uselessness relative to low frequency (Model VI in Table 2), and a 12-57% increased RRR for moderate frequency relative to low frequency (Model VI in Table 3). These RRRs were only attenuated to 22-96% (Table 2) and to 21-52% (Table 3) in the full model. Loneliness was associated with a 7-fold higher RRR for high frequency of self-perceived uselessness relative to low frequency (Table 2) and a 2-fold higher risk ratio for moderate frequency relative to low frequency (Table 3), while high joyfulness halved the RRRs in both cases. These effects were only mildly weakened in the full model in both cases. --- No clear trend over time in self-perceived uselessness The year of survey was also significant in some cases, yet without a clear trend over time.
Overall, respondents in the 2014 wave had a 21-71% greater RRR for high frequency of self-perceived uselessness relative to low frequency, compared to the 2005 wave. No difference was found for the other waves compared to the 2005 wave. However, in the case of moderate frequency versus low frequency, respondents in the 2014 wave had a 9-32% higher RRR than those in the 2005 wave, and the 2008 wave tended to be associated with a 5-18% lower RRR for high/moderate frequency of self-perceived uselessness compared to the 2005 wave. The sampling strategy differed slightly between waves, so verification of such trends deserves closer analysis. --- Discussion Self-perceived uselessness, i.e., an individual's assessment or perception of one's usefulness to others at older ages, is a social process [1][2][3][4][5][58] that can be influenced by several types of factors. Based on a unique, very large, multiwave, nationally representative dataset of older adults in China, the present study developed the multidimensional REHAB framework to examine factors that could be associated with self-perceived uselessness. To our knowledge, the present study is among the first to address calls to systematically examine predictors of self-perceived uselessness [18,20,76]. Overall, we found that a wide range of variables within the factors of socioeconomic resources (R), environments (E), health (H), fixed attributes (A) and behaviors (B) were associated with self-perceived uselessness. Specifically, high and moderate frequencies of self-perceived uselessness were more likely among individuals who were older, women, of Han ethnicity, less optimistic, less self-controlled, in poor health, and those who had fewer social supports, fewer resources and unhealthy behaviors. Cultural factors such as coresidence with children and giving children instrumental support were associated with lower frequency of self-perceived uselessness.
One unique finding is the relationship between the fixed attribute of age and self-perceived uselessness. We found that older age was associated with a greater relative risk ratio (RRR) for high or moderate frequency of self-perceived uselessness relative to low frequency, which is in line with many previous studies [47,49,[59][60][61]. The finding is justifiable because at older ages, health tends to decline and activities tend to decrease, leading to diminished opportunities to help others [48]. Moreover, when health conditions and other factors were taken into consideration, the RRRs for older ages were reversed, indicating that with wellbeing held constant, the older the respondents were, the less frequently they felt useless. Empirical evidence indicates that those who survive to oldest-old ages are a very selected group compared to those in their cohort who died or are in a poorer state of health [49]. Long-lived persons have likely developed excellent coping skills to overcome health decline and daily challenges [52]; as a result, they may perceive any level of usefulness positively. This is especially true in a Confucian country where long-lived persons are generally respected. On the other hand, when young-old adults experience new negative events like illness, these problems can negatively impact their perceptions of usefulness in the absence of coping skills that develop over time [49]. Our findings for the fixed attribute of age are in line with one recent study that found no difference in self-perception of aging among the oldest-old aged 80 or older as compared to older adults aged 60-69 when health was controlled [51]. The second important finding is the importance of socioeconomic resources, not only the respondent's own education but also the education of significant others, in relation to self-perceived uselessness.
We found that compared to those with no schooling, higher levels of education were associated with a lower risk ratio for high and moderate frequency of self-perceived uselessness relative to low frequency. This finding is in line with many other previous studies [45,58], but contradicts one recent study of older adults in Canada and Japan that showed either a negative association or no association between respondents' education and their self-perception of aging [70]. This may be due to the lower overall level of educational attainment of the current cohorts of Chinese older adults. About two-thirds of the respondents in the current study were illiterate, whereas the proportion of respondents with 16 years of schooling was about 10% in the Japanese sample and 38% in the Canadian sample. We additionally found that the educational attainment of significant others (spouse or father) was associated with respondents' self-perceived uselessness, especially when other sets of factors were not present. The significant association of spouse's education suggests that their knowledge and related attitudes and perceptions could play a role in the formation of respondents' self-perception of uselessness at later ages [69]. The significant role of father's education implies that parental education could also have a direct or indirect influence on one's internalized perception of aging or usefulness from early life through old age. However, the roles of significant others diminished when all measured covariates were included, particularly due to intergenerational similarity in SES within families. In sum, every family member's education could matter for respondents' self-perception of uselessness, but the more proximate measure of their own education mattered most. The third unique finding is the association between self-perceived uselessness and the cultural environmental factors of coresidence and intergenerational transfer that are uniquely important in China.
Coresidence with adult children and concordance between expected and actual coresidence were associated with a lower risk ratio for high or moderate frequency of self-perceived uselessness relative to low frequency. Because China is a Confucian society, having a large family and coresiding with children are considered a tradition [93]. Most members of older generations consider family life, good intergenerational relations and coresidence with children to be the most important parts of their daily lives [42]. From the perspective of older adults, coresidence with children is important to ensure communication, contact and shared views and understanding with children, thus improving family solidarity. Coresidence with children also reflects the cultural tradition, which is important for older generations. That is why those who expect to coreside with children and fulfill that expectation have the greatest reduction in RRRs for high or moderate frequency of self-perceived uselessness compared to those whose coresidence expectation was not met. Furthermore, older parents can provide some assistance to coresiding adult children by doing housework and taking care of grandchildren, which could enhance older adults' feelings of usefulness to the family [93]. All of these processes would eventually benefit all domains of health and improve positive perceptions about aging among older adults. A separate but related cultural norm, receiving financial and instrumental support from children, was associated with a greater RRR for high frequency of self-perceived uselessness relative to low frequency. This seemingly counterintuitive finding is interpretable. Needing financial or instrumental support from children may indicate difficulties in older adults' financial condition, poor health or other needs. As such, older adults may interpret the receipt of transfers as a family burden [57,104], reinforcing negative perceptions about their usefulness to the family.
Taken together with the coresidence patterns, we argue that the emotional support of family members may be more important in influencing older adults' perceptions about their usefulness than financial or instrumental support. By contrast, providing financial and instrumental support to children was associated with less frequent self-perceived uselessness. This is likely because actively and capably helping children could increase older adults' self-esteem and their perception of their value to family members [93,105]. Additionally, providing support to children entails frequent contact with family members, which can help to avoid social isolation, loneliness and unhealthy behaviors [58]. In sum, our findings related to cultural components imply that culturally normative family support is important to the formation of self-perceived usefulness in old age, which further supports the importance of family members as central forms of social support [58]. Several other fixed attributes were important. For example, we found that women's greater RRRs for high frequency of self-perceived uselessness disappeared when health and all other factors were modeled. This is consistent with several previous studies [47,51], and is likely influenced by women's traditional gender roles and their poorer health compared to men [47]. Compared to respondents of minority ethnicity, Han older adults were more likely to report high frequency of self-perceived uselessness relative to low frequency, but there was no ethnic difference between moderate and low frequency. The results for optimism and self-control are expected and consistent with the literature, because optimism indicates that one is open-minded, hopeful and secure about the future, which could help one to effectively cope with adversities and conflicts in daily life [58,106], and because self-control enables one to be actively engaged in health-promoting behaviors, which in turn develop a positive perception of aging [58].
Resources other than education were also important, including urban residence, white-collar occupation, economic independence and good economic condition. This is possibly because non-education resources increase quality of life. If people feel good about their life and living conditions at older ages, they may be less likely to see themselves as useless in old age [69]. Older adults with more resources can also afford services, products and modifications that allow them to continue to contribute despite setbacks like poor health [58], and enjoy better services that could help them overcome difficulties or adversities faced in daily life. These results are in line with previous findings that individuals who were well educated and had lower levels of economic hardship were significantly more likely to report greater levels of positive beliefs about aging [46]. Overall, one's self-perception of aging is closely linked with the resources available to that person [68]. Individuals with more resources are more likely to have positive attitudes, views and perceptions about aging because they have more opportunities and expectations [69]. Associations between good health behaviors and lower likelihood of self-perceived uselessness, independent of socioeconomic resources, environments, health conditions and fixed attributes, were expected and consistent with previous studies [34,58]. Regular involvement in or maintenance of health behaviors such as leisure activities, exercise and social engagement could stimulate body functions, buffer against negative emotional or psychological distress, develop daily coping skills and increase feelings of meaningfulness [47,48,58,74]. Our findings provide additional evidence emphasizing the potential role of healthy behaviors in preventing self-perceived uselessness. Previous research also suggests that health outcomes may be the factor that most strongly predicts self-perceptions of aging [45].
Among the most common health events associated with the aging process are those pertaining to functional health and disability [45]. Physical health conditions, such as chronic disease, functional disability, sensory performance and number of sick days, may form an underlying basis for self-evaluation of aging and health status [67]. Our findings confirm that health conditions may be the most pronounced predictors of self-perceived uselessness, and that loneliness and disability might be the most significant factors compared to other health outcomes. Given the subjective nature of self-perceived uselessness, however, it is important to acknowledge that self-perceptions are influenced not only by objective health indicators but also by psychological and social factors [67]. Our findings have important policy implications. Given China's large size and the rapid growth of its elderly population [107], the fact that one-fifth of this population reports a high frequency of self-perceived uselessness is a great challenge for public health. Identification of factors associated with self-perceived uselessness provides a great opportunity to target interventions and influence the health and wellbeing of the elderly population. Our findings related to cultural and social support elements imply that interventions should be oriented toward supporting awareness of the value of older adults, the nature of the aging process, and the importance of family support and healthy behaviors. Intervention programs should also aim to increase dialogue between generations and different groups of people, and eventually promote frequent intergenerational contact and geographical proximity or coresidence. Findings related to behaviors suggest that it is crucial to develop volunteer programs that facilitate community-based leisure and social engagement to promote and improve healthy behaviors associated with low frequency of self-perceived uselessness.
Findings on resources and health imply that policies to support those with limited resources and poor health are also key to improving older adults' self-perceived usefulness. The United Nations Sustainable Development Goals set for 2016-2030 provide a global context for addressing aging issues. The theme of the International Day of Older Persons for 2016 was "taking a stand against ageism" [108]. One purpose is to draw global attention to challenging negative perceptions about aging. We hope that programs and events like these, which are consistent with the findings of this study, will influence the Chinese Government to better address self-perception of aging. This study has the following limitations. First, self-perceived uselessness was measured by a single item. Multi-item measures of uselessness would provide a more complete reflection of the concept of uselessness [11], but may be difficult to implement in large-scale epidemiological studies. We encourage additional studies to investigate more sophisticated, positive/negative, and/or domain-specific constructs of self-perceived uselessness (and self-perceived aging more generally) to better understand mechanisms for successful aging [1,109]. Second, we did not examine whether there is an association between changes in self-perceived uselessness and subsequent successful aging. Although previous studies showed that self-perceived uselessness is relatively stable [2,11], changes are still frequent [1]. It would be interesting to investigate predictors of change over a longitudinal study period. Third, the relationships between self-perceived uselessness and some behaviors and health conditions are likely bidirectional. Like most existing studies in the field [76], we did not disentangle such bidirectional associations because doing so was beyond the scope of the study.
More sophisticated methods such as simultaneous equation modeling or structural equation modeling may shed light on this if more waves of data become available. Fourth, in the literature, coresidence has been used either as a proxy for social support or as a cultural tradition [89][90][91][92][93][94][95] that is determined by many other factors such as needs and resources [91,92]. In the present study, we considered it a cultural tradition, which may not completely capture its broad meaning. It remains a challenge to classify coresidence into the correct category and capture its meaning in the context of cultural norms. Fifth, resource factors at the aggregated neighborhood level, such as socioeconomic development and neighborhood attributes, were not considered in the analysis due to lack of data. Because there is a documented association between these factors and self-perception of aging [45], inclusion of these factors would lead to a more fully specified model. Much work remains to fully utilize the important concept of usefulness to intervene and improve the lives and health outcomes of older people as they age. --- Conclusions Based on a unique, large, nationally representative dataset of older adults in contemporary China from 2005 to 2014, this study found that socioeconomic resources (R), environments (E), health (H), fixed attributes (A) and behaviors (B) were associated with self-perceived uselessness at older ages. Specifically, individuals who were younger, men, non-Han, optimistic, self-controlled, healthy and with social support, healthy behaviors and better resources were significantly less likely to report high frequency of self-perceived uselessness. Cultural factors such as coresidence with children and giving children instrumental support were also associated with lower risk of self-perceived uselessness.
Our findings could inform the development of targeted public health programs that aim to promote positive self-perceptions about aging in China, and possibly in other countries. --- Availability of data and materials The CLHLS datasets are publicly available at the National Archive of Computerized Data on Aging (http://www.icpsr.umich.edu/icpsrweb/NACDA/studies/03891). Researchers may obtain the datasets after submitting a data user agreement to the National Archive of Computerized Data on Aging. --- Abbreviations ADL: Activities of daily living; CLHLS: Chinese Longitudinal Healthy Longevity Survey; IADL: Instrumental activities of daily living; MMSE: Mini-Mental State Examination --- Funding The authors declare that they have no financial support for this study. --- Authors' contributions DG designed, drafted and revised the text. DG also supervised the data analysis. YZ drafted and revised the text. JMS revised the paper and interpreted the results. LQ prepared the data and performed the analyses. All authors read and approved the final version of the manuscript. --- Competing interests DG is a Section Editor of the Journal. JMS is an Associate Editor of the Journal. --- Consent for publication Not applicable. --- Ethics approval and consent to participate No ethics approval was required for this study since the datasets used are obtained from a publicly accessible database of the Chinese Longitudinal Healthy Longevity Survey (http://www.icpsr.umich.edu/icpsrweb/NACDA/studies/03891) with a signed data user agreement. --- Disclaimer Views expressed in this paper are solely those of the authors, and do not necessarily reflect the views of Nanjing Normal University, University of the Sciences or the United Nations.
Background: Self-perceived uselessness is associated with poor health and high mortality among older adults in China. However, it is unclear which demographic, psychosocial, behavioral and health factors are associated with self-perceived uselessness. Methods: Data came from four waves (2005, 2008, 2011 and 2014) of the largest nationwide longitudinal survey of the population aged 65 and older in China (26,624 individuals contributed 48,476 observations). This study aimed to systematically investigate factors associated with self-perceived uselessness based on the proposed REHAB framework that includes resources (R), environments (E), health (H), fixed attributes (A) and behaviors (B). Self-perceived uselessness was measured by a single item: "with age, do you feel more useless?" and coded by frequency: high (always and often), moderate (sometimes) and low (seldom and never). Multinomial logistic regression models with low frequency as the reference category were employed to identify REHAB risk factors associated with self-perceived uselessness. Results: Most factors in the REHAB framework were associated with self-perceived uselessness, although some social environmental factors in the full model were not significant. Specifically, more socioeconomic resources were associated with reduced relative risk ratio (RRR) of high or moderate frequency of self-perceived uselessness relative to low frequency. More environmental family/social support was associated with lower RRR of high frequency of self-perceived uselessness. Cultural factors such as coresidence with children and intergenerational transfer were associated with reduced RRR of high frequency of self-perceived uselessness. Indicators of poor health status such as disability and loneliness were associated with greater RRR of high or moderate frequency of self-perceived uselessness. 
Fixed attributes of older age and Han ethnicity were associated with increased RRR of high frequency of self-perceived uselessness; whereas optimism and self-control were associated with reduced RRR. Behaviors including regular consumption of alcohol, regular exercise, social participation and leisure activities were associated with reduced RRR of high frequency of self-perceived uselessness. Conclusions: Self-perceived uselessness was associated with a wide range of factors in the REHAB framework. The findings could have important implications for China to develop and target community health programs to improve self-perceived usefulness among older adults.
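The Methods summary above specifies multinomial logistic regression with low frequency as the reference category. A minimal sketch of that model structure, with made-up coefficients (none of the numbers are the paper's estimates), shows where the relative-risk ratios come from:

```python
import math

# Minimal sketch of the multinomial-logit structure the paper describes:
# "low frequency" is the reference category, so its linear predictor is fixed
# at 0, and exp(coefficient) for the other categories is the RRR. All
# coefficient values here are illustrative only.

def category_probabilities(x, betas):
    """x: covariate dict; betas: {category: {covariate: coef, '_const': c}}."""
    scores = {"low": 0.0}  # reference category
    for cat, b in betas.items():
        scores[cat] = b.get("_const", 0.0) + sum(
            coef * x[name] for name, coef in b.items() if name != "_const"
        )
    denom = sum(math.exp(s) for s in scores.values())
    return {cat: math.exp(s) / denom for cat, s in scores.items()}

betas = {
    "moderate": {"_const": -0.5, "disabled": 0.3, "schooling_yrs": -0.03},
    "high": {"_const": -1.5, "disabled": 0.8, "schooling_yrs": -0.06},
}

p_disabled = category_probabilities({"disabled": 1, "schooling_yrs": 2}, betas)
p_healthy = category_probabilities({"disabled": 0, "schooling_yrs": 2}, betas)

# The ratio of P(high)/P(low) between the two covariate profiles equals
# exp(0.8), i.e., the RRR attached to the disability coefficient.
rr_disabled = p_disabled["high"] / p_disabled["low"]
rr_healthy = p_healthy["high"] / p_healthy["low"]
```

In the paper's terms, a fitted coefficient of 0.8 on disability for the "high" contrast would be reported as an increased RRR of about 120% for high frequency relative to low frequency.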
Introduction Research has demonstrated that a risk and protective factor model is essential in conceptualizing youth behavior. The presence of risk factors (such as poor family bonding and educational stress) has been linked to negative outcomes for youth, including use of alcohol and drugs (Catalano & Hawkins, 1996; Szapocznik & Coatsworth, 1999), delinquency (Arthur et al., 2002), homelessness (Bassuk et al., 1997), suicide and mental health disorders (Borowsky, Resnick, Ireland & Blum, 1999). A recent synthesis of studies by the Institute of Medicine (IOM; 2009) also explored the developmental impact of risk factors on youth, including their influence on mental, emotional, and behavioral health outcomes. For example, the presence of intensely stressful experiences in early childhood is linked to clinical anxiety later in life, just as supporting early learning is linked to healthier developmental outcomes. Of particular interest in recent risk factor research are studies of Latino populations. Latinos are expected to reach one quarter (25%) of the U.S. population (about 97 million) by the year 2050 (U.S. Department of Health and Human Services [DHHS], 2001), and are at disproportionate risk for negative behavioral health outcomes such as substance use and alcoholism (National Survey on Drug Use and Health, 2007), sexually transmitted illnesses such as HIV (Centers for Disease Control and Prevention [CDC], 2007), and mental health concerns (Prado et al., 2006). Middle-school-aged Latino youth (11-14 years old) are of particular concern, with large increases in this youth population expected in the next century (American Community Survey, 2009). Furthermore, researchers increasingly acknowledge that there is a dearth of knowledge on how culturally related factors such as discrimination and language difficulties impact healthy development in Latino families and that increased attention is needed (Avison & Gotlib, 1994; Cordova & Cervantes, 2010).
It is known, however, that Latinos experience a myriad of culturally based risk factors (Cervantes, Kappos, Duenas & Arellano, 2003). In a series of studies examining psychosocial stress and acculturation among adults and youth, Cervantes and colleagues found important culturally based stressors within Latinos' major life domains. For example, Cordova and Cervantes (2010) identified discrimination and racism as key stressors to which Latino adolescents are exposed on a daily basis. Additionally, Cervantes and colleagues (2011) identified eight key life domains of stress specifically related to Latino youth in a sample of more than 1,600 youth, including stress related to family, education, immigration, marginalization, and discrimination. Acculturation stress has been associated with increased risk for substance abuse (Vega & Gil, 1998), alcohol consumption (Caetano, Ramisetty-Mikler, Caetano Vaeth, & Harris, 2007), increased rates of cigarette smoking (Detjen, Nieto, Trentthiem-Dietz, Fleming, & Chasan-Taber, 2007), and family stress (Cervantes & Cordova, 2011). This increase in stress may lead to greater difficulty in family relationships, decreases in parental oversight, and risky behaviors among adolescents (Szapocznik & Kurtines, 1980). Given these unique concerns, culturally informed behavioral health interventions for Latinos are more effective (Santisteban & Mena, 2009). Some progress has been made in the development of evidence-based prevention programs that target racial and ethnic minority youth (see, for example, Pantin et al., 2003; Santisteban et al., 1996; Marsiglia et al., 2005). In a recent investigation by Szapocznik and colleagues (2007), however, only four randomized drug abuse preventive intervention models existed that targeted Latino youth ages 12-17 and included samples where 70% of participants were Latino youth.
Further, the Substance Abuse and Mental Health Services Administration (SAMHSA; 2010) recently published a report indicating that more emphasis is needed on the development of integrated behavioral health programs that address mental health, substance abuse, HIV, and other factors associated with poor development. As risk factors are linked to multiple negative outcomes (IOM, 2009), prevention programs that can effectively address multiple risk factors are likely to have enhanced outcomes and may be more cost-effective. Familia Adelante (FA; originally titled The Hispanic Family Intervention Program) was initially developed at the National Institute of Mental Health (NIMH)-funded Spanish Speaking Mental Health Research Center at the University of California Los Angeles. Using the stress-illness paradigm (Aneshensel, 1992) as a framework for understanding risk factors, the developer drew on qualitative study findings (interviews) on stress and coping mechanisms (Padilla, Cervantes, Maldonado, & Garcia, 1987) as well as quantitative survey studies of Hispanic adult and adolescent stress (Cervantes, Padilla, & Salgado de Snyder, 1990; Padilla et al., 1987). From these studies, key risk domains were articulated and FA modules were developed. For pilot testing (Cervantes, 1993), in-depth qualitative interviews combined with psychometric scales, including the Hispanic Stress Inventory (Cervantes, Padilla, & Salgado de Snyder, 1991) and the Conners Parent Rating Scale (Conners, Sitarenios, Parker, & Epstein, 1998), were used to evaluate program effectiveness. The first version of the curriculum showed reductions in family stress and youth behavior problems, enhanced academic and psychosocial coping, and decreased substance use patterns in Latino youth (Cervantes, 1993). Following the development of the original curriculum for youth and parents (Cervantes, 1993), Familia Adelante was again tested through the SAMHSA-funded Blythe Street Prevention Project in 1998. 
The drug prevention project included youth (n = 133) and their parents (n = 63) and again demonstrated positive findings (Cervantes & Pratt, 1998). This study found significant improvements (p < .001) in knowledge and skills for both youth and parents. For example, youth reported an increase in conflict resolution skills. Parents participating in the FA curriculum reported an increase in gang awareness, cultural pride, drug knowledge, and conflict resolution. There was also a trend toward improved emotional health across both youth and parents (Cervantes & Pratt, 1998). The current study was conducted from 2003 to 2006 and tested through six cohorts of families in a school-based setting. Prior to implementation, some exercises were added to address HIV knowledge, attitudes, and beliefs in response to requirements set forth by the funding agency. --- Purpose of Study The purpose of the current study was to test the multi-risk reduction Familia Adelante curriculum for its effectiveness with high-risk Latino youth. Specifically, the curriculum was designed to enhance family and peer communication, prevent or reduce substance abuse, increase HIV knowledge and perceptions of harm about high-risk behavior, and improve school bonding and behavior. It was also designed to enhance psychosocial coping and life skills in both youth and their parents and to decrease substance use and emotional problems by focusing on ways to cope with acculturative stress. Research suggests benefits to including families in the treatment of high-risk youth (Hoagwood, Burns & Weisz, 2002; Kumpfer, Alexander, McDonald & Olds, 1998). Currently, only two evidence-based approaches exist that target multiple risk factors, include families, and have a high representation of Latinos in their samples: Familias Unidas (Pantin et al., 2003) and Brief Strategic Family Therapy (Santisteban et al., 1996). 
In an effort to expand this area, Familia Adelante was developed as a family intervention that is administered to youth and parents concurrently but separately in a group format. --- Methods --- Procedure Familia Adelante consists of twelve 90-minute group sessions for youth and their parents. Youth participants were referred to the Familia Adelante program as a result of experiencing behavioral or emotional problems. Upon receiving a referral, project staff contacted parents and scheduled an appointment for the first session. All group sessions were held at a convenient school location and conducted during afterschool hours. Youth and parent group sessions were held separately and simultaneously. Bilingual/bicultural staff were trained by the program developer. Each group was led by a master's-level prevention staff member and assisted by a bachelor's-level staff member. Both parent and youth groups typically consisted of eight to ten participants. The curriculum is guided by a facilitator manual which outlines each of the session topics. Each session in this manual includes goals, learning objectives, activities, and a list of materials used. The activities are designed for at-risk youth (ages 11-14) and their families and cover a range of topics shown in Table 1. At Session 1, the participants completed pretest instruments, described in further detail below. The second session covered general knowledge building, whereas Session 3 covered feelings. Sessions 4 and 5 included content for both parents and families on types of stress, including stress from discrimination and racism. In the sixth week, youth discussed stress at school as well as strategies for improving grades and communication with their teachers, while parents discussed stress from work and providing for their family. 
During week seven, youth discussed the influence of peers on decision making and strategies for making safe decisions, while their parents discussed parenting, including differences between parenting in the U.S. and in their home country (for example, the use of corporal punishment). Weeks 8 and 9 provided strategies for increasing family communication, and the final weeks provided gang prevention strategies and general substance use education. In the final week, posttest instruments were distributed and families participated in a "graduation" ceremony. Families were also recontacted at six months to complete follow-up measures and determine the long-term effectiveness of the program. --- Participants Youth (n = 153) and their parents (n = 149) were recruited at a middle school located in San Fernando, CA, a large metropolitan city with a high percentage of Latino residents (89.2%; U.S. Census, 2000). Youth were included in this study if they (a) were Latino, (b) were between the ages of 11-14, (c) exhibited behavioral or school problems as reported by a teacher or school counselor, and (d) experienced academic problems not related to language differences. Youth were excluded if they (a) displayed autism or another pervasive developmental disorder or (b) were psychotic or in need of clinical treatment for a mental or behavioral health problem. The only inclusion criterion for parent participation was their child's acceptance into the project. Four families had multiple children enrolled in the program (resulting in a larger youth sample). All participants received a detailed consent form and were only enrolled in the program upon obtaining appropriate parental consent and youth assent. --- Instruments All measures were made available in both English and Spanish. Parents completed the measures in approximately 30 minutes. This was faster than the youth, who had the questions read to them by the facilitator and averaged approximately 45 minutes. 
Initially, demographic questionnaires were collected to gather information from youth and parents on age, gender, and socioeconomic status. These forms also included information on the participant's nativity, language preference, and educational level. Other parent and youth surveys were collected during the first session, at the final session, and at six months post-intervention. These instruments consisted of a number of scales from previously established and normed instruments. Parents and youth completed the SAMHSA Government Performance and Results Act (GPRA) Participant Outcome Measures for Discretionary Programs (SAMHSA, 2003). This survey tool was used as part of SAMHSA's national cross-site evaluation and comprises questions on alcohol, tobacco, and other drug (ATOD) use and knowledge; ATOD beliefs and perceived risk of harm from ATOD use; future intentions to use drugs; and HIV knowledge and risk perception. Youth also completed other sections of the GPRA, including School Behavior, School Bonding/Attachment, Family Bonding, Communication with Peers, Communication with Parents, Comfort Level Talking with Parents, HIV Anxiety, Peer Condom Use, Attitudes toward Condom Use, Social Norms, Living Conditions, and Drug-Free Commitment, as well as the Hispanic Children's Stress Inventory (Padilla, Cervantes, & Maldonado, 1987). Parents also completed the Conners Children's Behavioral Parent Rating Scale (CBPRS; Conners et al., 1998), a subjective assessment of their youth's behavior. --- Results All participants in the program completed baseline measures. A total of 153 youth and 149 adult instruments were administered across six participant family cohorts. A pretest occurred on the first session date. A posttest was conducted at the final session (Week 12), and an additional measure was administered six months after baseline. 
The overall retention rate for youth participants at posttest was high (83%) and decreased slightly from posttest to the six-month follow-up (80%). In addition, the program retained a high percentage of parents (81%) at posttest, but the retention rate decreased at the six-month follow-up (59%). --- Demographic Characteristics The demographics of the participant sample can be seen in Tables 2 and 3. Notably, the majority of parent participants were mothers (80.7%). As expected, parents overwhelmingly identified as Hispanic (95%), and most indicated their primary language was Spanish (68.8%). Almost two-thirds (63%) reported their household income to be less than $25,000 per year, indicating a high rate of poverty and near-poverty in the participant sample. Further, a significant portion (41.86%) of parents had a high school diploma or less education. Unlike their parents, the majority of youth participants were male (68.5%). Slightly more than half (58.6%) of youth in the sample reported Spanish as the primary language spoken at home. The majority of youth were U.S. born (84.9%), while their parents were mostly immigrants (79.3%). --- Reliability Analyses Reliability analyses were conducted for each of the youth and parent assessment scales using Cronbach's alpha. As can be seen in Tables 4 and 5, the majority of the scales (12) demonstrated high reliability (α = .80 or higher). Nine scales were marginally reliable (α = .60 or higher). For parents, the highest reliabilities were found in HIV Risk (α = .88) and the subscales from the Conners scale, including Conduct Problems (α = .87), while the lowest were the Psychosomatic scale (α = .59) and Anxiety (α = .52). For youth, the highest reliabilities were found in Comfort Level Talking with Parents (α = .90) and Social Norms (α = .89). The lowest reliability was found in Attitudes toward Condom Use (α = .42). 
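As a concrete illustration of the internal-consistency statistic reported in Tables 4 and 5, Cronbach's alpha can be computed from a respondents-by-items score matrix using the standard formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). The sketch below is a minimal Python example; the function name and the response data are illustrative, not taken from the study's instruments:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (respondents x items) score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items in the scale
    item_vars = scores.var(axis=0, ddof=1)       # per-item sample variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of summed scale scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-item Likert scale answered by 6 respondents (made-up data)
responses = np.array([
    [3, 4, 3, 4, 3],
    [5, 5, 4, 5, 5],
    [1, 2, 2, 1, 1],
    [4, 4, 5, 4, 4],
    [2, 2, 1, 2, 2],
    [5, 4, 5, 5, 4],
])
print(round(cronbach_alpha(responses), 2))
```

Because the made-up respondents answer the five items very consistently, the resulting alpha is high (above .90), the same range as the study's most reliable scales; a scale like Attitudes toward Condom Use (α = .42) would show much weaker inter-item agreement.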
--- Program Effects and Outcomes Our next step was to investigate the changes in attitudes and behaviors reported by youth and parent participants. Our analysis focused on whether the FA curriculum addressed the wide range of risk factors that are empirically linked to negative emotional and behavioral problems, family communication problems, school bonding, and ATOD use and unprotected sexual behavior. Using SPSS 16.0, a repeated measures GLM analysis (Mardia, Kent, & Bibby, 1979) was conducted for youth and adults. --- Youth Findings Analysis of the risk-related outcomes for youth demonstrated several areas of program effects, as seen in Table 6. First, youth communication skills appear to have been positively impacted by Familia Adelante. Specifically, youth demonstrated improved communication skills with their peers (p < .01) and greater comfort talking with their parents (p < .01). Additionally, youth participants reported improved overall family attachment (p < .05). A second general area of positive change was noted in youth knowledge, attitudes, and behaviors related to sexual risk. Specifically, HIV-related anxiety (p < .001) and social norms regarding sexual behavior (p < .01) decreased across the measurement points, and there was a significant increase in peer condom use at follow-up (p < .01). Sexual intercourse (over the last 30 days) showed a curvilinear pattern, increasing at posttest but decreasing dramatically at follow-up. The changes in the youth school-related measures were not notable. Stress levels as measured by the Latino Stress Inventory fell sharply at posttest and rose again (but not to pretest levels) at follow-up. Table 6 also illustrates findings on perceptions and behavior change in alcohol and drug use for youth across the three measurement points. Use of marijuana dropped dramatically, with zero use reported at posttest and follow-up (p < .001). 
In addition, youth participants reported zero use of all other drugs at posttest (p < .01), including cocaine, heroin, methadone, PCP/LSD, methamphetamines, barbiturates, and inhalants. Alcohol use increased slightly at posttest but dropped dramatically at follow-up. Alcohol use to intoxication dropped to zero at posttest and remained at this level at follow-up. Past 30-day sexual intercourse increased at posttest and decreased to pretest levels at follow-up. Significant and positive changes in youth were also observed by their parents, as measured by the Conners Parent Rating Scale (Conners et al., 1998) and shown in Table 7. Reductions in conduct problems, learning problems, impulsivity, anxiety, and hyperactivity were all significant (p < .01). Psychosomatic symptoms also decreased, although not significantly (p = 0.16). --- Parent Findings Table 8 shows findings from the parental measures. Results from the parent measures are encouraging, though they should be interpreted cautiously because some instruments did not meet the criterion for acceptable reliability (α > .80). Results indicate that parents' knowledge of drugs and HIV rose substantially across the three measurement points. The highest effect sizes for the intervention were found in parents' increases in drug and HIV knowledge. Strong, though nonsignificant, trends were also found for perceived ATOD harm and, to a lesser degree, HIV risk behavior. --- Discussion Familia Adelante is a family-oriented prevention intervention that was developed to address the unique needs and risk factors found among Latino families. Few prevention or early intervention programs have been available for Latino families. Further, programs that specifically address risk factors known to predispose youth to negative behavioral health outcomes often lack attention to culture and acculturation stress, which has a direct impact on Latino youth mental health and substance use outcomes (Cervantes et al., in press). 
The main objective of the intervention was to enhance family and peer communication, increase substance abuse and HIV knowledge and perceptions of harm, and improve school bonding and behavior. It also sought to enhance psychosocial coping and life skills in both youth and their parents and to decrease substance use and emotional problems by focusing on stress related to acculturation. The evidence provided by both significance tests and effect sizes is encouraging for the use of the intervention with both youth and parents. Effect sizes greater than .30 were found across a number of risk-related factors, and this level of program effect is acceptable within prevention science (Tabachnick & Fidell, 2001). In this study, many of the program effects were durable and lasting, as shown by the follow-up testing. Positive changes in communication and cognition (and, to a lesser degree, behavior) can be expected to have long-lasting effects on the dynamics of the families that participated. The parents consistently reported improvements in their children's behavior across multiple domains over the course of the intervention and in the months afterwards. The children also showed significant improvements in communication with their parents and increases in family attachment, an important factor in preventing risk (Gould et al., 1996). For the illegal drug behavior outcomes, significant positive changes were also found, even though this was a young at-risk population with low drug use at baseline. The success of the study is underscored by the additional considerations of age and service exposure. The research literature suggests that there are natural maturational upward trends in risk factors and substance use rates among young adolescents in at-risk social conditions. That marijuana and other illegal drug use decreased significantly in the youth sample and remained low at follow-up further supports the effectiveness of the curriculum. 
While the overall changes in the young adolescents were not as dramatic as those in their parents, positive changes were documented in sensitive areas involving condom use and sexual norms. In the case of HIV knowledge, however, the change in youth was greater than in their parents. In addition, culturally based stress reduction as measured by the Latino Children's Stress Inventory was minimal, although with a positive trend. Since the time of this study, advances have been made in measurement, namely the development of the Latino Stress Inventory-Adolescent Version (Cervantes et al., in press), which may prove to be a more sensitive measure of culturally based stressors in subsequent trials of the curriculum. A limitation lies in the attrition of the sample in the period between the posttest and follow-up, especially among the parents. While the reasons for this drop in retention have not been systematically investigated, the impression gained by the research team is that it was due, in part, to high residential mobility among families. Future studies with Latino immigrant families may need to budget sufficient resources for methods of locating mobile families if the validity of the results is to be strengthened, and incentivizing participants may increase the response rate as well (Khadjesari et al., 2011). Adding additional measures for parents would also have been helpful in gaining a greater understanding of program benefits, although the knowledge measures showed promising results. Additionally, the lack of a control group makes it difficult to say with certainty that these changes were not due to history, maturation, or testing. However, the use of multiple measurements (such as youth and parent ratings of behavior) adds additional support for the reliability of the findings. Several of the scales fell below acceptable reliability, so those findings should be interpreted with caution. 
This is possibly due to a lack of cultural appropriateness for this population, a common concern with the use of existing measures (e.g., Bilheimer & Klein, 2010). Addressing this concern through the use of different instruments and a thorough pretest would be helpful in future research with Familia Adelante. Despite these limitations, a majority of the statistically significant findings came from scales with high reliabilities. This suggests that the results may be even stronger in a larger sample. Future research with Familia Adelante should strengthen the measurement instruments to increase validity and test the program in a larger, randomized study of Latino youth and their parents. Lastly, the rate of HIV infection in Latinos is an increasing concern (Prado et al., 2006). Familia Adelante seeks to impact multiple behaviors including HIV risk, but found nonsignificant results for key prevention techniques such as condom use. An important area for curriculum adaptation may be the enhancement of HIV prevention messaging and the promotion of condom use by youth.
A comprehensive approach to providing behavioral health services to youth is increasingly emphasized (IOM, 2009). Latino youth are at increased risk for substance abuse, mental health concerns, unsafe sexual practices, and HIV (Prado et al., 2006), and these outcomes have been empirically connected to individual, family, and community-based stress (IOM, 2009). Despite this knowledge, there is a lack of evidence-based approaches that target these negative outcomes by reducing stress in Latino families in a culturally relevant manner (Cervantes, Kappos, Duenas & Arellano, 2003). The current study examined the use of research-based strategies for reducing multiple risk behaviors in a predominantly Mexican American sample of families. Through a modular approach, participants engaged in a psycho-educational curriculum to enhance communication and psychosocial coping, increase substance abuse and HIV knowledge and perceptions of harm, and improve school behavior. Over 12 sessions, the curriculum aimed to achieve these outcomes through an overall decrease in family and community-based stress by focusing on acculturative stress. Findings indicate that communication and perceptions of substance use harm were significantly enhanced, while social norms regarding sexual behavior, HIV anxiety, and past use of marijuana and other illegal drugs were significantly reduced. While many of the measures were reliable (α > .80), further changes are necessary to improve the accuracy of future studies. Despite these limitations, Familia Adelante improved many areas of participants' family life and points toward the feasibility of multi-risk reduction behavioral health prevention approaches.
"Medical doctors use medical language, which does not lead to a meaningful discussion with other occupations during meetings. So when I say [they] dominate, it is more about the type of language they use." Such are the words of one member of the Global Fund's Country Coordinating Mechanism (CCM) in Nigeria, as reported by Lassa and colleagues in their case study of power dynamics in health policy-making. 1 A key finding of their study was the dominance of medical professionals, specifically allopathic physicians, in decision-making spaces in Nigeria, who leveraged both structural power (using professional monopoly to enforce an occupational hierarchy) and productive power (using privileged access to a specialized knowledge base to frame the discourse on problems and solutions) to direct efforts and determine solutions for strengthening HIV/AIDS care. In health policy discussions, medical dominance occurs when allopathic medicine is positioned as the sole or primary framework for understanding and responding to health problems, with medical doctors correspondingly elevated as the most knowledgeable experts and decision-makers. Medicalized approaches to public health are reductionist, seek causes in biology rather than social or environmental factors, are individualistic rather than collectively minded, and focus narrowly on clinical and/or technological interventions. 2 The medicalization of health issues from a macro (ie, policy or prioritization) perspective and the related question of medical dominance have been examined mainly in Western countries. 3 However, in recent years, medicalized approaches to health have been increasingly understood as part of the colonial inheritance in many low- and middle-income countries. 
4 For example, in Nigeria, where Lassa et al report on medical dominance in HIV/AIDS policy-making, the medical system continues to emphasize hospital-based curative care, benefiting the urban elite, rather than building a strong and equitable primary healthcare system that draws on multiple sectors to promote health and prevent illness amongst the whole population. 5 As such, Lassa and colleagues make a useful addition to a long tradition of public health and anthropological scholarship calling out biomedical power as detrimental to operationalizing health as a holistic, socially embedded concept. But more work is needed to draw attention to how medical dominance prevails in the 'high spheres' of global health, how it perverts incentives and results in blinkered advice, and how it can harm rather than improve equity and effectiveness at every level. Global health institutions, including the World Health Organization (WHO), major multilateral bodies and global health initiatives, and bilateral and private donor agencies, have rarely questioned the dominance of medical professionals within their ranks and of medical discourse in their strategies, nor the economic thinking and cost-effectiveness calculations that are used to further buttress this dominance. A medicalized framing is evident across a plethora of global health issues, and the goal-oriented structures of global health institutions, and competition between them, incentivize the application of biomedical solutions. 2 Medical dominance, exerted via structural and productive power, means that global health institutions rely on narrow conceptions of knowledge to guide their responses to health issues, often excluding or only superficially including lived experience, social policy expertise, and knowledge derived from non-positivist paradigms such as Indigenous methodologies, participatory action research, and even much of mainstream social science. 
6 These types of knowledge remain largely absent from the deliberative and decision-making processes of most major global health institutions, as does the practical wisdom ('phronesis') of how to implement interventions and policies. 7 Dismissal of non-medical knowledge that could inform health strategies was evident in Lassa and colleagues' study, where respondents said members of community-based organizations and patient groups did not have the 'sophistication [of] MBBS medical doctors.' As a guide to decision-making, the obsession with quantifying the impact of targeted, disease-specific, medical solutions, sometimes called the 'Gates approach,' is much criticized. 8,9 Yet in global health spaces, this narrow, highly technical approach merely compounds the problems caused by the dominance of medicine, with its prioritization of quantifiable knowledge rendered ever more 'scientific' by advances in machine-powered calculation. With such epistemological underpinnings, it should come as no surprise when so-called 'solutions' to complex and highly contextual health problems are, in effect, pre-determined, even in 'country-led' collaborations such as the Global Fund's CCMs. "It seems they have the answers to the questions they want you to answer." "Their system is so rigid, everything is already spoon feeding." "A path is shaped for you to follow." The words of the Nigeria CCM members interviewed by Lassa and colleagues indicate that a medicalized approach to HIV programming was in fact a non-choice, demonstrating how donor prerogatives drive funding allocations regardless of local priorities, drawing on the combined structural and productive power of global health institutions in the process. 10 In this context, we can better understand the finding that Nigerian medical professionals sought to advance their own power and influence in health system decision-making by participating in these forums, and recommending medicalized solutions to public health problems. 
Despite the existence of a robust critical literature that situates healthcare as but one determinant of population health, the medical professionals who make up the leadership of many global health institutions, as well as those in countries, are not equipped by their training to work in teams to address these determinants. As Naidu and Abimbola describe, Eurocentric medical (and, we would argue, public health) education as practiced around the world crowds out approaches to caring for people's health that are more holistic, people-centered, and equity-oriented, such as the Ife Philosophy of medical and health professional education in Nigeria, which trained doctors as part of multi-disciplinary teams providing community-based primary healthcare, or the Aboriginal Community-Controlled Health Services in Australia. 4 The oft-cited 'barefoot doctors' in China and other community health workers are frequently harkened back to in the global health discourse, in fond remembrance of Alma-Ata and continuing calls for more comprehensive notions of primary healthcare. 11 In the most well-endowed global health initiatives, meanwhile, the focus on medicalized solutions continues largely undisturbed. Indeed, global health institutions today are arguably constitutionally incapable of producing policies and interventions that can realize the ambition of truly comprehensive primary healthcare. For instance, Lassa and colleagues described how, in the Global Fund's CCM in Nigeria, social interventions were de-emphasized in favor of biomedical content so as to adhere to WHO guidelines and pass muster with the Global Fund's Technical Review Panel. Similarly, in Mozambique, rapid scale-up of technical HIV 'care' with financing from the World Bank, the Global Fund, the Clinton Foundation, and the President's Emergency Plan for AIDS Relief was destructive to relationships between patients and caregivers, crowding out non-clinical forms of care, such as prayer and 'motherly' attention. 
12 In these cases and others, outreach to and partnership with people and communities, particularly marginalized ones, was subsumed into a medicalized framework that was not only exclusionary but actively undermined critical forms of health 'care.' The dominance of biomedical cadres, epistemologies, and discourses in global health institutions limits the effectiveness of the interventions they propose, support, and finance. In the case study of Nigeria's CCM, Lassa and colleagues identified a strong emerging theme of 'wasted antiretrovirals' due to lack of uptake of the clinical HIV programming on offer, with over 20 tons of expired commodities left at central medical stores and 15 tons at state-level stores, according to an audit report. The focus on purchasing commodities exemplifies how the medicalization of health creates 'too simplistic a view of making more modern medical treatments available to more people' (Benatar, cited in Clark 2), failing to recognize the intersecting social, economic, and cultural conditions that must be in place to ensure a corresponding number of patients seek to use them. In the early 2010s, the Global Fund responded to significant criticism and pressure to shift its disease-focused and top-down approach to include health systems strengthening, yielding some improvements. 13 But Lassa and colleagues, and earlier research, 14 demonstrate how the structural influence of medical power in the broader global health environs continues to shape and narrow the focus of such initiatives. Medicalization can result in successful outcomes when viewed from certain angles. A recent evaluation of the Global Fund said the partnership had underperformed in building strong and resilient health systems due to its focus on disease-specific goals, while nonetheless touting the 44 million lives saved by the Fund since its inception. 
13 This framing gets at the truth of the matter: despite sometimes aspiring to build durable health systems that serve populations including those traditionally excluded, global health initiatives remain fundamentally defined by, and focused on, activities that enable quantification of disease reduction and lives saved. Forty-four million lives is no small number. But it should not obscure the fact that the medicalization of health issues, via approaches that are focused on quantifiable technical or clinical interventions and designed without meaningful input from non-medical stakeholders, is also tightly linked to the ongoing colonial agenda of global health. Indeed, who are these numbers designed to appeal to? As scholars of Indian medical history have demonstrated, medicalization is not a regrettable outcome of historical contingency. 15 Allopathic medicine is a tool in ongoing efforts by powerful states and actors to exert control in what should be a leading site of cooperation: the preservation and protection of people's health. Lassa and colleagues' research is a reminder that breaking the hold of medical dominance in global health institutions is necessary if we wish to make the best use of limited resources to improve population health. Yet it will be a long row to hoe. Doing so will require a collective push from multiple directions, including research, civil society, and even political pressure, to overcome deeply rooted power dynamics. Global health bodies, and the academic institutions which are so tightly linked to them, can start by meaningfully engaging in a learning agenda to finance, publish, collate, and publicize research that demonstrates the pitfalls of medicalization and the ways in which holistic approaches are superior in terms of equity, justice, and basic effectiveness in promoting and protecting population health. 
Direct and robust advocacy is necessary to reveal and draw attention to the workings of power in global health institutions to challenge the ongoing narrative of disinterested (apolitical) investment in solving technical (medical) problems, and to surface conduits of power within the processes and policy agendas of such initiatives, and their impacts on the broader system. For their part, donor agencies will need to have faith and be patient. The most transformational development programs, which build institutions and encourage policy reform, are often those least likely to be precisely and easily measured. 8 Lassa and colleagues' research identifies and names a power dynamic amongst a small group of actors that has had major consequences for HIV interventions in Nigeria. Following the trail of evidence leads straight to the biggest behemoths in global health. --- Ethical issues Not applicable. --- Competing interests Authors declare that they have no competing interests. --- Authors' contributions SLD conceived of the commentary and wrote the first draft of the manuscript. OAS and SMT provided critical revisions for important intellectual content. SLD finalized the manuscript with the input and approval of OAS and SMT. --- Authors' affiliations 1 Department of International Health, Johns Hopkins School of Public Health, Baltimore, MD, USA. 2 Institute for Global Health, University College London, London, UK. 3 Department of Population Health Sciences, Spencer Fox Eccles School of Medicine at the University of Utah, Salt Lake City, UT, USA. 4 College of Public Health Medical and Veterinary Sciences, James Cook University, Townsville, QLD, Australia.
Medical professionals exercised structural and productive power in the Global Fund's Country Coordinating Mechanism (CCM) in Nigeria, directly impacting the selection of approaches to HIV/AIDS care, as described in a case study by Lassa and colleagues. This research contributes to a robust scholarship on how biomedical power inhibits a holistic understanding of health and prevents the adoption of solutions that are socially grounded, multidisciplinary, and co-created with communities. We highlight Lassa and colleagues' findings demonstrating the 'long arm' of global health institutions in country-level health policy choices, and reflect on how medical dominance within global institutions serves as a tool of control in ways that pervert incentives and undermine equity and effectiveness. We call for increased research and advocacy to surface these conduits of power and begin to loosen their hold in the global health policy agenda.
INTRODUCTION A long tradition of sociological research has examined the effects of divorce and father absence on offspring's economic and social-emotional well-being throughout the life course. 1 Overall, this work has documented a negative association between living apart from a biological father and multiple domains of offspring well-being, including education, mental health, family relationships, and labor market outcomes. These findings are of interest to family sociologists and family demographers because of what they tell us about family structures and family processes; they are also of interest to scholars of inequality and mobility because of what they tell us about the intergenerational transmission of disadvantage. The literature on father absence has been criticized for its use of cross-sectional data and methods that fail to account for reverse causality, for omitted variable bias, or for heterogeneity across time and subgroups. Indeed, some researchers have argued that the negative association between father absence and child well-being is due entirely to these factors. This critique is well founded because family disruption is not a random event and because the characteristics that cause father absence are likely to affect child well-being through other pathways. Similarly, parents' expectations about how their children will respond to father absence may affect their decision to end their relationship. Finally, there is good evidence that father absence effects play out over time and differ across subgroups. Unless these factors are taken into account, the so-called effects of father absence identified in these studies are likely to be biased.
Researchers have responded to concerns about omitted variable bias and reverse causation by employing a variety of innovative research designs to identify the causal effect of father absence, including designs that use longitudinal data to examine child well-being before and after parents separate, designs that compare siblings who differ in their exposure to separation, designs that use natural experiments or instrumental variables to identify exogenous sources of variation in father absence, and designs that use matching techniques that compare families that are very similar except for father absence. In this article, we review the studies that use one or more of these designs. We limit ourselves to articles that have been published in peer-reviewed academic journals, but we impose no restrictions with regard to publication date (note that few articles were published before 2000) or with regard to the disciplinary affiliation of the journal. Although most articles make use of data from the United States, we also include work based on data from Great Britain, Canada, South Africa, Germany, Sweden, Australia, Indonesia, and Norway. Using these inclusion rules, we identified 47 articles that make use of one or more of these methods of causal inference to examine the effects of father absence on outcomes in one of four domains: educational attainment, mental health, relationship formation and stability, and labor force success. In the next section, entitled "Strategies for Estimating Causal Effects with Observational Data," we describe these strategies, their strengths and weaknesses, and how they have been applied to the study of father absence. In the section entitled "Evidence for the Causal Effect of Family Structure on Child Outcomes," we examine the findings from these studies in each of the four domains of well-being. 
Our goal is to see if, on balance, these studies tell a consistent story about the causal effects of father absence and whether this story varies across different domains and across the particular methods of causal inference that are employed within each domain. We also note where the evidence base is large and where it is thin. We conclude by suggesting promising avenues for future research. --- STRATEGIES FOR ESTIMATING CAUSAL EFFECTS WITH OBSERVATIONAL DATA Identifying causal effects with observational data is a challenging endeavor for several reasons, including the threat of omitted variable bias, the fact that multiple---and often reciprocal---causal effects are at work, the fact that the causal treatment condition (such as divorce) may unfold over a period of time or there may be multiple treatment conditions, and the fact that the effects of the treatment may change over time and across subgroups. Traditional approaches to estimating the effect of father absence on offspring well-being have relied primarily on ordinary least squares (OLS) or logistic regression models that treat offspring well-being as a function of father absence plus a set of control variables. These models are attractive because the data requirements are minimal (they can be estimated with cross-sectional data) and because they can accommodate complex specifications of the father absence effect, such as differences in the timing of father absence (early childhood versus adolescence), differences in postdivorce living arrangements (whether the mother lives alone or remarries), and differences by gender, race, and social class.
Studies based on these models typically find that divorces that occur during early childhood and adolescence are associated with worse outcomes than divorces that occur during middle childhood, that remarriage has mixed effects on child outcomes, and that boys respond more negatively than girls for outcomes such as behavior problems (see, for example, Amato 2001, Sigle-Rushton & McLanahan 2004). Interpreting these OLS coefficients as causal effects requires the researcher to assume that the father absence coefficient is uncorrelated with the error term in the regression equation. This assumption will be violated if a third (omitted) variable influences both father absence and child well-being or if child well-being has a causal effect on father absence that is not accounted for in the model. There are good reasons for believing that both of these factors might be at work and so the assumption might not hold. Until the late 1990s, researchers who were interested in estimating the effect of father absence on child well-being typically tried to improve the estimation of causal effects by adding more and more control variables to their OLS models, including measures of family resources (e.g., income, parents' education, and age), as well as measures of parental relationships (e.g., conflict) and mental health (e.g., depression). Unfortunately, controlling for multiple background characteristics does not eliminate the possibility that an unmeasured variable is causing both family structure and child well-being. Nor does it address the fact that multiple causal pathways may be at work, with children's characteristics and parents' relationships reciprocally influencing each other. Adding control variables to the model can also create new problems if the control variables are endogenous to father absence. (See Ribar 2004 for a more detailed discussion of cross-sectional models.) 
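The omitted-variable problem described above can be illustrated with a small simulation (a sketch with synthetic data and invented coefficients, not results from any study cited here): an unmeasured confounder that raises the probability of father absence and lowers child well-being biases the OLS coefficient even when an observed control is included.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

u = rng.normal(size=n)   # unmeasured confounder (e.g., parental conflict)
x = rng.normal(size=n)   # observed control (e.g., parental education)
# Father absence is more likely when u is high and x is low
absence = (0.8 * u - 0.5 * x + rng.normal(size=n) > 0).astype(float)
true_effect = -1.0       # assumed causal effect on child well-being
y = true_effect * absence + 1.0 * x - 1.5 * u + rng.normal(size=n)

def coef_on_first(y, *cols):
    """OLS via least squares; returns the coefficient on the first regressor."""
    X = np.column_stack([np.ones(len(y)), *cols])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

naive = coef_on_first(y, absence)            # no controls
with_control = coef_on_first(y, absence, x)  # controls for x; u is still omitted
# Both estimates are more negative than the true effect of -1.0:
# the observed control reduces, but does not remove, the bias.
```

Adding the observed control moves the estimate toward the truth, but as long as the confounder u is unmeasured, no list of controls fully closes the gap; this is the motivation for the designs reviewed below.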
--- Lagged Dependent Variable Model A second approach to estimating the causal effect of father absence is the lagged dependent variable (LDV) model, which uses the standard OLS model described above but adds a control for child well-being prior to parents' divorce or separation. This approach requires longitudinal data that measure child well-being at two points in time---one observation before and one after the separation. The assumption behind this strategy is that the pre-separation measure of child well-being controls for unmeasured variables that affect parents' separation as well as future child well-being. Although this approach attempts to reduce omitted variable bias, it also has several limitations. First, the model is limited with respect to the window of time when father absence effects can be examined. Specifically, the model cannot examine the effect of absences that occur prior to the earliest measure of child well-being, which means LDV models cannot be used to estimate the effect of a nonmarital birth or any family structure in which a child has lived since birth. Second, if pre-separation well-being is measured with error, the variable will not fully control for omitted variables. Third, lagged measures of well-being do not control for circumstances that change between the two points in time and might influence both separation and well-being, such as a parent's job loss. Another challenge to LDV studies is that divorce/separation is a process that begins several years before the divorce/separation is final. In this case, the pre-divorce measure of child well-being may be picking up part of the effect of the divorce, leading to an underestimate of the negative effect of divorce. Alternatively, children's immediate response to divorce may be more negative than their long-term response, leading to an overestimate of the negative effect of divorce.
Both of these limitations highlight the fact that the LDV approach is highly sensitive to the timing of when child well-being is measured before and after the divorce. In addition, many of the outcomes that we care most about occur only once (e.g., high school graduation, early childbearing), and the LDV strategy is not appropriate for these outcomes. (See Johnson 2005 for a more detailed technical discussion of the LDV approach in studying family transitions.) These advantages and limitations are evident in Cherlin et al.'s (1991) classic study employing this method. Drawing on longitudinal data from Great Britain and the United States, the authors estimated how the dissolution of families that were intact at the initial survey (age 7 in Great Britain and 7--11 in the United States) impacted children's behavior problems as well as their reading and math test scores at follow-up (age 11 in Great Britain and 11--16 in the United States). In OLS regression models with controls, the authors found that divorce increased behavior problems and lowered cognitive test scores for children in Great Britain and for boys in the United States. However, these relationships were substantially attenuated for boys and somewhat attenuated for girls once the authors adjusted for child outcomes and parental conflict measured at the initial interview prior to divorce. By using data that contained repeated measurements of the same outcome, these researchers argue that they were able to reduce omitted variable bias and derive more accurate estimates of the causal effect of family dissolution. This approach also limited the external validity of the study, however, because the researchers could examine only separations that occurred after age 7, when the first measures of child well-being were collected. --- Growth Curve Model A third strategy for estimating causal effects when researchers have measures of child well-being at more than two points in time is the growth curve model (GCM).
This approach allows researchers to estimate two parameters for the effect of father absence on child well-being: one that measures the difference in initial well-being among children who experience different family patterns going forward, and another that measures the difference in the rate of growth (or decline) in well-being among these groups of children. Researchers have typically attributed the difference in initial well-being to factors that affect selection into father absence and the difference in growth in well-being to the causal effect of father absence. The GCM is extremely flexible with respect to its ability to specify father absence effects and is therefore well suited to uncovering how effects unfold over time or across subgroups. For example, the model can estimate age-specific effects, whether effects persist or dissipate over time, and whether they interact with other characteristics such as gender or race/ethnicity. The model also allows the researcher to conduct a placebo test---to test whether father absence at time 2 affects child well-being prior to divorce (time 1). If future divorce affects pre-divorce well-being, this finding would suggest that an unmeasured variable is causing both the divorce and poor child outcomes. The GCM also has limitations. First, it requires a minimum of three observations of well-being for each individual in the sample. Second, as was true of the LDV model, it can examine the effect of divorces that occur only within a particular window of time---after the first and before the last measure of child well-being. Also, like the OLS model, the GCM does not eliminate the possibility that unmeasured variables are causing both differences in family patterns and differences in trajectories of child well-being, including growth or decline in well-being. For example, an unmeasured variable that causes the initial gap in well-being could also be causing the difference in growth rates.
We are more confident in the results of the GCMs if they show no significant differences in pre-divorce intercepts but significant differences in growth rates. We are also more confident in studies that include placebo or falsification tests, such as using differences in future divorce to predict initial differences in well-being. If later family disruption is significantly associated with differences in pre-divorce well-being (the intercept), this finding would indicate the presence of selection bias. [See Singer & Willett (2003) for a more detailed technical discussion of GCMs and Halaby (2004) for a more detailed discussion of the assumptions and trade-offs among the various approaches to modeling panel data.] Magnuson & Berger's (2009) analysis of data from the Maternal and Child Supplement of the National Longitudinal Survey of Youth 1979 (NLSY79) is illustrative of this approach. These authors used GCMs to examine the relationship between the proportion of time children spent in different family structures between ages 6 and 12 and scores on the Peabody Individual Achievement Test (PIAT) cognitive ability test and the Behavioral Problems Index. They focused on several family types: intact biological-parent families (married or cohabiting), social-father families (married or cohabiting), and single-parent families. They found no differences in the initial well-being of the children in these different family structures, suggesting that controls for observable factors had successfully dealt with problems of selection. In contrast, they found major differences in children's well-being trajectories, with time spent in intact biological-parent families leading to more favorable trajectories than time spent in other family types. The combination of insignificant differences in intercepts and significant differences in slopes increases our confidence in these results. 
However, it remains possible that time-varying unobserved characteristics were driving both time spent in different family structures and changes in child behavior and achievement. --- Individual Fixed Effects Model A fourth strategy for estimating causal effects is the individual fixed effects (IFE) model, in which child-specific fixed effects remove all time-constant differences among children. This model is similar to the LDV and GCM in that it uses longitudinal data with repeated measures of family structure and child well-being. It is different in that instead of including pre-separation well-being as a control variable, it estimates the effects of father absence using only the associations between within-child changes in family structure and within-child changes in well-being, plus other exogenous covariates (and an error term). The IFE model is equivalent to either including a distinct dummy variable indicator for each child, which absorbs all unobserved, time-constant differences among children, or to differencing out within-child averages from each dependent and independent variable. In both of these specifications, only within-child variation is used to estimate the effects of father absence. The advantage of this model is that unmeasured variables in the error term that do not change over time are swept out of the analysis and therefore do not bias the coefficient for father absence. (See Ribar 2004 for a discussion of fixed effects models.) The IFE model also has limitations. As with LDVs and GCMs, IFE models cannot be estimated for outcomes that occur only once, such as high school graduation or a teen birth, or for outcomes that can be measured only in adulthood, such as earnings. Also, as with LDVs and GCMs, the IFE model does not control for unobserved confounders that change over time and jointly influence change in father presence and change in child well-being.
Third, because the model provides an estimate of the effect of a change in a child's experience of father absence (moving from a two-parent to a single-parent family or vice versa), it does not provide an estimate of the effect of living in a stable one-parent family or a stable two-parent family. Unlike the other approaches, the IFE model estimates the effect of father absence by comparing before-after experiences for only those children within the treatment group, rather than comparing children in the treatment and control groups. Finally, and perhaps most importantly, the IFE model is very sensitive to measurement error because estimates of the effect of a change in father absence rely heavily on within-individual changes. A good illustration of the IFE approach is a study by Cooper et al. (2011). Using data from the first four waves of the Fragile Families Study, the authors examined the link between two measures of school readiness---verbal ability and behavioral problems at age 5---and children's exposure to family instability, including entrances and exits from the household. Using an OLS model, they found that the number of partnership transitions was associated with lower verbal ability, more externalizing behavior, and more attention problems, but not more internalizing behavior. These relationships held for both coresidential and dating transitions and were more pronounced for boys than girls. To address potential problems of omitted variable bias, the authors estimated a fixed effects model and found that residential transitions, but not dating transitions, reduced verbal ability among all children and increased behavior problems among boys. The fact that the IFE estimates were consistent with the OLS estimates increases our confidence in the OLS results. --- Sibling Fixed Effects Model A fifth strategy for dealing with omitted variable bias is the sibling fixed effects (SFE) model. 
This model is similar to the previous model in that unmeasured family-level variables that are fixed (i.e., do not vary among family members) are differenced out of the equation and do not bias the estimates of father absence. In this case, the group is the family rather than the individual, and the difference that is being compared is the difference between siblings with different family experiences rather than the change in individual exposure to different family experiences. The literature on father absence contains two types of SFE models. One approach compares biological siblings who experience father absence at different ages. In this case, the estimate of the causal effect of father absence is based on the difference in siblings' length of exposure. For example, a sibling who is age 5 at the time of a divorce or separation will experience 12 years of father absence by age 17, whereas a sibling who is age 10 when the separation occurs will experience 7 years of father absence by age 17. In some instances, children may leave home before their parents' divorce, in which case they are treated as having no exposure. A second approach compares half-siblings in the same family, where one sibling is living with two biological parents and the other is living with a biological parent and a stepparent or social father. Both of these strategies sweep out all unmeasured family-level variables that differ between families and could potentially bias the estimate of the effect of divorce. Both approaches also have limitations. The first approach assumes that the effect of divorce does not vary by the age or temperament of the child and that there is a dose-response effect of father absence with more years of absence leading to proportionately worse outcomes, whereas the second approach assumes that the benefits of the presence of both a biological mother and father are similar for children living with and without stepsiblings.
With respect to the first assumption, as previously noted, both theory and empirical evidence suggest that, at least for some outcomes, divorces occurring in early childhood and adolescence have more negative effects on child outcomes than divorces occurring in middle childhood (Sigle-Rushton & McLanahan 2004). Moreover, if siblings differ in their ability to cope with divorce, and if parents take this difference into account in making their decision about when to divorce, this approach will lead to an underestimate of the effect of a change in family structure. The major limitation of the second approach is that it assumes that the benefits of living with two biological parents are similar for children living in blended families and children living in traditional two-parent families. With respect to this assumption, there is good evidence that stepparent families are less cooperative than stable two-parent families, which means that living in a blended family is likely to reduce the well-being of all children in the household (Sigle-Rushton & McLanahan 2004). A final limitation of the SFE model is that estimates cannot be generalized to families with only one child. 2 Within-family fixed effects models are employed in Gennetian's (2005) analysis of data on 5- to 10-year-old children interviewed from 1986 to 1994 for the children of the NLSY79 study. Gennetian examined how children in two-biological-parent families, stepfather families, and single-mother families fared on the PIAT cognitive test as well as how children living with step- or half-siblings compared to those with only full siblings. In simple comparisons, the data revealed a significant disadvantage in PIAT scores for children in single-mother families, stepfather families, and blended families relative to those in two-biological-parent families.
Gennetian (2005) then leveraged the data, which included repeated measurements over time of family composition and outcomes for all of the mother's children, to estimate models with mother and child fixed effects. These analyses found very little evidence that children living in single-mother, stepfather, or blended families were disadvantaged on PIAT scores relative to children in non-blended two-biological-parent families, although they did indicate that number of years in a single-mother family had a small negative effect on PIAT scores. Finally, Gennetian further tested the logic of the sibling approach by comparing the well-being of half-siblings, one of whom was living with both biological parents and the other of whom was living with a biological parent and a stepparent. The analyses showed the expected negative effect on PIAT scores for children living with stepfathers, with this relationship remaining negative (but declining in size and losing significance) in models with mother and child fixed effects. Importantly, these analyses also revealed a negative effect of the presence of a half-sibling on the child who was living with two biological parents. --- Natural Experiments and Instrumental Variables A sixth strategy is to use a natural experiment to estimate the effect of divorce on child well-being. The logic behind this strategy is to find an event or condition that strongly predicts father absence but is otherwise unrelated to the offspring outcome of interest. The natural experiment may be an individual-level variable or an aggregate-level measure. Several studies use parental death as a natural experiment, generally comparing outcomes for children whose parents divorced with those whose parent died. The assumption behind this strategy is that experiencing parental death is a random event and can therefore be used to obtain an unbiased estimate of the effect of father absence.
In such analyses, a significant negative relationship between child outcomes and both parental death and divorce is taken as evidence of the causal relationship of divorce on child well-being, particularly if the divorce and death coefficients are not statistically different. 3 A major challenge for these studies is that parental death is rarely random; whatever is causing the death may also be causing the child outcome. Violent and accident-related deaths, for example, are selective of people who engage in risky behaviors; similarly, many illness-related deaths are correlated with lifestyles that affect child outcomes, such as smoking. Children of deceased parents are also treated very differently than are children of divorced parents, not only by their informal support systems but also by government. Other studies use natural experiments to estimate instrumental variable (IV) models. This strategy involves a two-step procedure. In the first step, the researcher uses the natural experiment to obtain a predicted father absence (PFA) measure for each individual. Then, in the second step, PFA is substituted for actual divorce in a model predicting offspring well-being.
2 These models compare siblings in which one experiences a divorce and the other does not. These analyses control for family differences that are common to both siblings; however, they do not control for within-sibling differences that lead one sibling to divorce and another to be stably married. Twin studies go one step further, by comparing MZ twins (who share identical genetic information) and DZ twins (who have half of their genes identical), allowing researchers to determine the role of genetics in accounting for the effect of divorce.
3 We only include studies of the effect of parental death on child outcomes if the author uses one of the causal methods described below or explicitly uses death as a natural experiment for divorce or other types of father absence.
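The two-step procedure can be sketched in a simulation (synthetic data with invented coefficients; the continuous instrument here is an abstract stand-in for something like exposure to a policy change): the first stage predicts father absence from the instrument, and the second stage regresses the outcome on predicted rather than actual absence.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

u = rng.normal(size=n)   # unmeasured confounder
z = rng.normal(size=n)   # instrument: shifts absence, excluded from the outcome
absence = (0.7 * z + 0.8 * u + rng.normal(size=n) > 0).astype(float)
true_effect = -1.0
y = true_effect * absence - 1.5 * u + rng.normal(size=n)

def ols_coef(y, x):
    X = np.column_stack([np.ones(len(y)), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

biased = ols_coef(y, absence)      # confounded by u

# Step 1: predicted father absence (PFA) from the instrument
X1 = np.column_stack([np.ones(n), z])
pfa = X1 @ np.linalg.lstsq(X1, absence, rcond=None)[0]

# Step 2: substitute PFA for actual absence
iv_est = ols_coef(y, pfa)          # approximately recovers the true effect
```

Because the instrument is unrelated to u, the predicted-absence regressor is purged of the confounding that biases the direct OLS estimate; the price, as the text notes, is much larger sampling error in the second stage.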
Because PFA is based entirely on observed variables, the coefficient for this variable cannot be correlated with unmeasured variables, thereby removing the threat of omitted variable bias. For this strategy to work, however, the researcher must make a number of strong assumptions. First, he or she must find a variable---or instrument---that is a strong predictor of divorce or separation; second, the instrument must not be correlated with the outcome of interest except through its effects on father absence or divorce. The second assumption, known as the exclusion restriction, is often violated [for example, see Besley & Case (2000) for a discussion of why state policies are not random with respect to child well-being]. A third limitation of the IV model is that it requires a large sample. Because PFA is based on predicted absence rather than actual absence, it is measured with a good deal of error, which results in large standard errors in the child well-being equation and makes it difficult to interpret results that are not statistically significant. Finally, the IV model requires a different instrument for each independent variable, which limits the researcher's ability to specify different types of father absence. A good example of the natural experiment/IV approach and its limitations is Gruber's (2004) analysis of the effect of changes in divorce laws on divorce and child outcomes. Combining data on state differences in divorce laws with information from the 1960--1990 US Censuses, Gruber found a significant positive effect of the presence of unilateral divorce laws---which make divorce easier---on the likelihood of being divorced. This part of the analysis satisfied the first requirement for the IV model; namely, that the instrument be strongly associated with divorce. He then estimated the effect of living in a state (for at least part of childhood) where unilateral divorce was available on a host of adult outcomes.
These analyses showed that unilateral divorce laws were associated with early marriage and more divorce, less education, lower family income, and higher rates of suicide. Additionally, women so exposed appeared to have lower labor force attachment and lower earnings. To distinguish the effect of divorce laws from other state-level policies, Gruber investigated the associations between the presence of unilateral divorce laws and changes in welfare generosity and education spending during this same time period, finding no associations suggestive of bias. He did find, however, that his results were driven in large part by factors at work in California over this period. Most importantly, Gruber concluded that divorce laws did not pass the second requirement of the IV model; namely, that they affect child well-being only through their effect on parents' divorce. Instead, he argued that divorce laws are likely to affect child well-being by altering decisions about who marries and by altering the balance of power among married couples. Gruber's analysis highlights the difficulty of finding a natural experiment that truly satisfies both assumptions of the IV model. --- Propensity Score Matching A final strategy used in the literature for obtaining estimates of the causal effect of divorce is propensity score matching (PSM). Based on the logic of experimental design, this approach attempts to construct treatment and control groups that are similar in all respects except for the treatment condition, which in this literature is father absence. The strategy begins by estimating the probability of father absence for each child based on as many covariates as possible observed in the data, and then uses this predicted probability to match families so that they are similar to one another in all respects except for father absence. This approach has several advantages over the OLS model.
First, researchers may exclude families that do not have a good match (i.e., a similar propensity to divorce), so that we are more confident that our estimates are based on comparing "apples to apples." Second, PSM analyses are more flexible than OLS because they do not impose a particular functional form on how the control variables are associated with divorce. PSM estimation is also more efficient than OLS because it uses a single variable---predicted probability of divorce---that combines the relevant predictive information from all the potential observed confounders. Finally, it can accommodate the fact that the effects of divorce may differ across children by estimating separate effects for children in families with low and high propensities to divorce. Propensity scores may also be used to reweight the data so that the treatment and control groups are more similar in terms of their observed covariates (Morgan & Todd 2008, Morgan & Winship 2007). The PSM approach has limitations as well. First, the model is less flexible than the OLS model in terms of the number and complexity of family structures that can be compared in a single equation. Second, the approach does not control for unmeasured variables, although it is possible to conduct sensitivity analyses to address the potential influence of such variables. For this reason, the approach is less satisfactory than IV models for making causal inferences. Finally, the strategy relies heavily on the ability of the researcher to find suitable matches. If there is not sufficient overlap in the kinds of people who divorce and the kinds of people who remain stably married, the approach will not work. 
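The core of the matching procedure described above (estimate a propensity score, then pair each treated case with its nearest control) can be sketched with simulated data. This is a deliberately minimal one-covariate illustration with hypothetical names, not a substitute for the many-covariate specifications and matching schemes used in the literature:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Hypothetical data: X is an observed confounder, T indicates father
# absence, Y a child outcome. The true effect of T on Y is -0.5;
# X raises both T and Y, biasing the naive treated-vs-control contrast.
X = rng.normal(size=n)
T = (rng.random(n) < 1 / (1 + np.exp(-X))).astype(int)
Y = -0.5 * T + 1.0 * X + rng.normal(size=n)

naive = Y[T == 1].mean() - Y[T == 0].mean()

# Step 1: estimate the propensity score with a logistic regression,
# fit here by plain gradient ascent to keep the sketch dependency-free.
W = np.column_stack([np.ones(n), X])
beta = np.zeros(2)
for _ in range(2000):
    p = 1 / (1 + np.exp(-W @ beta))
    beta += 0.1 * W.T @ (T - p) / n
pscore = 1 / (1 + np.exp(-W @ beta))

# Step 2: for each treated unit, find the nearest control on the
# propensity score (1-to-1 nearest-neighbor matching with replacement).
treated, controls = np.where(T == 1)[0], np.where(T == 0)[0]
matches = controls[np.abs(pscore[controls][None, :]
                          - pscore[treated][:, None]).argmin(axis=1)]

# Step 3: average treated-minus-matched-control difference (the ATT).
att = (Y[treated] - Y[matches]).mean()

print(round(naive, 2), round(att, 2))
```

Matching recovers an estimate near the true effect here only because the single confounder is observed; as the text stresses, nothing in this procedure adjusts for variables left out of the propensity model.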
Similarly, by limiting the sample to cases with a match, the researcher also reduces sample size and, more importantly, the generalizability of the results [see Morgan & Winship (2007), Ribar (2004), and Winship & Morgan (1999) for a more extended technical discussion of the logic and assumptions of matching techniques]. The work of Frisco et al. (2007) serves as a useful example of the use of PSM models in the study of the effects of divorce. Drawing on the Add Health data, the authors first estimated simple OLS regressions of the relationship between the dissolution of a marital or cohabiting relationship between waves I and II and adolescents' level of mathematics coursework, change in GPA, and change in proportion of courses failed between the two waves. These models revealed a significant negative relationship between dissolution and the measures of GPA and course failure but no link to mathematics coursework, after controlling for a large number of potentially confounding variables. Next, the authors calculated a propensity to experience dissolution as a function of parents' race, education, income, work, age, relationship experience and quality, religiosity, and health and adolescents' age, gender, and number of siblings, and then used this predicted propensity to conduct nearest neighbor matching with replacement and kernel matching. Regardless of matching method, the estimates from the PSM models accorded very well with those from the simple OLS regressions. As in those models, there were significant negative relationships between dissolution and GPA and positive relationships with course failure, and the point estimates were of a very similar magnitude across models. This study also examined how large the influence of an unobserved confounder would have had to be in order to threaten the causal interpretation of the results. The study had some unique and some general limitations.
Because of data limitations, the authors could not separate dissolutions stemming from divorce from those attributable to other causes, such as parental death. More generally, because matching is limited to observable characteristics, the estimated propensities of dissolution could not capture unobserved determinants of dissolution. To assess the sensitivity of their results to omitted variable bias, the authors conducted a simulation and discovered that an unobserved confounder that is moderately associated with dissolution and the outcomes (r of roughly 0.1) could bias their findings. --- EVIDENCE FOR THE CAUSAL EFFECT OF FAMILY STRUCTURE ON CHILD OUTCOMES In this section, we assess the evidence for a causal effect of father absence on different domains of offspring well-being. Empirical studies have used multiple strategies for identifying causal effects that each have unique strengths and weaknesses---as we identified in the previous section---but we are more confident in the presence of causal effects if we identify consistent results across multiple methods. Many of the articles we examine used more than one analytic strategy and/or examined outcomes in more than one domain. Consequently, our unit of analysis is each separate model reported in an article, rather than the article itself. For instance, rather than discussing an article that includes both SFE and LDV analyses of test scores and self-esteem as a single entity, we discuss it as four separate cases. The virtue of this approach is that it allows us to discern patterns more clearly across studies using similar analytic strategies and across studies examining similar outcomes. The drawback is that some articles contribute many cases and some only one. Consequently, if there are strong author effects for articles that contribute many cases, then our understanding of the results produced by a given analytic strategy or for a given domain could be skewed. We note when this occurs in our discussions below.
Studies in this field measured father absence in several ways, which the reader should keep in mind when interpreting and comparing results across studies. Some studies compared children of divorced parents with children of stably married parents; others compared children whose parents married after the child's birth with children whose parents never married; still others simply compared two-parent to single-parent families (regardless of whether the former were biological or stepparents and the latter were single through divorce or a nonmarital birth). More recently, researchers have started to use even more nuanced categories to measure family structure---including married biological-parent families, cohabiting biological-parent families, married stepparent families, cohabiting stepparent families, and single parents by divorce and nonmarital birth---reflecting the growing diversity of family forms in society. Still other studies look at the number of family structure transitions the child experiences as a measure of family instability. We did not identify any studies that used causal methods to study the effects of same-sex unions. Finally, we include studies of father absence that use data from a range of international samples. We should note, however, that what it means to reside in a father-absent household varies a great deal cross-nationally. Children whose parents are not married face starkly different levels of governmental and institutional support and unequal prospects for living in a stable two-parent family in different countries. In fact, both marital and nonmarital unions in the United States are considerably less stable than in any other industrialized nation (Andersson 2002). --- Education We begin our review of the empirical findings by looking at studies that attempted to estimate the causal effect of divorce on school success.
We distinguish between studies that looked at children's test scores; studies that looked at educational attainment; and studies that looked at children's attitudes, engagement, and school performance. Test scores-We identified 31 analyses that examined the relationship between father absence and test scores, including tests of verbal, math, and general ability. The articles containing these analyses are listed and briefly described in the first section of Table 1. Virtually all of the test score analyses used US-based samples (only Cherlin et al. 1991 used international data). Although the overall picture for test scores was mixed, with 14 finding significant effects and 17 finding no effect, there were patterns by methodology. First, significant effects were most likely in the analyses using GCMs. Of the GCM studies finding significant differences in slopes between children of divorced and intact families, about half found no significant differences in the pre-divorce intercepts, which made their significant results more convincing. One GCM study (Magnuson & Berger 2009) performed a falsification test and found no evidence that subsequent divorce predicted intercepts, ruling out the threat of selection bias. In contrast with analyses based on the GCM design, the IFE and SFE analyses rarely found significant effects of family structure on children's test scores. In general, standard errors tended to be larger in IFE and SFE analyses than in OLS analyses, but in virtually all of these analyses, the fixed effects coefficients were markedly reduced in size relative to the OLS coefficients, suggesting that the lack of significant results was not simply due to larger standard errors. Several factors may have limited the generalizability of the fixed effects models, however. First, all of these analyses came from comparisons of siblings in blended families.
The parents in blended families differed from those in traditional married families because at least one of the parents had children from a previous relationship, limiting the external validity of these results. Second, the father-absent category included children of divorced parents as well as children of never-married mothers, whereas the father-present category contained both children whose mothers were married at birth and children whose mothers married after the child's birth. We might expect that the benefit of moving from a single-parent household to a married-parent household would be smaller than the benefit of being born into a stably married family. Given these comparisons and the small samples involved in estimation, it is understandable that we found little evidence of an impact of family structure on test scores using fixed effects models. Although there were clear patterns in the GCM and fixed effects analyses, LDV studies were a mixed bag: Half found effects and half did not. Sometimes the results were not robust even
within the same paper. For example, both Cherlin et al. (1991) and Sanz-de-Galdeano & Vuri (2007) found significant effects for math scores but not reading scores. Using the same data as Sanz-de-Galdeano & Vuri (the National Education Longitudinal Study), Sun (2001) found positive effects for both math and reading tests. Educational attainment-There is stronger evidence of a causal effect of father absence on educational attainment, particularly for high school graduation. Of nine studies examining high school graduation using multiple methodologies, only one found null effects, and this study used German data to compare siblings in blended families.
There was also robust evidence of effects when attainment was measured by years of schooling. Again, the only studies that found no effect of father absence were those that used international samples or compared siblings in blended families. Finally, there was weak evidence for effects on college attendance and graduation, with only one of four studies finding significant results. Taken together, the evidence for an effect of father absence on educational attainment, particularly high school graduation, is strong in studies using US samples, perhaps because of the relatively open structure of the US educational system compared with the more rigid tracking systems within many European countries. How might one explain the stronger, more consistent evidence base for father absence effects on educational attainment relative to cognitive ability? One explanation is that measurement error in test scores is to blame for the weak and sometimes inconsistent findings in that domain. Another explanation is that the methods involved in measuring attainment---sibling models and natural experiments---do not control as rigorously for unobserved confounders as the repeated-measure studies (GCM, LDV, IFE) of cognitive ability. The lack of strong test score effects is also consistent with findings in the early education literature that suggest that cognitive test scores are more difficult to change than noncognitive skills and behaviors (see, e.g., HighScope Perry Preschool Project; Schweinhart et al. 2005). Given that educational attainment is based on a combination of cognitive ability and behavioral skills (that are influenced by family structure, as we describe below), it makes sense that we find strong evidence of effects on the likelihood of high school graduation but not on test scores. Attitudes, performance, and engagement-A smaller number of analyses (10) examined the effect of father absence on children's school performance, including GPA, coursework, and track placement. 
Of these analyses, four found no significant effect on track placement using German data and multiple methodologies (Francesconi et al. 2010). Three analyses came from a study in the United States by Frisco et al. (2007) that found effects for GPA and courses failed, but not for a third, somewhat unusual measure: years of math coursework completed. It is difficult to draw any conclusions about the effects of family structure on school performance across these disparate samples and measures. Finally, seven studies examined the effect of father absence on educational engagement and aspirations among teenagers in the United States. Five of the seven analyses found no effect on these noncognitive measures. For example, one study (Sun & Li 2002) found positive effects on aspirations, but the other two found no effect. Similarly, one study (Astone & McLanahan 1991) found positive effects on school engagement, but the other three found no effect. The latter findings suggest that educational aspirations and orientations toward schooling may form at younger ages, and none of these analyses examined aspirations among children younger than age 12. --- Mental Health After education, the second most common outcome examined in the literature is mental health, which is measured as social-emotional development when respondents are children and adolescents. Mental health and social-emotional development are closely related to what social scientists call noncognitive skills or soft skills to distinguish them from cognitive skills such as math and reading tests. Recent research shows that social-emotional skills play an important role in adult outcomes, not only in influencing mental health but also in influencing educational attainment, family formation and relationships, and labor market success (Cunha & Heckman 2008).
Adult mental health-We identified six studies that examined the association between parental divorce and adult mental health (see Table 2). Three of these studies were based on UK data, and two were based on US data. All of the empirical strategies that we discussed in the previous section were used to estimate the effects of divorce and father absence on adult mental health. The findings were quite robust, with four of the six analyses showing a negative effect of parental divorce on adult mental health. Moreover, one of the two null findings (Ermisch & Francesconi 2001) was overturned in a subsequent paper by the same authors that distinguished between early and later exposure to divorce (Ermisch et al. 2004). Social-emotional problems-Social-emotional problems in childhood are typically measured using the Child Behavior Checklist (CBCL) (Achenbach & Edelbrock 1981), which includes behaviors such as aggression, attention, anxiety, and depression. Some researchers use the full CBCL scale, whereas others use subscales that distinguish between externalizing behavior (aggression and attention) and internalizing behavior (anxiety and depression). For adolescents, researchers often use a delinquency scale or a measure of antisocial behavior, which overlaps with some of the items on the externalizing scale. A few of the studies we examined looked at other psychological outcomes, such as locus of control and self-esteem, and several studies looked at substance use/abuse. We identified 27 separate analyses that examined the association between parental divorce and some type of externalizing behavior or delinquency. These analyses were based on data from four countries: the United States, the United Kingdom, Canada, and Australia. Of these, 19 analyses found a significant positive effect of divorce or father absence on problem behavior for at least one comparison group, whereas 8 found no significant association.
The findings varied dramatically by method, with the LDV approach yielding the most significant results and the two fixed effects approaches yielding the fewest significant findings. Two analyses found effects among boys but not girls (Cooper et al. 2011, Morrison & Cherlin 1995), and one analysis found effects among girls but not boys (Cherlin et al. 1991). Of the analyses reporting null findings, several had characteristics that might account for the lack of significant findings. One combined cohabiting parents with married parents (Boutwell & Beaver 2010), which likely weakened the effect of father absence on child outcomes, as prior research shows that disruptions of cohabiting unions are less harmful for children than are disruptions of marital unions (Brown 2006). A second controlled for family income, which is partly endogenous to divorce (Hao & Matsueda 2006). And a third used a small, school-based sample (Pagani et al. 1998). Six analyses examined internalizing behavior in children, including studies that measured loneliness and difficulty making friends. Three of these analyses reported significant effects of father absence, whereas the other three reported no effects. As was true of the externalizing analyses, the internalizing analyses relied on multiple strategies. Also, as before, the analyses reporting null effects had characteristics that might account for their lack of strong findings. Two of the analyses that used IFE models were based on low-income samples (Bachman et al. 2009, Foster & Kalil 2007), and a third study controlled for income (Hao & Matsueda 2006). In addition, the Bachman analysis compared single mothers who married with those who remained single. Finally, five analyses looked at low self-esteem and low self-control, which are sometimes treated as markers of depression or psychological distress. The findings from these studies were mixed. 
Substance use-We identified six analyses that examined substance use, measured as cigarette smoking and drug and alcohol use. The evidence for this set of outcomes was very robust, with only one analysis reporting a null effect (Evenhouse & Reilly 2004). Furthermore, the findings were consistent across multiple strategies, including SFE models, which often showed no effects for other outcomes. --- Labor Force We found only a few analyses that examined the effect of father absence on children's labor force outcomes in adulthood (see Table 3). In part, this is because earnings, employment, and welfare receipt in adulthood do not lend themselves to analysis using IFEs, GCMs, or LDVs, which require observations before and after the divorce. Indeed, all the analyses of this domain of outcomes used SFE models or natural experiments. However, in many other respects, there is limited comparability between the studies. Although several studies used data from the United States (Biblarz & Gottainer 2000, Björkland et al. 2007, Gruber 2004, Lang & Zagorsky 2001), many of these analyses were derived from estimates based on British or Canadian data. Further, the Gruber (2004) and Corak (2001) studies, which contributed 9 of the 14 analyses, differed in the ages and periods examined, with Gruber using data from a longer time period (1960--1990), a wider range of ages (20--50), and so a much larger set of cohorts (births 1910--1970) than Corak (2001), who examined ages 25--32 and births 1963--1972. The remaining analyses, with the exception of Biblarz & Gottainer (2000), accorded with Corak (2001) insofar as they used data from the mid to late 1990s and focused on respondents in their 20s and early 30s. The findings for effects of father absence were, however, consistent. Both Gruber (2004), using changes in US state laws to allow for unilateral divorce, and Corak (2001), using parental death in Canada, found that divorce was associated with lower levels of employment.
The studies disagreed, however, about for whom these effects were most pronounced, with Gruber's (2004) analyses suggesting that female children of divorce were less likely to work and Corak (2001) finding that male children exposed to parental loss had lower labor force participation. Similarly, using SFE models with British data, Ermisch and coauthors (Ermisch & Francesconi 2001, Ermisch et al. 2004) found evidence of higher levels of labor force inactivity among those who experienced divorce in early childhood. Looking at adult occupational status rather than simply employment status, Biblarz & Gottainer (2000) found that although children growing up in divorced-mother households fared worse than those growing up in stable two-parent households, there was no significant disadvantage to growing up in widow-mother households. However, these researchers did find that children growing up in stepparent households were disadvantaged regardless of whether father absence was due to divorce or widowhood. The results of analyses of the effect of divorce on income and earnings were less consistent than the results for employment. Again, Gruber (2004) and Corak (2001) contributed most of the analyses for these outcomes, with Gruber finding evidence of negative effects of divorce on income per capita and on women's earnings (but not poverty), and Corak finding negative effects of divorce on men's family income (but minimal impacts on earnings). Corak's result is consistent with analyses by Lang & Zagorsky (2001) who, using parental death as a natural experiment, found no effect of father absence on wages and by Björkland et al. (2007) who, using SFE models with US and Swedish data, found no effects on earnings. Corak (2001) also investigated how divorce was related to the receipt of unemployment insurance and income assistance in Canada, finding a higher probability of receiving income assistance but not unemployment assistance. 
--- Family Formation and Stability As with labor force outcomes, there is relatively little research on how family structure affects patterns of offspring's own family formation and relationship stability. The lack of research in this domain is somewhat surprising, given that these outcomes are closely related to the causal effect under consideration. The dearth of studies may be because these outcomes do not lend themselves to LDV, GCM, or IFE analyses. Marriage and divorce-Virtually everything we know about the effects of father absence on marriage and divorce comes from just three studies (see Table 4), all of which used a natural experiment design, with the experimental variable being parental death (Corak 2001, Lang & Zagorsky 2001) or changes in divorce laws (Gruber 2004). All three studies examined marriage as an outcome but came to different conclusions. Lang & Zagorsky found that parental death and divorce reduced the likelihood that sons will marry but found no effect on daughters. Using parental death as a natural experiment, Corak found no evidence of a causal effect of father absence on marriage for either sons or daughters. Finally, using divorce laws as a natural experiment, Gruber found that growing up under the newer, relaxed divorce laws actually increased the likelihood of marriage for youth. The evidence for an effect of father absence on marital stability was more consistent, with both Corak and Gruber finding evidence of a positive effect on separation but not on divorce. Early childbearing-We identified only two analyses that examined the effect of father absence on early childbearing (Ermisch & Francesconi 2001, Ermisch et al. 2004). These analyses were conducted by the same research team, used the same SFE model, and used the same data---the British Household Panel Survey.
Both analyses found a positive association between parental absence and early childbearing, with divorce in early childhood having a stronger effect than divorce in middle childhood. --- CONCLUSIONS The body of knowledge about the causal effects of father absence on child well-being has grown during the early twenty-first century as researchers have increasingly adopted innovative methodological approaches to isolate causal effects. We reviewed 47 such articles and find that, on the whole, articles that take one of the more rigorous approaches to handling the problems of omitted variable bias and reverse causality continue to document negative effects of father absence on child well-being, though these effects are stronger during certain stages of the life course and for certain outcomes. We find strong evidence that father absence negatively affects children's social-emotional development, particularly by increasing externalizing behavior. These effects may be more pronounced if father absence occurs during early childhood than during middle childhood, and they may be more pronounced for boys than for girls. There is weaker evidence of an effect of father absence on children's cognitive ability. Effects on social-emotional development persist into adolescence, for which we find strong evidence that father absence increases adolescents' risky behavior, such as smoking or early childbearing. The evidence of an effect on adolescent cognitive ability continues to be weaker, but we do find strong and consistent negative effects of father absence on high school graduation. The latter finding suggests that the effects on educational attainment operate by increasing problem behaviors rather than by impairing cognitive ability. 
The research base examining the longer-term effects of father absence on adult outcomes is considerably smaller, but here too we see the strongest evidence for a causal effect on adult mental health, suggesting that the psychological harms of father absence experienced during childhood persist throughout the life course. The evidence that father absence affects adult economic or family outcomes is much weaker. A handful of studies find negative effects on employment in adulthood, but there is little consistent evidence of negative effects on marriage or divorce, on income or earnings, or on college education. Despite the robust evidence that father absence affects social-emotional outcomes throughout the life course, these studies also clearly show a role for selection in the relationship between family structure and child outcomes. In general, estimates from IFE, SFE, and PSM models are smaller than those from conventional models that do not control for selection bias. Similarly, studies that compare parental death and divorce often find that even if both have significant effects on well-being, the estimates of the effect of divorce are larger than those of parental death, which can also be read as evidence of partial selection. --- The Virtues and Limitations of the Key Analytic Strategies Although we are more confident that causal effects exist if results are robust across multiple methodological approaches, it is understandable that such robustness is elusive, given the wide range of strategies for addressing bias. It is also the case that each of these strategies has important limitations and advantages. Although GCMs, LDV designs, and PSM models allow for broad external validity, these approaches do less to adjust for omitted variables than do IFE and SFE models. 
Yet such fixed effects models require one to assume that biological parents in blended families are just like parents in nonblended families and that the age at which children experience father absence does not affect their response. In general, studies that employ more stringent methods to control for unobserved confounders also limit the generalizability of their results to specific subpopulations, complicating efforts to draw conclusions across methods. In many ways, the natural experiment strategy is appealing because it addresses concerns about omitted variable bias and reverse causality. In practice, however, these models are difficult to implement. Approaches that use parental death must make assumptions about the exogeneity of parental death and the comparability of the experiences of father absence due to death and divorce. Similarly, approaches that use instruments such as divorce law changes and incarceration rates must make a convincing case that such policies and practices affect child outcomes only through their effects on family structure. Some of these methodological approaches are better suited to examining one set of outcomes rather than others. For instance, GCM, LDV, and IFE designs do not lend themselves to the investigation of the effects of father absence on adult outcomes. In contrast, although natural experiments and PSM models can be used to examine a wider range of outcomes, they are much less flexible in how father absence can be measured, generally using dichotomous measures of absence rather than the more detailed categorical measures of family type or measures that seek to capture the degree of instability experienced by children. Because of these differences by method in the domains that are examined and the definitions of family structure that are used, it is difficult to discern if some methods seem more apt than others to find evidence for or against the effect of father absence on children. 
But our impression is that LDV and GCM designs tend to find stronger evidence of effects of father absence on education and, particularly, social-emotional health than do the other designs. The evidence on the effects of father absence is more mixed in studies using IFE and SFE. The relatively smaller number of papers that use PSM designs also return a split verdict. Among those studies using natural experiments, there is some evidence of negative effects of father absence from changes in divorce laws, weak evidence when incarceration is used as an instrument, and mixed evidence from studies using parental death. --- Areas for Future Research Looking across studies, it is apparent that father absence can affect child well-being across the life course. But, within any one study, there is rarely an attempt to understand how these different types of outcomes are related to one another. For instance, studies separately estimate the effect of father absence on externalizing behavior, high school completion, and employment, and from these analyses we can tell that family disruption seems to have effects on each outcome. But it is also plausible that the effect of father absence on high school completion operates through an effect on externalizing behavior or that the effect on employment is attributable to the effect on high school completion. Stated differently, the articles reviewed here do a good job of attempting to estimate the causal effects of father absence on particular outcomes, but they do not tell us very much about why or how these effects come about. This omission reflects a fundamental tension, extending beyond our particular substantive topic, between the goal of estimating causal effects versus the goal of understanding the mechanisms and processes that underlie long-term outcomes (Moffitt 2003). 
Few of the studies reviewed here investigate whether the effects of father absence vary by child age, but those that do find important differences, with effects concentrated among children who experienced family disruption in early childhood (Ermisch & Francesconi 2001, Ermisch et al. 2004). New developments in the fields of neuroscience and epigenetics are rapidly expanding our understanding of how early childhood experiences, including in utero experiences, have biological consequences, and sociologists would benefit from a better understanding of these dynamics as they relate to a wide range of potential outcomes, especially health in adulthood (Barker 1992, Miller et al. 2011). Similarly, although there has been some attention to how boys and girls may respond differently to father absence, researchers should continue to be attentive to these interactions by gender. We found surprisingly little work on interactions between father absence and race or class. Given that African American and low-income children experience higher levels of father absence than their white and middle-class counterparts, a differential response to absence could serve to mitigate or further exacerbate inequalities in childhood and adult outcomes. More work, particularly using the methods of causal inference discussed here, remains to be done on this topic. We also suggest that more research is needed to understand if the effects of father absence on child well-being may have changed over time. We might expect that if stigma has lessened, as father absence has become more common, then the negative effects may have diminished. Alternatively, diminishing social safety net support and rising workplace insecurity could have served to make the economic consequences of father absence more severe and the negative effects more pronounced. 
Finally, emerging research on family complexity shows that children raised apart from their biological fathers are raised in a multitude of family forms---single-mother families, cohabiting-parent families, stepparent families, blended families, multigenerational families---many of which are often very unstable (McLanahan 2011, Tach et al. 2011, Tach 2012). Indeed, stable single-mother households are quite rare, at least among children born to unmarried parents, which means that unstable and complex families may be the most common counterfactual to the married two-biological-parent family. Thus, studies of the causal impact of father absence should not treat father absence as a static condition but must distinguish between the effect of a change in family structure and the effect of a family structure itself. --- Author Manuscript, McLanahan et al. --- Table 1. Studies of the effects of father absence on education
The literature on father absence is frequently criticized for its use of cross-sectional data and methods that fail to take account of possible omitted variable bias and reverse causality. We review studies that have responded to this critique by employing a variety of innovative research designs to identify the causal effect of father absence, including studies using lagged dependent variable models, growth curve models, individual fixed effects models, sibling fixed effects models, natural experiments, and propensity score matching models. Our assessment is that studies using more rigorous designs continue to find negative effects of father absence on offspring well-being, although the magnitude of these effects is smaller than what is found using traditional cross-sectional designs. The evidence is strongest and most consistent for outcomes such as high school graduation, children's social-emotional adjustment, and adult mental health. --- Keywords: divorce; single motherhood; education; mental health; labor force; child well-being --- Disclosure: The authors are not aware of any affiliations, memberships, funding, or financial holdings that might be perceived as affecting the objectivity of this review. --- 1. We use the term "father absence" to refer to children who live apart from their biological father because of divorce, separation from a cohabiting union, or nonmarital birth. We use the terms "divorce" and "separation" to talk about change in children's coresidence with their biological fathers.
Background Interprofessional collaboration is central to the primary care of complex chronic illnesses such as medical frailty [1,2]. To support medically frail older patients, who often have complex multimorbidity and an increased risk of mortality [3], the interprofessional team should collectively engage patients and their care partners in end-of-life (EOL) conversations [4][5][6][7]. Engaging in talk about immediate goals, fears, and wishes when facing a life-limiting illness can improve quality-of-life and goal-concordant care [8]. However, little research has explored the forces that shape interprofessional collaboration to support EOL conversations in primary care. To address this knowledge gap, which limits the ability to improve practice, we conducted a critical ethnography to examine the structural forces shaping nurses' and allied health clinicians' involvement in EOL conversations in the primary care of frail older adults. --- Interprofessional primary care Primary care is the first point of contact with the healthcare system. It provides longitudinal care and aims to prevent and manage chronic illnesses, which is best achieved by an interprofessional approach that involves sharing knowledge, skills, and experience [9]. As primary care grapples with increasingly complex multi-morbid patients, who are discharged from acute care "quicker and sicker" and require comprehensive care from a team, interprofessional collaboration is essential [1,10]. Interprofessional collaboration is defined as a partnership between a team of clinicians as well as a patient in "a participatory, collaborative and coordinated approach to shared decision-making around health and social issues" [11] (p.1). Effective collaboration requires understanding of team members' roles, effectively managing conflict, supporting team functioning, and collaboratively formulating, implementing, and evaluating care to enhance patient outcomes [11].
In primary care, the most responsible provider, usually a physician or nurse practitioner, is considered the medical expert. Within interprofessional teams, the most responsible provider has an overall responsibility for directing and coordinating patients' care [12][13][14]. At the same time, most responsible providers are expected to understand, respect, and support their overlapping roles and responsibilities with other clinicians on the team and be able to change from team leader to team member based on need [15]. Primary care exists within larger healthcare systems and social policies, which in Canada and other countries internationally tend to be shaped by neoliberal logics [16] that promote cost and speed efficiencies, government deregulation to encourage innovation, privatization of public services to lower costs for the state, and individual responsibility to offload structural problems onto individuals [17][18][19][20]. Research shows that healthcare systems governed by neoliberalism are often characterized by interprofessional isolation and conflict as opposed to effective collaboration [16,21,22]. To date, little work has explored the connection between social policies, interprofessional collaboration, and EOL conversations. --- End-of-life conversations in primary care As patients approach their EOL, having a clear understanding of their wishes and values and aligning these to care becomes essential for person-centred EOL care [12]. This can be accomplished through EOL conversations, which include three types of conversations: 1) advance care planning; 2) goals-of-care discussions; and 3) EOL decision-making discussions [23,24]. The focus in this study is on the latter two types of EOL conversations since research has found that advance care planning does not improve EOL care, influence EOL decision-making, help to align care with patient goals, or improve satisfaction of care [25,26]. 
EOL goals-of-care and decision-making discussions require providing information about the illness as well as the harms and benefits of medical interventions, exploring what matters to patients and their care partners, and making recommendations based on expressed goals [23]. Determining goals-of-care is considered the "gold standard" for ensuring person-centred conversations and decision-making [27]. However, research has found that physicians often do not engage patients in EOL decision-making and goals-of-care discussions until death is imminent [4,28]. To address this, an interprofessional approach is recommended for achieving timely and high-quality EOL conversations in primary care [4,28]. If physicians and nurse practitioners collaborate interprofessionally, alignment between patients' goals and the care provided in the last years, months, or weeks of life might improve [13]. --- Nursing and allied health clinicians' roles in EOL conversations According to the Ontario Palliative Care Competency Framework (2019), it is well within the scope of nurses, social workers, occupational therapists, physiotherapists, pharmacists and many other clinicians who care for patients with life-limiting conditions to engage patients in discussions about EOL. These clinicians can assess patients' understanding of life-limiting conditions, recognize common illness trajectories, support the expression of wishes and goals-of-care, and facilitate goal setting, decision-making and informed consent in order to support the best possible outcomes and quality-of-life. An interprofessional primary care team could address unmet emotional, psychological, spiritual, and informational needs of patients at EOL more effectively than a physician or nurse practitioner alone because this approach provides well-rounded information from a variety of disciplines and improves access to timely EOL conversations due to the availability of more clinicians [12,29]. 
However, research examining how clinicians collaborate to support EOL conversations reveals nurses and allied health clinicians are most often not engaging patients in these conversations [7,12,30]. Previous studies have aimed to improve the problem of low interprofessional collaboration in serious illness conversations by improving the communication training of clinicians from multiple disciplines [7,28]. Although these interventions promoted more frequent and higher quality conversations, they did not lead to earlier conversations in patients' illness trajectory [7,28]. Clinicians also experienced challenges including role confusion, less trust from clinicians, exclusion from collaboration, and the perception that EOL conversations were futile [7]. The social, political, and professional conditions that created these forms of collaboration were not examined in this research, leaving gaps in understanding how social and practice structures shape collaboration. As critical health scholars who are interested in the ways normative (i.e., dominant social rules) logics and social structures shape care, our previous work explored how biomedical norms constrain EOL conversations between physicians or nurse practitioners (e.g., patient's most responsible providers) and frail older adults and/or their care partners in an urban Family Health Team [30]. Our findings suggest that attempts by patients or the most responsible provider to talk about decline, death, or the limits of medicine, were constrained by talk and behaviour that emphasised the possibility of living longer [30]. The logic of reversing or mitigating decline is reinforced by biomedical culture, clinical practice guidelines, and the societal expectation of longevity, making it less possible for EOL conversations to occur in primary care. This work demonstrated the importance of examining the way broad, yet often hidden forces shape EOL conversations. 
In this manuscript we build on our previous analysis of ethnographic data to critically explore how interactions within a team impact EOL conversations. We previously found that while the conversations were fragmented, patients' primary physicians or nurse practitioners did engage frail older adults in goals-of-care discussions or decision-making, yet nurses and allied health clinicians did not [30]. To investigate why this pattern was observed, we explore the factors that influence the quality of collaboration in the primary care team as well as the policies that govern interprofessional practice more broadly. --- Methods We engaged in a critical ethnography using observations, document analysis, and interviews to gather in-depth information about how macro-structures, such as policy and normative assumptions, influence how a team of clinicians collaborates around EOL conversations [31][32][33]. Critical ethnography differs from ethnography in that it seeks not only to understand and describe the language and behaviours of a group at the micro level, but also to interpret how group culture is shaped by sociopolitical structures [33]. To guide our investigation of the characteristics shaping patterns of collaboration between clinicians from multiple disciplines, we drew on the Gears Model of Factors Affecting Interprofessional Collaboration (see Table 3) [34]. Like other investigators [35][36][37], we used this taxonomy of characteristics at the macro, meso, and micro levels to examine the quality of collaboration in this team. To link patterns of collaboration with sociopolitical structures, we also examined the governing policies of the clinic and the normative assumptions within the interviews, observations, and policies [20,38]. --- Setting and recruitment The Family Health Team (referred to as the clinic) we studied is part of a larger academic teaching hospital located in Ontario, Canada that is governed by a physician board.
This teaching hospital has an existing interprofessional education program, including access to structured student and staff interprofessional education, and dedicated resources. Additionally, there was at least one champion within the clinic who acted as a representative of institutional interests in effective collaboration. The clinicians also received professional development on EOL conversations. The clinic comprises over 20 staff family medicine physicians and several nurse practitioners, who are nurses with 'extended class' status, meaning they have received graduate university education allowing them to order tests, diagnose, and prescribe. The clinic also includes nurses and allied health clinicians such as social workers, an occupational therapist, a physiotherapist, pharmacists, and dieticians. This team works with patients of all ages, including frail older adults, and patients from diverse socio-economic and racialized backgrounds. We used purposeful sampling, which allowed for the recruitment of participants who could provide rich information about the topic of inquiry and allowed for in-depth understanding of an issue, with the results being transferable rather than generalizable [39]. To participate, clinicians had to care for patients who they considered to be mildly to severely frail on the Clinical Frailty Scale [40] (used for recruitment, not care). A senior physician at the clinic facilitated clinician recruitment. --- Clinician participants We employ certain terminology to refer to the clinicians in our study. The term 'allied health clinician' refers to a social worker, physiotherapist, and occupational therapist. The term 'nurse' refers to registered nurses and registered practical nurses, who share similar scopes of practice, as opposed to 'nurse practitioners', whose scope of practice resembles that of family medicine physicians.
'Medical professionals' refers to both nurse practitioners and physicians who act as the most responsible provider to patients in the clinic. Grouping participants together is important to protect anonymity. Twenty (n = 20) clinicians participated in this study: 10 medical professionals (8 physicians + 2 nurse practitioners); 4 nurses (including both registered nurses and registered practical nurses); 4 allied health clinicians (including 1 social worker; 1 occupational therapist; 1 pharmacist; 1 physiotherapist); and 2 medical students. One clinician who was approached declined to participate and one clinician withdrew after being observed due to increasing work demands (see Table 1 for demographics of clinician participants). All participants involved in the study provided informed written consent prior to their engagement in the research process, and assent was obtained during each research encounter. --- Data production We utilized several data production strategies: observations of clinicians in their day-to-day activities; one-to-one semi-structured interviews; and document analysis. Data were produced from February to October 2019, resulting in 17 interviews with clinicians (one clinician withdrew prior to the interview, and two left the clinic), each lasting 60 minutes on average, and over 100 hours of structured observations of clinicians' day-to-day activities excluding direct patient care. On average, each clinician was observed for 6.7 hours (min 1 h and max 13.5 h) (see Table 2 for the data production of each participant). An observation guide was used to focus the first author's (CC) attention in the field on people, communication, collaboration, conflict, and talk of frailty or EOL. CC wrote reflexive notes about initial impressions, decisions about what to observe and when, how participants responded to her, and ethical dilemmas such as being shown patient records that were not part of the research protocol.
Interviews were conducted by CC in person at the clinic using a semi-structured interview script. Interviews were recorded and transcribed verbatim by a transcription service. CC listened to audio recordings to check the accuracy of the transcription. Transcripts were then de-identified and audio recordings deleted. The remaining data will be retained on a password protected secure server for 5 years and then deleted. To increase rigor, the team held routine meetings to discuss data production strategies, such as note taking techniques, examine decisions in the field, and explore initial analytic ideas [41]. To produce data about the structural level policies that shape primary care practice in Ontario, Canada, we conducted searches of policies and documents, and consulted with experts, including the Association of Family Health Teams of Ontario. Some governing policies such as annual funding agreements between the Family Health Team and the Ontario Ministry of Health and Long-Term Care (MOHLTC) were not publicly available. The documents we analyzed include: 1) the Ontario Palliative Care Competency Framework; 2) Primary Care Performance Measurement Framework for Ontario 2014; 3) Physician Service Agreement 2012; 4) Health Quality Ontario -Primary Care Performance in Ontario 2019; and 5) Family Health Team Accountability Reform Application Package 2014-2015. These policies were included because of their central role in governing primary care through the enforcement of practice competencies, funding priorities, performance measures, and clinical accountabilities. --- Data analysis Our analysis began with data simplification using the Gears Model (see Table 3) [34]. This framework was used as a coding structure to code observation notes, interview transcripts, and document data to draw our attention to factors in the data such as beliefs and formal information structures that shape collaboration [34,42]. 
We then examined how the characteristics, captured by each code, impacted the way nurses and allied health clinicians collaborate in EOL conversations. These codes were then sorted by characteristics that support EOL conversations, such as knowledge, and by barriers to EOL conversations, such as prioritizing biomedical assessments. Secondly, we critically examined the coded data to understand the assumptions and logics influencing collaboration. We accomplished this by paying attention to patterns of language and actions that revealed the underlying assumptions [20]. This final progression of coding resulted in a number of themes, which we present in our findings. --- Results Our findings suggest that nurses and allied health clinicians, including a social worker, physiotherapist, and occupational therapist, have the knowledge, skills, and willingness to facilitate EOL goals-of-care and decision-making discussions with frail older adults. However, most of these clinicians did not engage in EOL conversations within this setting. We argue that forces at the practice, organizational, and structural levels constrain nurses and allied health clinicians' practice. These constraints can be traced to neoliberal-biomedical ideas that normalize and prioritize biomedical effectiveness and efficiency. These ideas converge to impact team collaboration in particular ways that limit nurses and allied health clinicians' involvement in EOL conversations. --- A culture governed by neoliberal-biomedicine Primary Care Performance Measures in Ontario assess access, patient-centredness, integration, effectiveness, focus on population health, efficiency, safety, and appropriate resources (see Additional file 1). From our examination of these measures, we suggest the governance of primary care in Ontario, at the time of the study, was shaped by neoliberal-biomedical logics.
Neoliberal-biomedicine refers to the way pervasive social norms, such as efficiency and cost containment, are intertwined to produce certain effects on care. We observed how efficiency, relating to cost and speed, together with individualism, promotes individual responsibility for health and construes freedom as choice. Performance indicators for access mostly emphasize the quantity of, and speed at which, patients are seen by a physician or nurse practitioner, while also assessing the reduction of costly acute care services. For example, one measure evaluates 'the percentage of patients who report that they were able to see their physician or nurse practitioner on the same or next day'. Performance indicators for integration tend to emphasize cost containment through discouraging duplication of services and preventing hospitalizations, with little emphasis on non-medical, social, or community care. Indicators for population health tend to promote individual responsibility for the prevention of illness by evaluating the percentage of patients who have engaged in health screenings and certain lifestyle choices. For example, one measure evaluates 'the percentage of female grade-eight students who have completed vaccination against human papillomavirus'. Efficiency logic intersects with biomedicine particularly through the promotion of evidence-based practice, which not only justifies the use of public funds but promotes cost-saving health prevention initiatives. When we refer to biomedicine in this context, we are most interested in biomedical dominance and the control over what constitutes legitimate "healthcare" and who can practice it. Within the performance measures, we found indicators for effectiveness often promote the appropriate screening and treatment of chronic illness, emphasizing that illnesses can and should be treated. One objective of the performance measures is to assess patient-centredness.
A system-wide objective of patient-centredness based on best practice should encourage collaborative relationships between clinicians and patients that prioritize patients' values, goals, beliefs, and needs. However, patient-centred care is measured by access to biomedical care and individual choice, thereby reproducing logics of biomedical effectiveness and individualism, not patient-centredness. For example, measures evaluated 'spending enough time with', 'involvement in decision-making', and 'an opportunity to ask questions'. Our overall analysis of performance measures suggests that neoliberal-biomedicine, particularly the logics of biomedical effectiveness and efficiency, is evident at the structural level. We also found this ideology exerts, to various degrees, influence on beliefs and actions at the organizational and practice levels, shaping team collaboration. --- Nursing and allied health knowledge and skills to support EOL conversations Nurses and allied health clinicians had consistent and clear beliefs about when to initiate EOL conversations, what to discuss, and how to discuss it. To describe when it is appropriate to start having EOL conversations, the nurses and allied health clinicians spoke of conditions that signify the body is deteriorating, such as being frail, getting a diagnosis of a terminal illness, or a decrease in health status. A clinic nurse explains: ... The nurses and allied health clinicians also spoke about the importance of person-centredness during EOL conversations, which extends beyond offering and exploring choice about EOL care. These participants believe it is important to elicit patients' goals and wishes at the EOL, including what is important to them and what they are hoping to accomplish before they die. They emphasized how the personhood of the patient should be discussed, not just their physical body.
A nurse explains: --- We might think the focus is physically what you want done, but for some people, the focus maybe more of like a spiritual aspect... so just in sharing or asking them what was the goal of your loved one, what was their desire for death... anything they wanted to do or accomplish... just figuring out how they can live in their last days, live and die with dignity... just kind of identifying what are their wishes, what are their desires and yeah, and ensuring that we can support them in that sense. (Nurse 023) Many of the nurses and allied health clinicians also drew on person-centredness to emphasize the role rapport plays in EOL conversations. For example, many believe EOL conversations should be initiated by someone with an existing relationship who knows the patient's values and has their trust, which is usually their primary physician or nurse practitioner: --- Usually it's the physician that initiates it. It should be someone who the patient has a trusting relationship with too. Yeah someone that they've built a rapport with, I think like a healthcare professional who is understanding... who knows their situation a little bit and they're able to have honest, open, trusting communication with them. (Nurse 023) The nurses and allied health clinicians' descriptions of EOL conversations highlight their overall knowledge of this complex practice. Despite this knowledge, there was agreement that nurses and allied health clinicians are not generally involved in EOL conversations, yet it could be beneficial if they played a bigger role. Some medical professionals in the team suggested the value of nurses and allied health clinicians taking on more responsibilities: --- ...[C]ertainly nurses and, you know, nurses ask my patients (pause) about their sex history,... they ask them about all, like all kinds of... taboo topics...
So, I have no concerns about them being able to address like end of life in a, you know, a sensitive sort of open-ended... manner.... the social worker or the counsellors, yeah I think anyone could... but yeah.... I don't think in ten years I've ever, like I remember anybody coming to me and being like, oh you know, like I'm the nurse, I took their blood pressure, I was talking to them about their hospitalisation and, you know, I asked a bit about who's going to make decisions for them, that's never happened. (Medical professional 029) Nurses and allied health clinicians hold practice knowledge about EOL conversations that supports them to engage frail older adults in EOL conversations, yet this group rarely enacts this dimension of care. Part of the reason for this appears to be a culture characterized by certain patterns of collaboration that make it difficult for nurses and allied health clinicians to initiate and sustain EOL goals-of-care and decision-making discussions. --- Biomedical dominance and efficiency: Constraints to nurses and allied health clinicians' practice When examining interprofessional collaboration at the clinic, biomedical dominance is noticeable in which profession is most central to patients' care. At the clinic, practice is structured so that nurses and allied health clinicians are positioned to provide episodic, task-based care, which limits their knowledge of, and rapport with, patients, making it less possible for them to support EOL goals-of-care and decision-making discussions. Additionally, influenced by efficiency logic, the clinical team works to maintain fast-paced care, which leads to a particular pattern of relating between disciplines, making it less possible for nurses and allied health clinicians to engage in EOL conversations.
Our findings of how biomedical dominance and efficiency constrain collaboration in EOL conversations beyond physicians and nurse practitioners are organized into three sections: 1) lack of longitudinal relationships; 2) lack of collaborative decision-making; and 3) undervaluing nurses' practice. --- Lack of longitudinal relationships Biomedical dominance is reproduced in the way nurses and allied health clinicians' roles are organized in the clinic. Whereas these clinicians are relegated to short-term care, physicians and nurse practitioners foster longitudinal relationships with patients over time, which supports the facilitation of EOL goals-of-care and decision-making conversations because of their understanding of patients' medical history and values. One of the medical professionals explains how longitudinal relationships support EOL conversations: --- Having a conversation is really important, but just in kind of understanding what their [a patient's] past behaviour is like and having conversations about their life, about their childhood, about what's going on now,... I get a sense of who they are. And I'm not suggesting that that should replace a good [goals-of-care] conversation where you allow that person the opportunity to actually say it, but it's a really rich (pause) in an area where you get to know people over a longitudinal thing, it's a very rich environment to understand people's values, wishes... what they define as quality. (Medical professional 002) Long-term relationships make it easier to engage in EOL goals-of-care and decision-making discussions with patients. Another medical professional agrees, and elaborates that it can be challenging for nurse practitioners and physicians to find the time to support EOL conversations, and that it would be helpful if nurses and allied health clinicians could assist in this work.
However, a lack of close relationships with patients makes this difficult:

[J]ust putting those first few questions out [philosophic or value-based discussions], you can't just walk out of the room, like it turns into a longer appointment and then you're behind... it would be nice to [have]... people to be able to talk through this kind of thing, more social workers. But if they don't have that long-term relationship with the patient, it's not going to go anywhere, right? So, it comes back to the same people who have the long-term relationship, who are busy, and have lots of patients (pause) like it just is not (pause) it's not great at all. (Medical professional 026)

The way roles are designed, physicians and nurse practitioners are central to patient care as they develop continuous relationships with their patients. This is in contrast to nurses and allied health clinicians, who often have longer appointments with patients yet generally do not know the patients well, thereby making it more difficult to engage in EOL goals-of-care and decision-making discussions. A nurse discusses how the organization of their role as task-based limits the possibility of EOL conversations:

The thing is with the clinic, it's like more episodic and focus on one (pause) single issues. So, like just sometimes in the clinic we just do blood pressure or dressing change, or just specific task. It's hard for me to kind of make just a decision (pause) oh the patient needs advanced care planning.... Just because I feel like I don't see the whole picture of their health. (Nurse 005)

Because of biomedical dominance, the established pattern of collaboration is to involve nurses and allied health clinicians in task-based care, which limits what these clinicians know about patients and how they contribute to their care.
Without longitudinal relationships and knowledge of patients' health history, it becomes more difficult and less likely for these clinicians to engage in EOL goals-of-care and decision-making discussions.

--- Lack of collaborative decision-making

Nurses and allied health clinicians are most often involved in patient care because of a request or referral from the patient's physician or nurse practitioner. Nurses are most often involved in patient care through in-the-moment task-based requests, whereas allied health clinicians often receive an electronic referral requesting specific types of support. In describing the organization of their role, an allied health clinician (003) states, "I'm mandated to see each patient only once. I'm in a consultant role". Another allied health clinician further explains the care they provide and how they communicate with physicians and nurse practitioners:

I see people two to three times, and then discharge them to services that can be longer-term. I always write to the referring provider with a note about history, clinical presentation and plan and goals. I usually get a note back and we dialogue this way. (Allied Health Clinician 010)

When requested to provide care, nurses and allied health clinicians use their specialized knowledge to support patients through assessments and access to other services, as opposed to providing long-term therapy, planning, or follow-up care. The biomedical dominance that shapes the organization of allied health clinicians' work also leads some of these clinicians to feel excluded from shared decision-making and collaboration in patient care. An allied health clinician explains:

Some physicians do not referral at all. They maybe don't know what I do. (pause) or they might have myopic focus on medicine and pay less attention to the psycho-social issues and ways we can help.... It's not as collaborative and cohesive as could be... There's no mechanism for that collaboration really.
(Allied Health Clinician 004)

Nurses and allied health clinicians occupy a supportive, if not marginalized, position in the clinic. Biomedical dominance makes the work and knowledge of physicians and nurse practitioners central to patient care, with few opportunities to include nurses and allied health clinicians in shaping patient care to a similar extent. Additionally, by design of the clinical workflow and the allocation of clinical responsibilities, nurses and allied health clinicians provide episodic, referral-based care, which has consequences for their ability to engage in EOL goals-of-care and decision-making discussions.

--- Undervaluing nurses' practice

Observation notes by the first author (CC) detail the constant work done by the team to provide efficient, fast-paced care. An example of this is the way it was noted that team members are friendly and polite but walk quickly around the clinic, going from one task to another, rarely stopping. An excerpt of the observation notes exemplifies this:

Observing a physician:... He comes out of his room walking towards the nursing station. A nurse is there and gives an update about a blood pressure reading. "That's great, thank you [name]!" he says and puts a piece of paper in a mailbox. He turns back to his room, quickly sits down at the computer, pulls something up on the screen, skims it and then walks quickly to the waiting room to get his next patient.

Efficiency logic is also reproduced in the way some clinicians speak about their roles. This is particularly true for physicians, who see their time as a resource that needs to be used efficiently and fairly. Consider the following two comments from medical professionals on the culture of efficiency:

It's realizing that as much as you want to spend 90 minutes with a patient, that comes at the expense of your other patients.
So, you have to balance that time in the room with this patient against the patients that are outside in the waiting room who you also need to see. (Medical professional 013)

We think of our time as a resource. And you know, if I spend an hour with a patient, that means there's three other patients I don't see, so my job is to create access. (Medical professional 001)

Efficiency logic influences the collaboration between nurses and physicians in a way that limits nurses' involvement in EOL conversations. At the heart of this efficient primary care clinic is the 15-min appointment with a physician, with 30-45 min appointments for some complex frail patients. For physicians to see a new patient every 15-30 min, they often require help from nurses. A participant explains:

I admit, our physicians think about their practice, their population, how can you help me with X... they just think about getting through their day with these patients and this problem in front of them, because they're too busy to think any other way. So, they're like, 'I want a nurse to help me with all these people.' (Medical professional 007)

The expectation for efficiency shapes the role of nurses to help physicians with their patient care, rather than to cultivate their own forms of practice with patients. Nursing roles at this clinic most often include giving immunizations, measuring vital signs, assessing infants, doing wound care, performing administrative tasks, and contributing to healthcare or team organization. A nurse comments on their role and collaboration with physicians:

I feel as a nursing scope of practice,... we have well-baby assessments [developmental and safety screening for infants and toddlers] or help with physicals, dressing change, those kinds of things... it just really depends on the need of the clinic. If (pause)... the family doctor... maybe they are too busy with medical care... it's important for a nurse... then [to] help the doctors.
(Nurse 005)

Physicians often request in-the-moment support from nurses, which can catch them mid-task in their clinical care. This form of collaboration prioritizes efficient biomedicine but leads to interruptions of nursing work. Interruptions indicate an undervaluing of nursing practice as a cultural norm and make it less possible for nurses to have EOL goals-of-care and decision-making discussions, because they are less likely than other clinicians to have uninterrupted time. A nurse explains:

I've worked really hard on if my door's closed then there's a reason for that, and my colleagues know that.... I would hope that...my colleagues and management would see it as a worthwhile time for me to spend time with these people, whoever needs to have that [EOL] conversation, and would respect that.... but I do have interruptions at times, and it does complicate that conversation.... interruption is a big one. So even though we went, we just had that big spew about what I try to set the tone for, I still get interrupted. (Nurse 030)

While this pattern of relating is normalized by neoliberal-biomedical logics, some nurses at this clinic resist it. A group of the nurses worked with management to stop interruptions to their work. Management announced a written nursing request system that the team is meant to use instead of interrupting. A medical professional explains why this is important, as captured in the following observation note:

A physician said he had to take a lengthy history and I [author CC] asked if he often would ask a nurse to do it. He said that's a good question and shut the door. He said we're in a difficult time. There has been the introduction of the medical communication form that just happened... He said it's been years in the making.... The nurses do not want to do menial tasks. Workload is high and they want to practice to their full scope.
It is not okay to ask nurses to do histories or pre-screening because they have other more important work to do. He says if he's behind, he can ask nurses to help him out, but that should not be a regular thing.

However, despite this new communication system, interruptions continued. This was captured in author CC's observation notes:

A medical resident comes out of his room in a hurry asking where is nursing? I'm [Author CC] the only person there and say, 'they all seem occupied' and gesture to the closed doors of the nursing offices. He looks around at all the doors and then a nurse comes out of her room with a blood pressure machine, putting it back where it is stored. The resident tells her he needs help with vaccinations. The nurse pauses, seeming unsure of what to do. She turns back to her room, saying she is with a patient right now, and instructs him to write a communication note. He pops into another nurse's office and asks for help.

Neoliberal-biomedical logics normalize a particular way of collaborating in this clinic that prioritizes efficient, fast-paced medical care. This logic organizes nurses and their work to be task-oriented and driven to meet the needs of the clinic, while undervaluing nurses' independent practice and expertise. This context and culture of collaboration is one of the ways nurses are constrained in their ability to engage in EOL goals-of-care and decision-making discussions.

--- Discussion

Our analysis suggests that the distribution of tasks and roles in this Family Health Team is shaped by neoliberal-biomedical logics that normalize and prioritize biomedical effectiveness, biomedical dominance, and efficiency, thereby limiting interprofessional collaboration for EOL goals-of-care and decision-making discussions.
Biomedical effectiveness and dominance prioritize the role of biomedicine in sustaining patients' physical health, preventing decline and death, and controlling what counts as healthcare and who can practice it [21,43,44]. Efficiency prioritizes speed and minimizing costs [16,22,45]. Together these logics create a culture that prioritizes the work of physicians and nurse practitioners while normalizing limited collaboration with nurses and allied health clinicians and restricting their practice to providing episodic task-based care that supports biomedical efficiency rather than drawing on their own professional expertise. This culture and its patterns of relating limit the possibility of nurses and allied health clinicians' engagement in EOL conversations. Our findings align with a small but growing body of work that draws attention to the way relationships of power operate on interprofessional collaboration [46]. However, we are the first to apply this type of critical analysis to ethnographic data to explicate the way relationships of power limit collaboration for EOL goals-of-care and decision-making discussions in primary care. Our findings suggest the barriers to nurses and allied health clinicians' involvement in EOL conversations are less related to skills and knowledge and more rooted in normative logics that shape the way primary care service delivery is structured and evaluated. While we recommend that strategies to improve interprofessional collaboration for EOL conversations be targeted at the structural level, our data does highlight
some possible gaps in knowledge about EOL conversations for nurses and allied health clinicians. For example, when asked about what to discuss during EOL conversations, nurses and allied health clinicians rarely mentioned the importance of exploring patients' understanding of their illness and using patients' goals to guide decision-making. Some resources could be directed at clarifying scopes of practice in relation to EOL conversations and providing instruction on how to engage in robust EOL goals-of-care and decision-making discussions in primary care.
Neoliberal-biomedical logics are present in the way clinicians' roles and responsibilities are governed in primary care, with the work of physicians and nurse practitioners being organized as central to all patient care, and nurses and allied health clinicians being mandated to provide episodic care with little long-term relationship development with patients. Structuring care this way supports biomedical effectiveness and efficiency. Neoliberal-biomedical logics are also present at the practice level, with nurses and allied health clinicians being less involved in decision-making about patient care and ownership of care. Hierarchies between clinicians from different disciplines are well documented, especially between nurses and physicians [47]. There is a long history of medicine expecting obedience from nursing, with nurses being expected to act as physicians' eyes and hands [47,48]. Nurses have often been treated as "physician's assistants" who perform manual labour at the direction of physicians, rather than having their own practice, forms of knowledge, and expertise [48,49]. A similar disregard for the expertise of social work and occupational therapy exists, with their knowledge base and clinical effectiveness often being questioned within interprofessional medical teams [50]. Despite policy shifts towards interprofessional collaboration in the provision of primary care, physicians often remain the "de facto" leaders of these teams, a trend that disrupts collaboration and has little evidence to support the hierarchy [44]. To curb biomedical dominance in primary care, foster collaboration, and embrace overlapping roles, changes to funding and governance are needed [51,52].
We argue that for primary care nurses and allied health clinicians to become involved in EOL conversations, their expertise needs to be valued; they need to be equal members of the team who share in decision-making about what care is needed, have professional autonomy, and be able to develop longitudinal trusting relationships with patients. Other research outside of primary care supports our findings. Research has found that to support EOL conversations, nurses and allied health clinicians should be integral members of the team who share responsibility for making decisions about patients' care [53,54]. A lack of shared decision-making disrupts the ability of nurses and allied health clinicians to use their expertise, which often involves a more holistic approach that can be helpful during EOL conversations [8,[53][54][55]. Our analysis is also consistent with research that suggests nurses and allied health clinicians need to form trusting and ongoing clinician-patient relationships to facilitate person-centred EOL conversations [8,53]. This was not possible at our study site because the organization of work inhibited participation in longitudinal care. Without sufficient knowledge of patients, clinicians are less equipped to engage in EOL conversations because they lack an understanding of patients' illness history and trajectory, and needs around EOL [27,55]. Not all patients require an interprofessional approach, but complex older patients do. We recommend primary care teams clarify nurses' and allied health clinicians' roles in EOL conversations and ensure teams are aware of, and supportive of, these roles to facilitate involvement in this practice and potentially increase the quality of care for these patients [27,54,55]. However, we also argue that without addressing the influence of neoliberal-biomedical logics on the organization and delivery of care, nurses and allied health clinicians will likely continue to be excluded from this practice.
Most studies examining interprofessional collaboration have done so at the micro, interpersonal level, ignoring the structural-level characteristics that impact collaboration [46,51]. Our findings underscore the importance of analysing structural characteristics as well as dominant ideologies and their influence on collaboration. While there may be strategies at the practice level to support team collaboration in primary care, such as regular team meetings, sharing responsibility for patient care, role clarity, and non-hierarchical team building [37,56], we believe these strategies are less impactful if governing logics are left unexamined and unchanged. Bourgeault and Mulvale [51] suggest the "caring" work of non-medical primary care clinicians, such as nurses, social workers, and occupational therapists, who promote well-being, fulfilling occupations, and coping, among other things, is less valued because its outcomes are more challenging to quantify than biomedical work [18,21,44]. We found this marginalization and devaluing of the work of nurses and allied health clinicians in our study in the way the clinic was organized to allow physicians and nurse practitioners to control who provides care and how, which resulted in nurses and allied health clinicians having limited involvement in patient care, including in EOL conversations. These findings point to the importance of examining and modifying primary care quality indicators in ways that value the work of all team members and patient-centred care. Another reason biomedical dominance remains entrenched in primary care is funding models shaped by neoliberalism [22,57]. Within neoliberal reforms, funding priorities are often focused on managing chronic illness, reducing cost by reducing hospital admissions, and supporting physician-owned primary care practices [17,18,22,57].
Primary care clinics' funding is often controlled by incorporated businesses governed by a physician board that makes organizational and service delivery decisions [58]. Research from Ontario, where our study took place, has linked funding agreements, such as those of Family Health Teams, to decreased interprofessional collaboration and minimal delegation of tasks to nurses and allied health clinicians [52]. We recommend primary care teams interested in cultivating team collaboration examine the influence of funding models and work to make meaningful changes that support more collaboration for complex patient care, including EOL conversations. In Canada and internationally, research has examined how neoliberal-biomedical logics govern healthcare policy, institutional governance, and direct care in a variety of areas such as EOL care, maternal care, women's health, addiction care, public health, emergency services, and primary care [17,22,[59][60][61]. What our study adds to this scholarship is the way neoliberal-biomedical logics limit collaboration in interprofessional primary care teams, specifically in the area of EOL conversations. This is a novel and important finding because it helps to explain why taking an individual-level approach to addressing the challenge of delayed or avoided EOL conversations, specifically by educating nurses and allied health clinicians about how to facilitate EOL conversations [13,28,62], is unlikely to result in practice change. This is because nurses and other allied health clinicians are embedded in a biomedical culture that prioritizes biomedical effectiveness, biomedical dominance, and efficiency; until and unless these are addressed, individual-level solutions will fall short of achieving real change.

--- Limitations

To protect anonymity, data from nurse practitioners and physicians were grouped together.
While these two types of clinicians have similar roles at the study site, there are differences in the organization of their work that are not captured in our findings. Future research could focus on nurse practitioner-led clinics to further explore this group's experience facilitating EOL conversations. Additionally, we were unable to determine the participants' level of training in EOL care, since clinicians attended schooling at various times in various geographical locations, making it impractical to review the curriculum each participant received in EOL care and interprofessional collaboration. Finally, our study site was an urban, academic, medicare-funded primary care team located within a health science centre. While not generalizable, our detailed description of the setting, participants, and interactions supports transferability to other contexts.

--- Conclusion

Our findings suggest primary care nurses, a social worker, an occupational therapist, and a physiotherapist have the knowledge, skills, and inclination to engage frail older adults in EOL conversations. However, they are constrained in their ability to do this by specific patterns of relating that are shaped by neoliberal-biomedical logics operating at the structural, organizational, and practice levels. Our study highlights the way these governing logics restrict interprofessional collaboration in primary care by shaping the distribution of tasks and roles in such a way that limits nurses' and allied health clinicians' engagement in EOL conversations. It is our hope that this study inspires future practice change research to improve interprofessional collaboration and EOL conversations by reimagining funding models and performance indicators in primary care that fully support meaningful interprofessional person-centred care for complex frail patients.
--- Availability of data and materials

The datasets generated during the current study are not publicly available due to issues of privacy but are available from the corresponding author on reasonable request.

--- Abbreviation

EOL: End-of-life

--- Supplementary Information

The online version contains supplementary material available at https://doi.org/10.1186/s12875-023-02171-w. Additional file 1: Primary Care Performance Measures in Ontario.

--- Authors' contributions

CC: conceptualized the study, developed the methodology, acquired funding, collected and analyzed data, and was the lead author on the manuscript. SM: supported the methodological development, analysis, and writing of the manuscript. RU: supported the methodological development, analysis, and writing of the manuscript. PK: supported the conceptualization of the study, developed the methodology, supported the acquisition of funding, and supported data collection and analysis and the writing of the manuscript.

--- Authors' information

This research study is one part of Dr. Carter's doctoral dissertation.

--- Declarations

Ethics approval and consent to participate: Ethics approval was granted by the University of Toronto's Research Ethics Board (REB #00037350) as well as the institutional Research Ethics Board of the study site (REB #18-5831). The study site has not been named to protect the anonymity of participants. All participants involved in the study provided informed written consent prior to their engagement in the research process, and assent was obtained during each research encounter. All methods were carried out in accordance with relevant guidelines and regulations in the Declaration of Helsinki.

Consent for publication: Not applicable.

Competing interests: The authors declare no competing interests.

--- Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Context: Interprofessional collaboration is recommended in caring for frail older adults in primary care, yet little is known about how interprofessional teams approach end-of-life (EOL) conversations with these patients.

Objective: To understand the factors shaping nurses' and allied health clinicians' involvement, or lack of involvement, in EOL conversations in the primary care of frail older adults.

Methods/setting: A critical ethnography of a large interprofessional urban Family Health Team in Ontario, Canada. Data production included observations of clinicians in their day-to-day activities excluding direct patient care; one-to-one semi-structured interviews with clinicians; and document review. Analysis involved coding data using an interprofessional collaboration framework as well as an analysis of the normative logics influencing practice.

Participants: Interprofessional clinicians (n = 20) who cared for mildly to severely frail patients (Clinical Frailty Scale) at the Family Health Team.

Findings suggest primary care nurses and allied health clinicians have the knowledge, skills, and inclination to engage frail older adults in EOL conversations. However, the culture of the clinic prioritizes biomedical care and normalizes nurses and allied health clinicians providing episodic task-based care, which limits the possibility of these clinicians' engagement in EOL conversations. The barriers to nurses' and allied health clinicians' involvement in EOL conversations are rooted in neoliberal-biomedical ideologies that shape the way primary care is governed and practiced. Our findings help to explain why taking an individual-level approach to addressing the challenge of delayed or avoided EOL conversations is unlikely to result in practice change. Instead, primary care teams can work to critique and redevelop quality indicators and funding models in ways that promote meaningful interprofessional practice that recognizes the expertise of nursing and allied health clinicians in providing high-quality primary care to frail older patients, including EOL conversations.
Introduction

For some decades, ageing in Europe has caused serious political concerns about the well-being and social participation of older citizens, as well as about care for the (frail) old and the personnel to care for them. These concerns increased due to the economic crisis of 2008 and are related: health and the need for (long-term) care affect well-being and social participation, and vice versa [1][2][3][4]. Therefore, in most European countries policy measures were proposed and taken to mitigate the consequences of the economic crisis and ageing [5,6]. Measures included new pension regulations and changing arrangements for long-term care and for access to social services, but the effects of such measures on society and the lives of citizens are unknown. This study explores which societal measures, related to the economic crisis and ageing, may have affected the life satisfaction of older citizens. Understanding the effect of such measures on the life satisfaction of older citizens could inform policymakers about future measures to improve the lives of older citizens [7]. Research on quality of life and life satisfaction focuses mainly on individual determinants such as age, income, marital status, health status, physical limitations, social contacts, and social participation of citizens, determinants that mostly show a statistically significant relationship with life satisfaction [8][9][10][11][12]. Besides individual characteristics, the context in which people live should be taken into account, because life satisfaction is strongly context-related and also depends on social comparison [13,14]. Therefore, some researchers argue for the use of vignettes to assess life satisfaction, to correct for so-called differential item functioning, a bias in self-reports caused by differences in personal and sociocultural context [15]. Although a useful method, it does not take into account societal changes over time, including cohort effects, that affect life satisfaction [16,17].
Societal changes may have a wide effect, influencing the lives of many people in various countries (e.g., World War II, mass migration, or an economic crisis), while others affect people living in specific regions or countries (e.g., an earthquake or national legislation). Comparative studies between countries have shown the influence of specific factors, such as national income, age composition, life expectancy, and welfare provisions, on life satisfaction at the societal, national level [18][19][20][21]. Such studies may show "how data on well-being can help policymakers identify the groups and countries that are bearing the brunt of the economic crisis, as well as those that are holding out better than expected, and provides a new layer of evidence to aid policy decisions." [20]. Comparative research on life satisfaction worldwide shows a U-shape between life satisfaction and age, suggesting a major influence of ageing itself. Young age groups show relatively high life satisfaction, which decreases in middle-aged (40-65) groups but increases again in older (65 and over) groups [12,22,23]. This U-shape is found to be rather consistent [24], but it does not apply to all countries [3,25,26], indicating that specific events or societal changes may affect it in different ways. This may also apply to the economic crisis that started in 2008. Our premise is therefore that this economic crisis has affected the U-shape relationship between age and life satisfaction in European countries. The first research question is: has this U-shape changed and, if so, in which direction and in which European countries? The second research question is: which societal conditions, changed by measures taken in response to the economic crisis of 2008, are related to the life satisfaction of older citizens in Europe? We focus on older citizens because these citizens may be more vulnerable to such societal changes and are therefore an important "target group" for policy measures [3].
To answer the second question, one has to look at differences in changes between countries [20]. Analyses of life satisfaction at the national/country level use standardised national data (e.g., Gross Domestic Product (GDP), age dependency ratio, or life expectancy) and aggregated individual data (e.g., percentage of persons with long-standing illnesses or % of unmet health care needs). A comparative study of 27 European countries, describing changes in "a full range of subjective well-being" between 2007 and 2011 among all adult citizens, shows that GDP and the percentage of people with disabilities are related to well-being. However, subjective well-being over time increased only marginally, and not in all countries, indicating that national policy or culture may make a difference [20]. Although the economic crisis of 2008 was expected to show some effect on life satisfaction, one may argue that this time frame was too short to see such effects. A recently published study analyses changes in life satisfaction in 24 European countries between 2002 and 2012, also taking into account recent, mainly income-related societal changes, including the economic crisis of 2008 [21]. Income-related indicators affect life satisfaction most, but their effects are not uniform across European countries. It was stated, however, that "economic crises tend to be followed by crises in happiness." Our study focuses on more detailed changes, such as changes in life expectancy, pensions, health status, and quality of care, that is, especially on changes important for older people. --- Methods To "see" the effects of policy measures related to the 2008 crisis on life satisfaction, one has to wait until such measures are implemented in practice and experienced by citizens. Therefore, we use the time frame between 2006 and 2014, assessing indicators two years before the crisis began and two years after the measures taken were fully implemented.
Based on the studies mentioned above, we selected societal indicators on demography, welfare, and health. The research design combines changes in societal indicators over time (2006-2014) with a comparison between countries to analyse which changes in national indicators affect the life satisfaction of older citizens (65 years and over) in 2013. The data are based either on representative samples in each country (such as the data on life satisfaction or subjective health) or on official national statistics collected by Eurostat or the OECD. Life satisfaction is assessed as "the degree to which an individual judges the overall quality of his life," scored on a 10-point scale between "not satisfied at all" and "fully satisfied," using data from the EU-SILC AHM 2013 study [12]. To answer the first research question, the life satisfaction of different age groups in 31 European countries is compared. To answer the second research question we selected the following indicators of societal conditions as independent variables. Demographic indicators: old dependency ratio (65+ to population 15-64 years) [27] and life expectancy at birth [28]. Welfare indicators: % of GDP for social protection [29], % of GDP for long-term care [30], and the aggregate replacement ratio, an indication of the gross pension of 65-74-year-olds compared to the gross earnings of 50-59-year-olds [31]. Health indicators: people with very good subjective health (citizens 16 years or older) [32], % of long-standing illnesses [33], and % of self-reported unmet needs in health care [34]. SPSS 23 is used for data storage and analysis. --- Analysis. First, we present the average life satisfaction of adult citizens in 31 European countries in 2007 and 2013. Next, the relationship between age and life satisfaction in the 31 European countries in 2013 is described; that is, is a U-shape present?
The relationship between mean life satisfaction and age in 2014 is described for the following age categories: 16-24, 25-34, 35-49, 50-64, 65-74, and 75 and over. We arranged the countries in four figures. Next, the existence of statistically significant differences in mean life satisfaction of older citizens as compared to adult citizens is tested for 2007 and 2013. A paired-samples test (analysis of variance) is used to test the difference in life satisfaction between adult citizens under 65 years and citizens 65 years and over. Before answering the second question, "do changes in demographic, welfare, and health indicators between 2006 and 2014 affect life satisfaction in older citizens in 2013?", we present the data on the eight independent indicators (see Tables 2-4). Next, bivariate Pearson correlations between the differences in mean life satisfaction of citizens aged 16-65 versus 65 and older and the independent indicators in 2014 are described. The mean differences of the eight indicators between 2006 and 2014 are calculated per country and tested for statistical significance. Significant differences are described. In the last analysis step, the influence of these mean differences (2006-2014) in indicators on the life satisfaction of older citizens in 2013 is assessed, using linear regression analysis (enter method) with collinearity tested (VIF). --- Results The mean life satisfaction scores for all citizens (over 16-18 years) in 2007 and 2013 in the 31 countries are about the same, 7.0 and 7.1, respectively (see Table 1). Overall, the tendency is that life satisfaction decreased in western European countries and increased in central-eastern European countries. However, some considerable differences exist between countries. Mean life satisfaction decreased by at least 0.5 points in Cyprus, Denmark, Estonia, and Malta but increased by at least 0.5 points in Austria, Hungary, Latvia, and Romania.
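The analysis pipeline described in the Methods (a paired comparison of country-level means for the two age groups, bivariate Pearson correlations with an indicator, and linear regression with a VIF collinearity check) can be sketched as follows. The data here are entirely synthetic and illustrative; none of the values, effect sizes, or indicator names correspond to the study's actual data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical country-level data (31 "countries"): mean life satisfaction of
# citizens under 65 and of citizens 65+, with older citizens scoring lower.
n = 31
ls_under65 = rng.normal(7.2, 0.6, n)
ls_65plus = ls_under65 - rng.normal(0.3, 0.4, n)

# Step 1: paired test -- do the two age groups' country means differ?
t, p = stats.ttest_rel(ls_under65, ls_65plus)
print(f"paired t = {t:.2f}, p = {p:.4f}")

# Step 2: bivariate Pearson correlation between one indicator (here a toy
# "% of GDP on social protection") and the age-group difference.
gdp_social = rng.normal(25, 5, n)
diff = ls_65plus - ls_under65
r, p_r = stats.pearsonr(gdp_social, diff)

# Step 3: linear regression (enter method) of 65+ life satisfaction on two
# toy indicator changes, with VIF as the collinearity check.
X = np.column_stack([gdp_social, rng.normal(2, 1, n)])
X1 = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(X1, ls_65plus, rcond=None)

def vif(X, j):
    """Variance inflation factor: regress column j on the other columns."""
    others = np.delete(X, j, axis=1)
    A = np.column_stack([np.ones(len(X)), others])
    coef, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
    resid = X[:, j] - A @ coef
    r2 = 1 - resid.var() / X[:, j].var()
    return 1 / (1 - r2)

vifs = [vif(X, j) for j in range(X.shape[1])]
```

A VIF near 1 indicates that an indicator is not collinear with the others; values well above 5-10 would warrant dropping or combining indicators before the enter-method regression.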
The relationship between age categories and mean life satisfaction per country shows that a "U-shape" does not dominate in 2013. The majority of countries (19 out of 31) show a declining line in life satisfaction from young citizens (16-24 years) to old citizens (75 years and over) (see Figures 1 and 2). Twelve countries show more or less a U-shape (see Figures 3 and 4). The declining gradient between age and life satisfaction is most notable in Bulgaria, Croatia, Greece, Latvia, Portugal, and Romania. In Denmark, Switzerland, Sweden, Norway, and Ireland citizens in the age group 65-74 score highest in life satisfaction, with the score declining in the 75-and-over age group, with the exception of Switzerland (see Figure 4). An increase in life satisfaction at 75 years and over is found only in Iceland (see Figure 3). No statistically significant difference in mean life satisfaction is found between the two age groups (18-64 years versus 65 years and over) in 2007. The mean score on life satisfaction in 2007 is 7.0 for both age groups. The mean life satisfaction for citizens under 65 is 7.2 in 2013 and for citizens of 65 years and over 6.9. Analysis of variance of mean life satisfaction scores between adult citizens aged 16 to 65 and those 65 years and over in 2013 shows a statistically significant difference (p = .003); that is, for older citizens mean life satisfaction is significantly lower in 2013. In 2013, mean life satisfaction is strongly decreased among older citizens in Romania, Bulgaria, Croatia, Greece, Lithuania, Portugal, Slovenia, and Slovakia as compared to younger citizens. An increase of mean life satisfaction in older citizens as compared to younger ones is rare but occurs in Denmark and Ireland. The difference in mean life satisfaction between 2007 and 2013 is due to lower life satisfaction among older citizens in 2013.
Before answering the second research question we present the mean or proportional scores for each indicator of societal change (demographic, welfare, and health) per country (see Tables 2-4). The demographic indicators show an increase in the old-age dependency ratio in all countries (except Luxembourg) as well as in life expectancy at birth between 2006 and 2014. The welfare indicator "% of GDP for social protection" increased in all but two countries (Hungary and Poland). This increase was relatively strong in Cyprus, Denmark, Finland, Greece, Ireland, the Netherlands, and Spain. The welfare indicator "% of GDP for long-term care expenditure" stayed on average the same in 2006 compared to 2014. The only decrease was in Romania; a strong increase occurred in Finland and Norway. The "aggregate replacement ratio" increased slightly between 2006 and 2014 in most European countries, but not in Austria, Estonia, Germany, Italy, and Sweden. A quarter of citizens in the 31 European countries reported very good health in 2006 and in 2014. On average there is a slight decrease between 2006 and 2014. The score of this health indicator varies strongly between countries, with low scores (<10%) in Estonia, Latvia, Lithuania, and Portugal and high scores (>40%) in Cyprus, Greece, Iceland, and Ireland. A strong decrease in subjective health is reported in Denmark and Finland. The average proportion of long-standing illness/health problems stayed about the same between 2006 and 2014 in the 31 countries, as did the proportion of self-reported unmet needs because care was too expensive. Long-standing illness/health problems were more frequently reported between 2006 and 2014 in Austria, Cyprus, Greece, Malta, and Portugal and less frequently reported in Bulgaria and Luxembourg. Self-reported unmet needs between 2006 and 2014 increased strongly in Greece, Ireland, Iceland, and Italy and decreased strongly in Bulgaria, Germany, Lithuania, Poland, and Romania.
The relationship between the mean scores of the eight societal indicators in 2014 and the difference in life satisfaction between the two age groups (16-64 versus 65 and over) in 2013 is explored with Pearson's correlations (see Table 5). Older citizens (65+) with higher life satisfaction as compared to younger ones (16-64) live in countries which have a high life expectancy at birth, which spent a high percentage of their GDP on social protection and long-term care in 2014, and which have a high percentage of citizens in very good health. Next, it is analysed which societal indicators changed significantly between 2006 and 2014. The following indicators show significant mean changes: old dependency ratio (mean 3.26; p = .00), life expectancy at birth (mean 2.23; p = .00), % of GDP for social protection (mean 3.03; p = .00), and % of long-standing illnesses (mean 1.97; p = .05). In the 31 European countries, the old dependency ratio increased on average by 3 points between 2006 and 2014 (with the highest increase in the Czech Republic, Finland, Malta, Denmark, Sweden, and the Netherlands), life expectancy by over 2 years on average (with the highest increase in Estonia, Latvia, Lithuania, and the Slovak Republic), the percentage of GDP spent on social protection by 3% (with the highest increase in Greece, Spain, Finland, Ireland, Cyprus, Denmark, and the Netherlands), and the percentage of long-standing illnesses by almost 2% (with the highest increase in Austria, Estonia, and Portugal). No significant mean differences between 2006 and 2014 are found for the % of GDP spent on long-term care, the aggregate replacement ratio, the % of very good subjective health, and the % of unmet needs in health care.
Regression analysis, with life satisfaction of older citizens in 2013 as the dependent variable and the mean differences in the eight indicators (2006-2014) as independent variables, shows that four indicators contribute statistically significantly to explaining the level of life satisfaction in older citizens, together explaining 38% of the variance (see Table 6). Low life satisfaction of older citizens (65 years and over) in 2013 occurs in countries where life expectancy decreased, as did financial means for social protection and long-term care. In countries where the percentage of unmet needs in health care increased between 2006 and 2014, older citizens show low life satisfaction in 2013. --- Discussion Life satisfaction of older citizens in Europe is significantly decreased in 2013 as compared to younger age groups. Such a difference has not been found before, that is, in 2007 or 2003 [20,35]. It is interesting to note the overall tendency that life satisfaction decreased in western European countries and increased in central-eastern European countries, but it seems that in the latter countries younger age groups are more satisfied than older ones [1]. Therefore, it is important to look at the dynamics of these conditions at the aggregate level to investigate the influence of policy measures on life satisfaction, as stated in the third quality of life survey in Europe [20]. Our study shows that significant changes in societal indicators, related to the ageing of the population in combination with the economic crisis, occurred in European countries between 2006 and 2014. Relatively lower investment in social protection and long-term care negatively affected the life satisfaction of older citizens, as did a decline in quality of care (a low increase in life expectancy and an increase in unmet needs).
The second quality of life survey in Europe showed that material deprivation and health status were the most important influences on life satisfaction at the individual level [35]. Our outcomes suggest that policy measures taken at the national level also directly affect the life satisfaction of the most vulnerable citizens, such as older citizens. However, this applies especially in countries which already lag behind in economics and welfare. Many central-eastern European countries trail in social protection and quality of care compared to the more prosperous northwestern European countries with long-standing social welfare provisions. Therefore, in these countries older citizens more frequently show a lower life satisfaction as compared to young and middle-aged citizens. Like all studies, our study has shortcomings. Most evident is that not all theoretically possible indicators of societal change could be included, because of the limited indicators in international databases. The same goes for the time period, which is partly determined by the data available for specific years. Nevertheless, we have argued that the chosen years 2006 and 2014 are adequate. In 2006 a financial crisis was not yet discussed or visible. In 2014 the policy measures were implemented and people were confronted with their consequences, especially older citizens, because of interventions in welfare and care facilities. A strong point of the study is that the collected data are comparative, not only over time but also in the method of data collection and in the validity of the measurements. For international comparative research, data from Eurostat or the OECD are reliable, valid, and (mostly) freely available. Most studies on life satisfaction and ageing use individual indicators as explanatory factors to understand variance in life satisfaction [20,35].
An innovative aspect of our study is the use of aggregate indicators at the national level, covering 31 European countries, to understand changes in life satisfaction among older citizens. Based on this study we conclude that the life satisfaction of older citizens deteriorated in relation to policy measures taken because of the economic crisis of 2008 and the ageing of the population in Europe. These measures changed various societal conditions negatively. Nevertheless, some societal indicators show that social conditions clearly improved in some countries while they worsened in others. For example, the percentage of reported unmet needs decreased significantly in Bulgaria, Lithuania, Romania, Poland, and Estonia between 2006 and 2014, but it is not certain that older citizens profited most. In Ireland, Greece, Italy, Iceland, and Belgium unmet needs in health care were reported more often. Based on our analysis, we believe that the still rudimentary structure of health and welfare provisions in various central-eastern European countries was too vulnerable to cope with the imposed policy measures, rather than attitudes or belief systems being the cause [18]. At the same time, it should be stated that knowledge and understanding of how societal processes and policy measures affect the quality of life of citizens are limited. Theoretical development is still poor, especially concerning the interaction between policy measures, societal changes (including the ageing of societies), and individual preferences and behaviour. International comparative research, based on sound theoretical concepts, is strongly needed. --- Conflicts of Interest The authors declare that there are no conflicts of interest.
Objectives. The ageing of societies causes serious political concern about the well-being of older citizens and care for the (frail) old. These concerns increased with the economic crisis of 2008. In European countries, policy measures were taken to deal with the consequences of this crisis. This study explores the possible effects of these measures on the life satisfaction of older citizens. Methods. Life satisfaction was assessed through international surveys in 2007 and 2013, and changes in societal conditions, using eight indicators on demography, welfare, and health, were assessed in 31 European countries in 2006 and in 2014. Data are standardised and based on official national surveys and statistics. Results. The formerly found U-shape relationship between age and life satisfaction disappeared after the crisis. Negative changes in social protection and care arrangements, made after the economic crisis, are related to low life satisfaction in older citizens. Conclusions. Various societal conditions deteriorated in 2014 as compared to 2006. Policy measures taken due to the 2008 economic crisis changed societal conditions and affected the life satisfaction of older citizens negatively. In countries with a rudimentary structure of health and welfare provisions, older citizens could not cope with the imposed policy measures.
Background Socioeconomic position (SEP) throughout life is usually inversely associated with morbidity and mortality from cardiovascular disease, although the underlying biological pathway is not entirely clear [1,2]. Cardiovascular disease has been associated with higher levels of inflammatory molecules, perhaps as a consequence of exposure to pathogenic organisms [3], although it is unclear whether pathogen burden mediates SEP differences in cardiovascular risk [3,4]. Poor early life conditions are usually associated with higher levels of inflammatory markers [5][6][7][8][9][10][11] and poorer adult immune function [12,13]. These associations are less clear, however, amongst men from middle-income countries [10]. Furthermore, little is known about the association of SEP across the life course with immune function. The duration or number of exposures across the life course may be most important (the accumulation hypothesis) [14]. Alternatively, the timing of exposure to poor socioeconomic conditions may be crucial, whether as a number of sensitive periods or as a single critical period (the critical period hypothesis). It is also possible that either inter- or intra-generational social mobility plays a part. Developmental trade-offs between growth, maintenance, and reproduction may occur when there are competing demands for energy resources between biological systems [13,15,16], potentially at the expense of immune function in resource-poor environments. Alternatively, intergenerationally and environmentally driven up-regulation of the gonadotropic axis with economic development may obscure some of the normally protective effects of social advantage in the first few generations of men to experience better living conditions [17,18], thus generating epidemiologically stage-specific associations between SEP and immune-related functions, such as pro-inflammatory states, among men [18,19].
The rapidly developing mega-cities of China may provide a sentinel for the changes in non-communicable diseases expected with economic development and inform effective interventions to reduce the disease burden. In a large sample of older residents of Guangzhou, one of the most developed mega-cities in southern China, we assessed the association of SEP at four life stages with proxies of inflammation (total white blood cell, granulocyte, and lymphocyte counts) and compared models representing the accumulation, sensitive periods, and critical period hypotheses. Additionally, we hypothesise that 1) higher life course SEP is protective for adult inflammation, and 2) the normal protective effect of social advantage is obscured in men experiencing rapid socioeconomic development. --- Methods --- Sources of data The Guangzhou Biobank Cohort Study is a collaboration between the Guangzhou No. 12 Hospital (Guangzhou, China) and the universities of Hong Kong (Hong Kong, China) and Birmingham (Birmingham, United Kingdom). The study has been described previously in detail [20]. Participants were drawn from the Guangzhou Health and Happiness Association for the Respectable Elders (GHHARE), a community social and welfare association unofficially aligned with the municipal government, where membership is open to anyone aged 50 years or older for a nominal monthly fee of 4 yuan (US $0.50). Approximately 7 percent of permanent Guangzhou residents aged 50 years or more are members of the GHHARE. Eleven percent of the members were included in this study: those who were capable of consenting, were ambulatory, and were not receiving treatments which, if discontinued, might have resulted in immediate, life-threatening risk, such as chemotherapy, radiotherapy, or dialysis. Those with less serious chronic illnesses or with acute illnesses were not specifically excluded from the study, though they may have been less likely to attend.
Participants were recruited in three phases and this study includes participants recruited in phase 3 only (recruited between 2006 and 2008), because only phase 3 has detailed information on childhood socioeconomic position and inflammatory markers. Participants underwent a detailed half-day medical interview, as well as a physical examination with fasting blood being sampled. Quantitative haematological analysis was performed using a SYSMEX KX-21 haematology analyser. The Guangzhou Medical Ethics Committee of the Chinese Medical Association approved the study and all participants gave written, informed, consent prior to participation. --- Socioeconomic position across the life course We used indicators of SEP at four life stages: childhood, early adult, late adult and current SEP. Childhood SEP was measured by an index of notable parental possessions that were appropriate to China in the mid-20th century, based on sociologic accounts of life in southern China at that time [17]. The items selected were a watch, a sewing machine, and a bicycle. These items were categorized, as previously, as none or at least one [21]. As in other similar studies, we used education and longest-held occupation as proxies for early and late adult SEP [22]. Early adult SEP was assessed from the highest level of education (primary or less versus secondary or more). Occupation was categorised as manual (agricultural work, factory work, or sales and service) or non-manual (administrative/managerial, professional/technical, or military/police). Current SEP was assessed from household income per head. Household income was recorded in six categories (<5,000 Yuan, 5,000-9,999 Yuan, 10,000-19,999 Yuan, 20,000-29,999 Yuan, 30,000-49,999 Yuan and ≥50,000 Yuan). Household income per head was estimated using the mid-point of each income category and assuming that those in the highest category had an annual income of 75,000 Yuan.
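The income coding described above (mid-point of each bracket, an assumed 75,000 Yuan for the open-ended top bracket, division by household size, then a median split) can be sketched as follows. The example households are hypothetical and for illustration only.

```python
# Mid-points of the six income brackets (Yuan/year); the top, open-ended
# bracket is assigned 75,000 Yuan as in the text.
midpoints = {
    "<5,000": 2_500, "5,000-9,999": 7_500, "10,000-19,999": 15_000,
    "20,000-29,999": 25_000, "30,000-49,999": 40_000, ">=50,000": 75_000,
}

# Hypothetical (bracket, household size) records.
households = [("10,000-19,999", 3), ("<5,000", 2), (">=50,000", 4), ("20,000-29,999", 2)]

# Income per head, then the median as the low/high SEP cut-off.
per_head = sorted(midpoints[cat] / size for cat, size in households)
median = (per_head[1] + per_head[2]) / 2  # median of an even-length list
high_sep = [v >= median for v in per_head]
```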
The median household income per head was used as the cut-off point between low and high SEP. --- Outcome measures The primary outcome was total white blood cell count, used, as in other studies, as a marker of a pro-inflammatory state [5] and of a less well functioning immune system. As we do not have a detailed breakdown of different white blood cell types, such as macrophages, we also considered granulocyte and lymphocyte counts as outcomes, because these immune cell sub-populations largely relate to innate and adaptive immunity, respectively. They have previously been used as markers of inflammation [23,24]. Other measures of inflammation (e.g. C-reactive protein) were not available. --- Statistical analysis Multivariable linear regression was used to assess the adjusted associations of SEP with the outcomes. Following Mishra et al. [25], we determined the most parsimonious representation of life course SEP by comparing models for three different life course hypotheses (the accumulation, sensitive periods, and critical period hypotheses) to a 'fully saturated' model representing all possible life course SEP trajectories. As in previous work [26], the accumulation hypothesis was represented by a model counting the number of life stages with high socioeconomic position, and the sensitive periods hypothesis by a single model in which all four measures of SEP were entered together, so that each was adjusted for the other three. The critical period hypothesis was represented by models in which only one SEP exposure (the critical period) was included [25]. We used the Akaike Information Criterion (AIC) to compare models [27]; a smaller AIC indicates a better model. We examined whether the outcomes had different associations with SEP by sex or age, based on the heterogeneity across subgroups and the significance of an interaction term obtained from a model including all interaction terms with age or sex.
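The model comparison described above (accumulation vs. sensitive periods vs. single-stage critical period models, scored by AIC) can be sketched on synthetic data. The data-generating assumption here (an effect concentrated in the early adult stage) is illustrative only and not the study's finding; the fully saturated trajectory model is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Hypothetical data: binary SEP (0 = low, 1 = high) at four life stages
# (childhood, early adult, late adult, current) and a white-blood-cell-like
# outcome that responds mostly to early adult SEP.
sep = rng.integers(0, 2, size=(n, 4))
wbc = 7.0 - 0.4 * sep[:, 1] - 0.1 * sep[:, 0] + rng.normal(0, 1, n)

def aic(X, y):
    """AIC of an OLS fit: n*ln(RSS/n) + 2k, with k counting the intercept."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    rss = np.sum((y - X1 @ beta) ** 2)
    return len(y) * np.log(rss / len(y)) + 2 * X1.shape[1]

models = {
    "accumulation": sep.sum(axis=1, keepdims=True),  # number of high-SEP stages
    "sensitive periods": sep,                        # all four stages jointly
    "critical: childhood": sep[:, [0]],
    "critical: early adult": sep[:, [1]],
    "critical: late adult": sep[:, [2]],
    "critical: current": sep[:, [3]],
}
scores = {name: aic(X, wbc) for name, X in models.items()}
best = min(scores, key=scores.get)  # smallest AIC = most parsimonious fit
```

With an effect planted only in the early adult stage, models containing that stage should score markedly better than single-stage models for the other stages, mirroring how the comparison discriminates between the hypotheses.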
All models were adjusted for age (in 5-year age groups) and sex. A second set of models was additionally adjusted for lifestyle factors (smoking, alcohol use, and physical activity, categorized as in Table 1) as potential mediators, and a third set of models was additionally adjusted for body mass index (BMI) as a potential mediator. Proxies of SEP were unavailable or unclassifiable for 28.7% of the participants, mainly because information on household income or the longest-held occupation was missing. Alcohol use or smoking status was not available for 2% of participants. We used multiple imputation for missing data [28,29]. Socioeconomic position at any stage, alcohol use, and smoking were predicted based on a flexible additive regression model with predictive mean matching, incorporating age, sex, leg length, seated height, alcohol use, smoking status, physical activity, and SEP at the other three stages [28]. We imputed missing values 10 times and analysed each completed dataset separately, then summarized the estimates with confidence intervals adjusted for missing-data uncertainty [30]. As a sensitivity analysis, a complete case analysis without imputation was performed. We used STATA version 10.0 (STATA Corp., College Station, TX) and R version 2.12.2 for analysis, imputation, and model estimation. --- Results Of the 10,088 phase 3 participants examined, 1.1% (n = 107) had missing data for total white blood cell, granulocyte, or lymphocyte counts. Analysis was based on the remaining 9,981 participants. There were more women (n = 7,445) than men (n = 2,536), and the women were younger [mean age 59.2 years (S.D. 7.6)] than the men [mean age 63.1 years (S.D. 7.6)]. Overall, the mean white blood cell and granulocyte counts were lower in women than in men (Table 1). The associations of SEP with white blood cell, granulocyte, or lymphocyte counts did not vary with age (data not shown).
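The predictive-mean-matching step of the imputation described above can be sketched as follows. This is a minimal single-predictor illustration of the idea, not the flexible additive regression model the authors used; the data are synthetic, and full Rubin's-rules pooling of variances is only noted in a comment.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500

# Hypothetical data: a fully observed predictor (e.g. years of education) and
# an income proxy missing for roughly 30% of participants.
educ = rng.normal(9, 3, n)
income = 2.0 * educ + rng.normal(0, 4, n)
missing = rng.random(n) < 0.3
income_obs = income.copy()
income_obs[missing] = np.nan

def pmm_impute(y, X, missing, rng, k=5):
    """One predictive-mean-matching draw: regress y on X among observed cases,
    predict for everyone, and for each missing case donate the observed y of
    one of the k nearest neighbours in predicted value."""
    X1 = np.column_stack([np.ones(len(X)), X])
    obs = ~missing
    beta, *_ = np.linalg.lstsq(X1[obs], y[obs], rcond=None)
    pred = X1 @ beta
    filled = y.copy()
    obs_idx = np.flatnonzero(obs)
    for i in np.flatnonzero(missing):
        nearest = obs_idx[np.argsort(np.abs(pred[obs_idx] - pred[i]))[:k]]
        filled[i] = y[rng.choice(nearest)]  # donate an observed value
    return filled

# "Impute 10 times and analyse each completed dataset separately":
estimates = []
for _ in range(10):
    completed = pmm_impute(income_obs, educ, missing, rng)
    estimates.append(completed.mean())
pooled = np.mean(estimates)  # Rubin's rules would also pool the variances
```

Because each missing value receives an actually observed donor value rather than a model prediction, predictive mean matching preserves the distribution of the imputed variable, which is why it suits skewed proxies such as income.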
However, associations of SEP with lymphocyte count varied by sex, so only sex-stratified results are presented for this outcome. For white blood cell count and granulocyte count, the sensitive periods model performed better than the fully saturated, accumulation, or critical period models (Table 2). The sensitive periods model shows that some life stages had stronger negative associations than others with white blood cell count and granulocyte count; the early adult stage had the strongest association for both outcomes. The pattern for lymphocyte count was somewhat different, and associations varied by sex. Table 3 shows that for both sexes the accumulation and sensitive periods models did not perform as well as the critical period models. The early adult life stage was a critical period for women, with a negative association between SEP and lymphocyte count. By contrast, for men, all estimates of the association between SEP and lymphocyte count were positive, although all confidence intervals included zero. Additional adjustment for lifestyle factors (smoking, alcohol use, and physical exercise) attenuated the estimates slightly (see Appendix), but the pattern of associations generally remained the same. Smoking among men is associated with both low SEP and a higher lymphocyte count; hence, adjustment for smoking strengthened the positive association of SEP with lymphocyte count (Appendix). Further adjustment for BMI (Appendix) produced very similar results; estimates of association were little changed. All results were similar in a complete case analysis (Appendix). --- Discussion Consistent with other studies in developed and developing settings examining the association between SEP and inflammation [5][6][7][8][9][10][11], we found that SEP was negatively associated with adult immune cell numbers, particularly among women.
Consistent with the only other study from a developing country setting, the advantage of higher SEP for adult inflammation was less marked among men [10]. In general, considering SEP at all four life stages performed better than considering individual life stages (critical periods), except for lymphocyte cell counts. This study has a number of strengths. To our knowledge, it is the first study to investigate the role of life course SEP in later adulthood inflammation in a non-western, developing setting. Moreover, we explicitly determined the most parsimonious representation of life course SEP. The large sample size allowed sex-specific analysis. Nevertheless, there are limitations. First, it is a cross-sectional study with recalled SEP, which may be imprecise, although most likely non-differentially so. Second, in a cross-sectional design reverse causality must be considered, although it is unlikely that inflammation has a causal effect on life course SEP. Third, there may have been gender bias in the allocation of resources within families, most likely favouring boys and men, which may have mitigated the disadvantages of low SEP. However, it is unclear why this should have mitigated the effect of SEP for lymphocytes but not for white blood cells and granulocytes. Fourth, our cohort may not be fully representative of the population. However, the prevalence of certain morbidities, such as diabetes, was similar to that in a representative sample of urban Chinese [31]. Fifth, survivor bias is possible, which may have limited participants' socioeconomic and health diversity, biasing results towards the null. If survivorship were an issue, we would have expected differences in associations by age, of which there was no evidence. Sixth, we did not explicitly consider the life course effects of social mobility, since these are particularly hard to define and test clearly. Inter- and intra-generational mobility, upward and downward, are all potential risk factors.
Seventh, a single measurement of white blood cells and differential cell counts may not accurately reflect long-term immune function or inflammation. However, white blood cell count is used as a marker of immune status in clinical settings and is a well-established and routinely used marker of systemic inflammation [32]. White blood cell count is associated with disease risk and predicts disease outcome [33,34]. Eighth, although we report associations between SEP and differential white blood cell counts, their clinical significance remains to be determined. Within the normal range, elevated white blood cell counts are associated with risk factors for chronic diseases, such as cardiovascular disease [32,35]. White blood cell counts can be conceptualised as a mixed marker of exposure and response; even a relatively small shift towards a healthier immune-inflammatory profile might have significant public health benefits at the population level [33,34]. Ninth, acute infection, trauma and underlying chronic disease or medication could be mediators. There is no evidence to suggest that participants were experiencing infection during the assessment process, nor significant trauma. Although only those with life-threatening illness were specifically excluded, those experiencing significant acute infection or trauma were less likely to attend this study, which should have minimized any bias from this source. We also performed descriptive analysis of the data to detect and exclude outliers, which may have resulted from unknown underlying disease, medication, or recording error. One possible explanation for the association of low SEP with inflammation is via current health behaviour linked to inflammation [5,[36][37][38].
Although we did not perform formal tests of mediation, we did adjust for smoking, alcohol consumption, physical activity and BMI in separate models (Appendix). This had little effect among women, but among men it attenuated the negative association of early adult SEP with white blood cell and granulocyte counts and strengthened the positive association of early adult SEP with lymphocyte counts. This suggests that any associations are unlikely to be driven by adult health behaviour in women, though such behaviour may obscure negative associations of early adult SEP with inflammatory markers in men. Low SEP may increase exposure to pro-inflammatory agents, such as microbial pathogens, pollutants or adverse work conditions. Mechanisms for increased exposure or vulnerability to pathogens in low SEP groups include earlier and/or greater lifetime exposure due to adverse living conditions, such as overcrowding, and increased susceptibility to primary infection through nutritional deficiencies or stress-related immune dysfunction [3]. A gender bias may have protected low SEP men from such exposures and adverse work conditions, although it is not clear why the effects should be most obvious for lymphocytes. Lower birth weight among those with low childhood SEP is another possible explanation, but birth weight is not available for our participants. Birth weight is inversely associated with inflammatory markers [6,39]. However, birth weight appears to be less relevant in developing country settings such as ours [40], and there is no reason why birth weight should have sex-specific effects on some white cell sub-types. An alternative explanation is that better early life conditions would be expected to promote development of the adaptive immune system, particularly of the thymus [41], which develops in early life and is sensitive to malnutrition, micro-nutrient deficiencies and infections during growth and development [42][43][44].
Moreover, the same exposure would also allow upregulation of the gonadotropic axis, resulting in sex-specific effects on some immune cell sub-populations [45][46][47], particularly those relating to adaptive immunity. Consistent with this mechanism, we have previously observed similar sex-specific associations in the Guangzhou Biobank Cohort Study, of childhood stress with white cell count [48] and of childhood diet with lymphocytes but not granulocytes [19]. However, we do not have measurements that would allow proof of this mechanism.
--- Conclusions Socioeconomic position was inversely associated with white blood cell differential counts, as a marker of inflammation, with a clearer and more consistent association among women than men. Environmentally and inter-generationally driven changes to the gonadotropic axis may obscure the normally protective effect of social advantage in the first few generations of men, but not women, to experience better living conditions. Given the links between the immune system, inflammation and chronic disease, this suggests a biological mechanism linking SEP to the pathophysiological genesis of chronic disease. Understanding such mechanisms for populations experiencing the epidemiological transition is of public health significance.
--- Competing interests The authors declare that they have no competing interests.
Background: Socioeconomic position (SEP) throughout life is associated with cardiovascular disease, though the mechanisms linking the two are unclear. It is also unclear whether there are critical periods in the life course when exposure to better socioeconomic conditions confers advantages, or whether SEP exposures accumulate across the whole life course. Inflammation may be one mechanism linking SEP with cardiovascular disease. In a large sample of older residents of Guangzhou, in southern China, we examined the association of life course SEP with inflammation. Methods: In baseline data on 9,981 adults (≥50 years old) from the Guangzhou Biobank Cohort Study (2006-08), we used multivariable linear regression and model fit to assess the associations of life course SEP at four stages (childhood, early adult, late adult and current) with white blood cell, granulocyte and lymphocyte counts. Results: A model including SEP at all four life stages best explained the association of life course SEP with white blood cell and granulocyte counts for men and women, with early adult SEP (education) making the largest contribution. A critical period model best explained the association of life course SEP with lymphocyte count, with sex-specific associations: early adult SEP was negatively associated with lymphocytes for women. Conclusions: Low SEP throughout life may negatively impact late adult immune-inflammatory status. However, some aspects of immune-inflammatory status may be sensitive to earlier exposures, with sex-specific associations. The findings are compatible with the hypothesis that, in a developing population, upregulation of the gonadotropic axis with economic development may obscure the normally protective effects of social advantage for men.
Introduction South Africa today remains a nation torn by violence and racial inequity. One of the major challenges for its people is to create new futures across historically constituted racial divides by finding ways to engage with each other across difference. In this regard, multilingualism holds out the promise of offering a way of bridging difference and opening spaces for engagement and empathy with Others. However, our point in this paper is that multilingualism has always been, and remains today, an 'epistemic' site for managing constructed racialized diversity. Contemporary constructs of multilingualism, both in policy and everyday practice, continue to reinforce racialized divisions inherited from historical uses of language as a tool of colonialism and a mechanism of governmentality in apartheid, the system of exploitation and state-sanctioned institutional racism. In order to illustrate this, we trace in section 2 the ways in which constructs of multilingualism are entwined with racialization as a building block of the South African imaginary. In section 3, we focus particularly on present-day constructs and practices of multilingualism that centre decoloniality, social transformation, equitable education and livelihoods, and that encapsulate the dynamics of a society in transformation. In this context, we discuss tensions in racialized multilingualism, as well as the limitations inherent in inherited constructs of multilingualism for new modes of coexistence across racialized differences. We suggest that at the present time there are few opportunities for scoping a more constructive understanding of multilingualism within the prevailing discourses of liberal enlightenment views of language and race. By way of conclusion, we suggest that alternative linguistic orders require a decolonial rethinking of the role of language(s) in epistemic, social and political life.
--- Senses of Multilingualism The current official account of multilingualism in South Africa since the democratic dispensation in 1996 delimits 11 official languages among a population of 56 million. This representation of multilingualism is the democratic state's recognition and repatriation of the indigenous languages that were not accorded official recognition under apartheid but relegated to Bantustans. However, it is one conceptualization of multilingualism among a multitude, as the South African multilingual landscape has been construed and represented variously at different historical moments, as diverse representations and values of languages and their relationships (Woolard, 1998: 3) have emerged out of turbulent moments of social and political change. In particular, it is an attempt to linguistically articulate the image of the 'rainbow nation'. Different multilingualisms reflect the complex socio-politics of colonialism and apartheid, the state-sanctioned and institutionalized system of racial segregation, as well as the country's post-apartheid, democratic dispensation since 1994. Above all, multilingualism has been part of the many attempts of the State and its institutions throughout history to manage racialization, a foundational pillar of its design. Marx (1996: 163) remarks on how the State "emerges as a central actor in race-making, as it is the subject of contestation and responds to various challenges from the society in which it is embedded" and that "racial identities [...] do not quickly fade even if the conditions that reinforced them changed" (p. 207). In South Africa, as the nation-state has engaged with the turbulence of 'change', different notions of race have superseded each other. Rasool remarks on the South African people's "long histories of racialization, of enracement, deracement and retracement" (ms.nd: 1).
Across all of these conjunctures, reorganizations and turbulent shifts of state and race, multilingualism has served as the epistemic space and semiotic articulation of different racialized normative orders. We can distinguish four distinct periods reflected in ideologies of multilingualism that roughly correspond to major shifts in the politics and economy of the country: (1) colonialism, (2) apartheid, (3) the negotiated settlement, and (4) the democratic dispensation. We trace underlying structural-ideological similarities across seemingly different constructs of multilingualism, and attempt to identify the subtext of parallel, emerging ideologies of multilingualism yet to be clearly articulated.
--- Colonialism Colonial understandings of languages and their speakers were an integral part of managing the colonial-imperial encounter. In all essentials, European constructs of language and linguistic diversity were mapped onto the linguistic space of colonized Africa.
© Stroud, Richardson and CMDR. 2021
The historian Patrick Harries notes, with respect to missionary linguistic activity with the language Tsonga in the 'Transvaal' province in the North East, that many of the linguistic givens and truths believed by the Swiss missionaries to be scientifically incontrovertible were, in fact, social constructs whose roots may be traced to nineteenth-century European codes of thought (1995: 162). One such 'truth' was the mapping of languages onto bounded units of organization such as tribes and clans. These were European pre-feudal notions of social organization that allowed the missionaries to categorize and 'efficiently' manage people on terms they themselves were best acquainted with from their own contexts. Similarly, colonizers used European paradigms and models of historical migration and mixture of peoples and their languages to account for what they understood to be the unbridled linguistic hybridity and chaotic diversity of the African linguistic ecology.
The missionaries found ready categorizations of the cultural traits and spirit of their tribes by mapping them onto a Franco-German rivalries model, where for example Zulus were likened to Germans as ferocious but industrious (1995: 163). One consequence of this was the production of an imaginary of shared ancestral languages across tribes, made distinct through separation and warfare, but possible to reclaim through tools of historical reconstruction (cf. also Makoni, 1998; Pennycook and Makoni, 2005). Veronelli (2016) refers to the notion of the coloniality of language as the "coloniality of power in its linguistic form: a process of dehumanization through racialization at the level of communication" (408). Coloniality refers to the patterns of power, control and hegemonic systems of knowledge that continue to determine forms of control and meaning across social orders, even subsequent to colonialism as a social, military or economic order. The other axis of coloniality is modernity, the specific organization of relationships of domination. This coloniality-modernity nexus has undergirded South African policies and practices of racialized multilingualism from colonialism until today.
--- Apartheid Building on earlier institutional and structural conditions2, racial segregation as an all-encompassing design of South African society was formally introduced with the election of the National Party in 1948. Apartheid was about structural and institutionalised racism through the implementation of judicially upheld racially discriminatory policies, for example the Prohibition of Mixed Marriages Act of 1949. From the 1960s to the 1980s, apartheid was best known in its guise of the Group Areas Act, which reserved prime land for whites and forcibly removed other races to peripheral areas.
The apartheid idea of racial purity and national homogeneity found a potent resonance in the politically engineered cultivation of language and multilingualism as racial bordering: a massive investment in distinguishing people and languages following the European nation-state principle of one 'volk', one nation, one language. Because of the aversion of Afrikaners to entertaining a conceptualization of Afrikaans as "the result of a cross between the speech of the early settlers and the prattle of their black slaves" (Barnouw, 1934: 20), language planning of Afrikaans was organized around three principles: (a) diachronic purism, that is, the idea that "Afrikaans is as white and pure as the race" (Valkoff, 1971); (b) albocentrism, the stance that only the versions of the language spoken by whites could be an object of study; and (c) compartimentage, where different varieties of Afrikaans were studied as distinct phenomena, with then contemporary forms of standard Afrikaans seen as a direct and linear descendant of Dutch and subject to systemic change through internal factors alone (Valkoff, 1971). The apartheid emphasis on 'bordering work', and its embrace of the eighteenth-century idea that single languages were constitutive of the nation-state, "justified" the artificial creation of territories for ethnolinguistically defined groups and a "balkanized state" (the so-called homelands or Bantustans) (Heugh, 2016: 236). All previous attempts at so-called harmonization of African languages (Nlapo, 1944, 1945; cf. further references in Heugh, 2016) into a few orthographically unified 'clusters', as a way to counteract the colonially engineered linguistic divisiveness, were quashed by the apartheid formation of separate language committees in 1957.
--- Negotiated settlement The negotiated settlement in the twilight years of the apartheid state had as its overriding goal the construction of a non-racial order. The government-in-waiting, the African National Congress (ANC), embraced non-racialism as a founding principle of the new democracy. In exile, this had translated ideologically into the wide use of English as the language of the liberation movement, perceived as a neutral language and a medium for equality, aspiration and national development (Heugh, 2016). Albert Luthuli, one of the founding leaders of the party, had always been explicitly in favour of English as a language of unification, and had earlier vehemently rejected education in African languages (so-called Bantu education) as a strategic ruse on behalf of the apartheid state to divide and dispossess Africans. In line with this, the National English Language Project (NELP) was formed in 1985 on the initiative of Neville Alexander. The NELP put forward the idea of English as the link language, together with a small number of secondary languages as regional languages. Alexander subsequently also suggested harmonization into two language clusters in order to "unify the nation" (Heugh, 2016).3 Given the lacklustre experiences among newly independent colonies that had chosen the languages of the former colonial metropole, it was inevitable that the NELP's promotion of English would be critically questioned. In 1987, following contributions by Kathleen Heugh in particular, multilingualism in African languages was recognized as an essential condition in the broader struggle for a free, democratic and united South Africa. As a result, the NELP was re-conceptualized in 1987 as the National Language Project (NLP) (cf. Heugh, 2016). In particular, the NLP emphasized the importance of the educational use of African languages for democratic and equitable development and access.
The period prior to the inauguration of a democratic South Africa was one of intense work on sketching the contours of a multilingual language policy for the new State-to-be. The historical landmark conference under the auspices of the NLP on the cusp of democracy (1991, planned in 1987), entitled Democratic Approaches to Language Planning and Standardization, introduced an unprecedented range and complexity of understandings of multilingualism into political debate. Besides reopening discussions around African language harmonization from the 1920s and 40s, the conference put forward notions of multilingualism as "more than the sum of discrete languages and linguistic balkanization", and as a "complex ecology of language practices [...] ranging over grassroots and fluid practices of languages to a more conventional and hierarchical language construct" (Heugh and Stroud, 2019), what Heugh (1996) termed functional multilingualism. During the period of 1992-1995, a resource view of language came to complement the initial discourses on language rights (Language Plan Task Group, 1995: 111). Perhaps most important, although less noted, was the challenge to the exclusivity of the State in language planning, and the emphasis put on the necessary involvement of non-government bodies. Regrettably, very few of these many insights were followed through in the concrete roll-out of the democratic state. In retrospect, it is remarkable that little attention was paid to the racial underpinnings of the linguistic order that the language planners inherited. Witz et al. (2017) note how "the idea of discrete races and ethnic groups was somehow present in the politics of accommodation and reconciliation that gave birth to postapartheid South Africa in 1994, with South Africans framed as a 'rainbow nation' marked by diversity and many cultures".
Rasool (ms, nd) notes how "as much as race was made through structures and systems of rule, it was also produced through articulations and contests within different sections of the broad liberation movement, notwithstanding their avowed antiracism" (ms, p. 1). The idea of non-racialism defaulted to a liberal enlightenment idea of equal treatment of blacks and whites; of recognition, parity of treatment and legislative incorporation into State structures and public spaces. It did not mean the dismantling as such of the idea of race. However, recognition of indigenous languages and their speakers did not equate to the recognition of the deeply racialized colonial subjectivities layered into African languages. Neither did it offer strategic interruption of the historical mechanisms of multilingualism in the continued reproduction of these subjectivities. Multilingualism's role as one more mode of racialization would become apparent in the roll-out of the 'post-racial state'.
--- The democratic dispensation Formal transition to democracy came with the general election of the ANC to government in 1994 and the writing of the Constitution in 1996. The new language policy became a central part of the structural replacement of the apartheid State. Alexander (1998: 1) noted that "unless linguistic human rights and the equal status and usage of African languages were translated into practice, the democratization of South Africa [would] remain in the realm of mere rhetoric." Not surprisingly, the implementation of the language policy came to focus on institutional structures, such as legislation to encourage the promotion and use of African languages in all public spaces.
The belief in 'multilingualism' as an 'instrument' of social and epistemological justice became embedded in national policy, state institutions (education being the most important) and the so-called Chapter 9 institutions, such as the Pan South African Language Board (Pansalb), the brief of which was to protect the rights of all languages and their speakers. Through recognition and institutional accommodation of 'diversity', a once-divided nation would be unified by "maximizing the democratic potential of social formations within which South Africans lived" (Alexander, 2003: 9). The tension identified (although not elaborated) in the conference Democratic Approaches to Language Planning and Standardization, between a multilingualism of state institutions and a more fluid and bottom-up construct, came to a head in conjunction with the implementation of the Language in Education Policy (DOE 1997). The wording of the document is replete with radical terms such as 'fluidity', and with the recognition of a spectrum of multilingual practices and engagements with pupils' repertoires. However, when the proposals were inserted into the practicalities of everyday, institutionalized schooling, what was an expansive, generous and complex construct of multilingualism defaulted to a traditional hierarchical relationship between English/Afrikaans and African languages (Heugh and Stroud, 2019). Even more insidiously, the policy over time has undergirded an increasing monolingualization as modus operandi in the school system, increasingly so in catchment areas of great diversity. It is beyond the scope of this essay to delve into the concrete details of these developments.
Nevertheless, defaulting to monolingual English schooling is likely one part of a much wider 'capture' or 'repopulation' of State and private structures by elites (black and white) for whom English is a capital investment in increasingly transnational markets of 'whiteness' (see Christie and McKinney, 2017). In other words, state institutions have, despite the good intentions of their architects, defaulted to an increasing monolingual whitening as a motor of elite privilege.
--- Post-Racial South Africa The tension identified in the conference between State management of language and bottom-up initiatives has come to characterize developments around multilingualism in South Africa explicitly in the last five years. More generally, complex strands of historical debate continue to re-surface in different configurations and with different stakeholders, and contemporary ideological constructs of multilingualism are best seen as kaleidoscopes of inherited fragments of past multilingualisms, and contemporary subtexts or responses to these. As noted above, education has been, and remains, one of the key sites for the production and circulation of ideologies on multilingualism. The school is where the complex interweaving of subjectivities, bodies and aesthetics with different languages created under colonialism and apartheid is most visible (cf. Veronelli, 2016; Williams and Stroud, 2017). It is a space in the South African context where inter-racial and 'inter-lingual' relationships are played out on a daily basis, and where tensions in differently racialized constructs of language and multilingualism, as well as tensions between grassroots and institutions, are increasingly taking centre stage and finding their most explicit articulations. On the one hand, the school is a prototypical force for integration, segregation and disciplining; on the other, it is also an institution rich with potential for change.
School policies and practices reflect the weight given to English in South African society generally and the belief that African languages constitute a hindrance to learning it. Colonial and apartheid values of the inferiority of African languages and the superiority of metropolitan languages remain strong: the equation of English with intelligence and academic ability, and streaming according to English language ability, serve to reinforce the indexical weights and values given to English and African languages and perpetuate a monolingual mind-set (Makoe and McKinney, 2014: 669). The variety of English valued in schools is white South African English, and 'ethnolinguistic' repertoires of whiteness more generally (Makoe and McKinney, 2016), while township accents or Black Englishes are delegitimized. Teachers step out of teaching content subjects (such as Maths) to produce disciplinary asides in order to correct learners on, for example, points of English pronunciation. Makoe and McKinney (2014: 669) note how, despite their multilingual proficiencies, African language speakers are seen as deficient monolinguals, and schools produce dominant ideologies of "linguistic homogeneity and inequity". Former elite (white) schools are taking African languages off the curriculum in accordance with the Basic Education Department's New Curriculum Policy that only one first additional language should be offered, and less time is given in the curriculum to any language other than English and Afrikaans. In fact, African language parents have also voiced unhappiness with their perception that the variety of the African language taught is debased: schools teach 'Kitchen Zulu' (Ntombeble Nkosi, Chief Executive Officer of Pansalb). This, then, is not just a 'monolingual' bias, but a particular white language bias, a situation that reproduces apartheid language hierarchies and regimes (Makoe and McKinney, 2014).
Such a predominant 'white positionality' on language matters is nicely captured in the words of one member of a prominent Governing Body Foundation, who publicly stated in 2017 that: "Afrikaans is a much easier language to master. There are no clicks, the vocabulary and the structure are part of the same family of languages as English and therefore easier to pick up..." One reaction to the racialization of language (one that incidentally also clearly illustrates the bodily invasive features of 'language ideology') comes from a Cape Town elite girls' school. The school habitually penalized children for speaking isiXhosa on the school premises, formally noting the transgression in a special book. The language prohibition was one part of a more extensive 'black' disciplinary discourse, formalized in the Code of Conduct, which stipulated that learners must keep their 'hair tidy'. Students were literally chastised to the very fibres of their black body, and took widely to social media in attempts to change antiquated codes of conduct and propriety modelled on whiteness (see Christie and McKinney, 2017). Beyond the more institutionalized (non)use of named languages is the way in which school children use multiple languages to circumvent official racial categories. Kerfoot's (2016) important study of primary school learners in a low-income neighbourhood in Cape Town showed how students' strategic use of repertoires in encounters across (racial) difference contributed new identity-building resources. Among other things, they used multiple languages as a means of shaping new interaction orders: restructuring hierarchies of value, subverting racial indexicalities, and sometimes even resignifying the very meanings of racial categories.
--- Conclusion Any singular notion of multilingualism obscures the centuries-long, shifting idea of language and conceals the de facto complexity and multiplicity of multilingualism(s) as plural responses to moments of turbulent transition.
Throughout South African history, State structures, policies and institutions have engaged with constructs of the nation-state that are deeply racialized, with either the goal of constructing, separating and disempowering 'non-white races' or that of furthering social transformation by addressing historically race-based inequalities. In both cases, the default is a celebration of 'whiteness', itself an ever-changing construct (Alcoff, 2015), deeply entangled with transnational, neoliberal marketization. Constructs of multilingualism have been central as epistemological and strategic sites for the play of racialized state dynamics. They have been heavily determined by racial bordering, from the early beginnings of first colonial contact until today. As part of a larger discursive regime, or battery of historical procedures and institutionalized discourses, they have helped either to invisibilize or discipline the black body, or have attempted to re-stylize it and its relationships to whiteness. We have touched on how fragments of institutionally racialized ideologies of multilingualism appear in the contemporary thoughts and practices of the everyday, highlighting specifically how speakers deploy and attempt to circumvent (not always successfully) these constructs of language in their everyday practice (see also Guzula, McKinney and Tyler, 2006; Krause and Prinsloo, 2016; Makoe and McKinney, 2009). By way of brief conclusion, there is clearly a need to re-think multilingualism as a 'semiotics of relationality': the articulation in language(s) (or other forms of semiosis) of relationships between individuals, groups and/or institutions, and its role as a site for racial contestation. A rethought multilingualism can provide one necessary space to interrogate the 'unmaking' of race.
South Africa today remains a nation torn by violence and racial inequity. One of the major challenges for its people is to create new futures across historically constituted racial divides by finding ways to engage with each other across difference. In this regard, multilingualism holds out the promise of offering a way of bridging difference and opening spaces for engagement and empathy with Others. Yet contemporary constructs of multilingualism, both in policy and everyday practice, continue to reinforce racialized divisions inherited from historical uses of language as a tool of colonialism and a mechanism of governmentality in apartheid, the system of exploitation and state-sanctioned institutional racism. In this paper we seek to demonstrate how multilingualism has always been, and remains today, an 'epistemic' site for managing constructed racialized diversity. In order to do so, we trace key periods of South Africa's history. By way of conclusion, we suggest that alternative linguistic orders require a decolonial rethinking of the role of language(s) in epistemic, social and political life.
Introduction Indonesia is the world's 4th most populous country, where 43.3% of its population lives in rural areas, and smoking remains a health problem in this nation [1]. According to the 2018 Basic Health Research (Riskesdas), tobacco use was predominant among those aged 15 years and above (33.8%), and the majority are male smokers [2]. A study among U.S. adults shows that smoking is more common in rural areas than in urban settlements [3]. Similar findings in Indonesia recorded the same result (36.8% vs 31.9%) [4]. Tobacco addiction is an Indonesian public health issue, with a steadily rising incidence and an increasing mortality rate, because cardiovascular diseases are associated with the high prevalence of tobacco addiction [5]. Aditama observed that most male participants were heavy smokers with an average of 7.6 cigarettes per day [6], while a different study reported 21.4% [7]. Several studies have outlined the causes of smoking behaviour, such as lack of knowledge, socioeconomic factors, information through media, and stress or negative life-related events [8][9][10][11]. However, the majority of those studies were conducted in urban areas. Therefore, this study aims to analyze the determinants of tobacco smoking addiction in rural areas. --- Methods This was a cross-sectional study conducted in February 2020 in Songgon district, Banyuwangi Regency, East Java. It lies on the border of Bondowoso and Jember Regencies and is dominated by the Madurese, Javanese and Osing (Banyuwangi natives) ethnicities. These represent rural East Java, the province with the 2nd highest population in Indonesia [1]. The respondents were male local villagers aged 15 years and above. The study was conducted under the Community Medicine education training program organized by the Faculty of Medicine, Universitas Airlangga, over a one-week period.
Using a sample size calculation, the minimum requirement was 75 responses [12]. Consecutive sampling was carried out until the minimum number was fulfilled. The authors provided standardized training for all interviewers prior to the survey administration. The respondents were requested to complete three questionnaires: the tobacco addiction determinants questionnaire, the Perceived Stress Scale-10 (PSS-10), and the WHO ASSIST v3.0 questionnaire for tobacco. All were given in the Indonesian language. The first questionnaire consisted of three sections as follows: (1) health risk awareness, (2) social control, and (3) the role of mass media in tobacco smoking. For each statement, the respondent's response was placed on a 4-point Likert scale, where 1 indicated "strongly disagree" and 4 "strongly agree." The maximum score was 60 points for the first section and 40 points each for the second and third sections. This questionnaire was prevalidated and tested for reliability (α = 0.908). The Indonesian version of the PSS-10 was adapted from previous studies (r = 0.632, α = 0.857) [13], while the WHO ASSIST v3.0 questionnaire was taken from the Indonesian Ministry of Health's booklet. --- Statistical Analysis. The acquired data were analyzed using IBM SPSS Statistics for Windows ver. 23.0 (IBM Corp, Armonk, USA) and expressed as mean ± standard deviation. Correlations between demographics and addiction risk were measured using both Spearman's rank-order and Fisher's exact tests, while those between the scores obtained from the tobacco addiction determinants questionnaire and the PSS-10, and addiction risk based on the WHO ASSIST v3.0 questionnaire, were measured using Spearman's rank-order only. A p value of <0.05 was considered statistically significant. --- Ethical Clearance. This study followed the principles of the Declaration of Helsinki and received permission from the Faculty of Medicine, Universitas Airlangga, before it began (ethical clearance no.
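The core of the analysis described above is a Spearman rank-order correlation between questionnaire scores and addiction risk. The following is a minimal sketch of that test using `scipy` rather than the authors' SPSS workflow; the paired observations are hypothetical, not the study data.

```python
# Illustrative sketch (not the authors' SPSS analysis): Spearman's
# rank-order correlation between an awareness score and an addiction
# measure, as in the Methods. All data below are hypothetical.
from scipy.stats import spearmanr

awareness_scores = [52, 48, 45, 40, 38, 35, 30, 28]  # hypothetical section scores
assist_scores = [1, 2, 3, 5, 10, 15, 25, 30]         # hypothetical ASSIST scores

rho, p_value = spearmanr(awareness_scores, assist_scores)
print(f"rho = {rho:.3f}, p = {p_value:.4f}")
```

Because Spearman's test works on ranks, it captures the monotone (here inverse) relationship without assuming normality, which suits the skewed Likert-sum and ASSIST scores described above.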
52/EC/KEPK/FKUA/2020). All respondents provided signed consent prior to their inclusion in the study. Details that might disclose the identity of the respondents were omitted. --- Results --- Demographic Data. A total of 75 responses were collected and validated. The mean age of the respondents was 44.04 ± 13.10 years. Most of them were self-employed or subsistence workers. Regarding education level, most respondents had senior high school education or higher. It was also observed that most of them were married and living with 4-6 persons in their homes. The demographic data are presented in Table 1. --- Determinants of Tobacco Smoking Behavior. In the first section, respondents were asked about their views on a tobacco smoker, the introducer, the place, and its health effects. A higher score showed that they were more aware of the health risks of tobacco smoking. The mean score obtained was 42.93 ± 7.03, while 58 and 27 were the maximum and minimum achieved scores, respectively. In the second section, the respondents' opinions were enquired about smoking in social settings. A higher score signifies that they could maintain good social control regarding tobacco smoking. The mean score was 29.53 ± 4.9, while 40 and 15 were the highest and lowest scores, respectively. The third section enquired about the role of mass media against smoking behaviour. A higher score means that they were more aware of the media reporting the dangers of tobacco smoking. The mean score recorded was 30.20 ± 4.71, while the maximum and minimum obtained scores were 40 and 15, respectively. The PSS-10 is a questionnaire used to measure the perception of stress. A higher score indicates that the respondent is experiencing a high level of stress. Smoking was considered one of the behavioural responses during stressful conditions. The mean score acquired was 14.96 ± 5.67, while the highest and lowest acquired scores were 26 and 0, respectively.
--- Addiction Risk and Demographic Data. The WHO ASSIST v3.0 is used to measure the addiction risk toward smoking. The scores were categorized into three bands, namely low (0-3), moderate (4-26), and high (more than 26), and the numbers of respondents in each class were 45 (60.00%), 23 (30.67%), and 7 (9.33%), respectively. Table 1 describes the correlation between demographic data and tobacco addiction risk. It was observed that the risk did not significantly correlate with age (p = 0.241), occupation (p = 0.553), education level (p = 0.940), marital status (p = 0.593), or the number of persons in each home (p = 0.873). --- Addiction Risk and Determinants of Smoking Behavior. The scores of each questionnaire section and of the PSS-10 were compared based on the respondents' addiction risk (Table 2). We observed that those in the low-risk group scored highest in all three sections (44.62 ± 7.48, 30.93 ± 4.91, and 31.33 ± 4.72, respectively). Respondents in the moderate group scored lowest on the first section, with an average of 40.30 ± 5.49, while those in the high category scored lowest in the second and third sections (26.43 ± 5.35 and 28.43 ± 3.87 points, respectively). Spearman's rank-order test showed an inverse correlation between the scores achieved on the questionnaire and addiction risk in all three sections (r = −0.283, −0.328, and −0.301; p = 0.014, 0.004, and 0.009, respectively). On the PSS-10, those in the low-risk group had a lower score compared with those in the moderate and high-risk groups (14.20 ± 6.31 vs 15.48 ± 4.73 and 17.57 ± 2.94). However, this study shows that there is no significant relationship between the PSS-10 score and the risk of tobacco addiction (p = 0.287). --- Discussion To date, only one study has been conducted to analyze the determinants of tobacco smoking addiction in rural areas of Indonesia [14].
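The risk banding above (low 0-3, moderate 4-26, high >26) is a simple threshold rule. A small sketch of that categorization follows; the function name and the example scores are ours, for illustration only, not part of the ASSIST instrument.

```python
# Sketch of the ASSIST v3.0 tobacco risk banding described in the
# Results: low (0-3), moderate (4-26), high (>26). The function name
# and the scores below are hypothetical illustrations.
def assist_risk_band(score: int) -> str:
    if score <= 3:
        return "low"
    elif score <= 26:
        return "moderate"
    return "high"

# Tally hypothetical scores into the three bands.
counts = {"low": 0, "moderate": 0, "high": 0}
for s in [0, 2, 3, 4, 10, 26, 27, 31]:
    counts[assist_risk_band(s)] += 1
print(counts)  # {'low': 3, 'moderate': 3, 'high': 2}
```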
Previous studies show that smoking is more prevalent among men than women in rural areas [2,15]. The preliminary survey also found that no female smoker was recorded in Songgon district; therefore, only male respondents were recruited to participate. A better awareness of the health dangers of tobacco is associated with a lower addiction risk and a desire to quit smoking [16,17]. This may be because a high level of awareness of the dangers undermines the smoking experience, making the exposure less rewarding and, as a result, indirectly reducing the risk of addiction [18]. Several studies have reported that social environments and the stigmatization of smoking behaviours have led to decreased smoking rates [9,19]. Maintaining good social control means that a person is capable of refraining from smoking while maintaining an ethical interaction with others. In Indonesia, especially in rural areas, a cigarette is usually offered during social meetings and occasionally as a sign of friendship. Another cultural aspect is politeness: rejecting a host's offer is considered impolite and may offend them [6]. Therefore, the smoking rate is unsurprisingly high, but addiction is a different matter, as it is multifactorial and often depends on personal experiences. The authors assumed that social meetings contributed to smoking addiction for those who socialize (extraverted people); however, further study is needed to investigate this assumption. It was presumed that social control might be a better determinant of the addiction risk. Mass and social media campaigns were thought to be effective in changing smoking behaviour among adults [20,21]. In contrast, it was also assumed that smoking advertisements have been reinforcing tobacco habits [8]. In Indonesia, campaigns to stop smoking or sensitization on its dangers are rarely carried out, but tobacco advertisements are seen everywhere [22,23].
Again, it must be emphasized that tobacco addiction is multifactorial and depends on the respondents' perception. It is argued that a person's awareness of mass media reporting the dangers of smoking may be a better determinant. This research differed from previous studies in that the stress level was not significantly related to addiction risk [24,25], possibly because stress levels in rural areas are lower than in urban settlements. It has also been observed that urban settlements experience more distress compared with rural areas [26]. Besides environmental and socioeconomic aspects, rural areas tend to support each other through gotong royong, a term which encompasses communal service and mutual assistance offered without hesitancy to those in need [27]. Therefore, it is assumed that lower stress levels may weaken stress-related smoking behaviour, a factor that was not considered in this research. This study has several limitations. Firstly, the limited time restricted the sample size and sampling method; however, the authors managed to reach the minimum sample required. Secondly, using only a single district may not represent rural areas in other provinces or islands of Indonesia. Thirdly, the authors did not study the smoking behaviour and social background that lead to addiction. Therefore, further study is recommended with larger sample sizes in several other rural areas. --- Conclusion Tobacco smoking remains a nationwide problem in Indonesia. The results show that high perceived stress has no correlation with increased addiction risk in rural areas. However, increased awareness of its health dangers, good social control, and mass media campaigns are significantly associated with its decrease. Therefore, this study could inform smoking prevention programs to address these issues rather than focusing on stress management for the population.
--- Data Availability The data used to support the findings of this study are available from the corresponding author upon request. --- Conflicts of Interest The authors of this article declared no potential conflicts of interest. --- Authors' Contributions JPS, Sulistiawati, and AK were involved in the conception and design of this research, revision of the article, and the final approval of the version to be published; JPS was responsible for the acquisition of data, analysis and interpretation of data, and drafting the article.
Objective. To analyze the determinants of tobacco smoking addiction in rural areas. Methods. A cross-sectional study was conducted in February 2020. A self-administered questionnaire (α = 0.908) and the Perceived Stress Scale-10 were used as tobacco smoking determinants, and the WHO ASSIST v3.0 questionnaire was used to determine addiction risk. Their correlations were analyzed by Spearman's rank-order approach using SPSS version 23.0. Results. Among the 75 male respondents who participated in this study, those at low, moderate, and high addiction risk numbered 45 (60.00%), 23 (30.67%), and 7 (9.33%), respectively. Addiction risk correlated significantly with the research questionnaire, which consisted of three parts: (1) awareness of the health risk; (2) social control; (3) the role of mass media in tobacco smoking (p = 0.014, 0.004, and 0.009, respectively), but there was no significant correlation with the stress level (p = 0.287). Conclusion. Increased awareness of the health risk, good social control, and mass media reporting the dangers of tobacco smoking are significantly correlated with decreased addiction in rural areas. However, high perceived stress has no correlation with its increase.
Any discussion on the environmental dimension of the quality of life in the city should be preceded by clarifying the meaning ascribed to the term 'environment' as used in this paper. The notion of the environment carries so many meanings and encompasses so much that omitting a definition here would trigger a multitude of ambiguities and misconceptions. The tradition and evolution of the scope covered by the notion of the environment is particularly long and rich in geographical research. A canonical order of the terminology related to the environment was introduced into Polish geographical studies by Tadeusz Bartkowski (1975, 1977). In this paper, the term environment is understood as the natural environment already transformed by human presence and activity, while still providing elements of nature, in the urban habitat. In the paper, we intend to start a discussion on the growing significance of the environmental dimension when assessing the quality of life in cities. We refer to theoretical reflections that stand in opposition to the modernist planning paradigm, which, for years, consolidated the strong tendency to consider nature alone, excluding society, and society alone, without nature. The trend was developed and boosted by a number of factors, such as the industrialisation of production, new technologies, rapid urbanisation, globalisation, and the expansive and uncontrolled exploitation of the natural environment (Starosta 2016). The dynamically developing research on quality of life has been continuously reversing the order of the modernist discourse, evidencing the importance of the relations between physical, social and cultural matter (Lefebvre 1994; Jałowiecki 2010; Löw 2018). In spite of the variety of approaches and concepts on how to define quality of life, it is commonly agreed that the notion of quality of life is made up of two mutually intertwined dimensions: psychological and environmental (Grayson & Young 1994).
The factors related to inner, mental mechanisms determine the sense of personal satisfaction and satisfaction with life; the factors related to external environmental conditions, on the other hand, shape internal impressions and views (Massam 2002). Terms such as the individual/personal quality of life, a subjective sense of well-being, or the level of satisfaction with life are used to identify the group of internal factors. The external factors refer to various levels and categories of the quality of life and describe such concepts as the quality of life in cities, the quality of community life, the quality of the place, or the environmental quality of life. The variety of factors taken into account in order to assess the quality of life is immense. It is assumed that each of the measures reflects, in a sense, the impact and importance of a specific component in the comprehensive, general view of the quality of life. Meanwhile, the same assumption suggests that the notion of quality of life can be disaggregated into a set of factors or dimensions. If the correct set of factors is compiled, it will be possible to use it to obtain a credible, comprehensive assessment of the quality of life in the city. The key lies in defining each factor in such a way as to enable its measurement and, consequently, assess its quality and durability. What is important in practice is that each measure must be clearly defined operationally in terms of its structure, so that research findings are repeatable and comparable. Initially, researchers dealing with measuring quality of life focused primarily on social and economic indices, and attempted to develop, then accumulate, statistics on various aspects of social life (Kurowska 2011).
The need to develop a consistent list of objective social indices (following the example of the economic indices used to assess GDP) has led to the emergence of an entire stream of research called 'the social indices movement', which concentrates on the accumulation and analysis of statistics depicting various spheres of life (Petelowicz & Drabowicz 2016). Research and analyses based on subjective indicators, on the other hand, have focused on the individual assessment of daily experience, and form the other extreme. According to Angus Campbell (1976), studies of quality of life cannot be conducted without referring to the subjective sense of satisfaction and well-being. In research of this type, the quality of life of an individual can only be determined based on the person's own assessment, bearing in mind such mental processes as perception, comparison, evaluation and assessment (Petelowicz & Drabowicz 2016). Paul Harwood (1976), for instance, defines quality of life as an individual's sense of well-being or their satisfaction with various spheres of life. Robert Gillingham and William Reece (1980) state that the individual's quality of life is the level of satisfaction they gain consuming goods and services purchased in the market, and public goods. For empirical studies, one should adopt a possibly broad interpretation of the notion: that the quality of life is the ratio between the degree of the values present at a specific place and their desirable level, taking into account both the resources at an individual's disposal and their personal evaluations and feelings (Cummins 2000), whereas the selection of specific indices depends on the context of the surveys and the adopted assumptions (Rapley 2003).
--- The purpose and methodology of the survey The authors of this paper held a quality-of-life survey of the inhabitants of Gdańsk in June 2021, using the indices of the city's spatial correlations. The survey was conducted based on a partnership contract between the University of Gdańsk and the Municipality of Gdańsk. The main purpose of the research project was to monitor changes in the values of the quality-of-life indicators among the residents of Gdańsk in both territorial (district) and socio-demographic perspectives. The field research was carried out on a representative sample of 1,509 adult residents of Gdańsk using the pen-and-paper personal interview (PAPI) method. The sample reflected the structure of the adult population of Gdańsk in terms of sex, age, education and district of residence.1 In the conducted survey, the respondents could assess and score several dimensions of life in their closest neighbourhood using a six-point rating scale, where 1 represented the worst and 6 the best evaluation (the scale mirrors the school grading range). In this paper, we present only a section thereof, narrowed down to the dimension of the quality of life of interest to us and its environmental component at the place of residence (see Fig. 1). The qualitative analysis of the obtained results was performed using descriptive and statistical methods, which enabled the significance of the relationships between the variables to be identified. The dependent variable took the form of an index made up of five detailed indices: the quality of the air, the quality of potable water, the intensity of noise, the accessibility of green areas (woods, parks), and the condition of the green areas (see Fig. 1). An index structured in this way enabled the presentation of several variables with a single result (arithmetic mean), which facilitated data analysis and increased measurement reliability (Frankfort-Nachmias & Nachmias 2001).
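The composite index described above is the arithmetic mean of five sub-indices rated on the 1-6 scale. A minimal sketch of that aggregation follows; the function name and the ratings are hypothetical illustrations, not the survey data.

```python
# Minimal sketch of the composite environmental index described above:
# the arithmetic mean of five detailed indices, each on the 1-6 scale.
# The function name and the ratings below are hypothetical.
def environmental_index(air, water, noise, green_access, green_condition):
    components = [air, water, noise, green_access, green_condition]
    # Guard against values outside the six-point rating scale.
    assert all(1 <= c <= 6 for c in components), "ratings use a 1-6 scale"
    return sum(components) / len(components)

print(environmental_index(4, 4, 3, 5, 4))  # 4.0
```

Averaging the five ratings into one number is what lets a single result per respondent (and per district) feed the zone analysis later in the paper.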
The Mann-Whitney-Wilcoxon test was employed in comparisons between two groups, and the Kruskal-Wallis H-test for more than two groups. Whenever a statistically significant result of multiple comparisons was obtained, Dunn's post hoc tests were performed with Holm's correction. The significance level accepted for the analyses was p < 0.05. --- The environmental aspects of quality of life as viewed by the inhabitants The weight and significance of the factors making up the sphere of the environment are growing dynamically in assessments of the quality of life (e.g. Degórski 2017). Due to public attitudes, the issue of the environment is becoming increasingly prominent in urban policies (Stephens et al. 2019). A new approach to the urban environment and the necessity to calculate the risks posed by climatic changes have triggered a change in the approach to the quality-of-life indicators. (Footnote 1: The survey covered the entire city and reflected the demographic cross-section of the whole population. In effect, the structure of the sample in terms of social and demographic features was as follows: sex: female 58.1%, male 41.9%; age: 18-24 (16.5%), 25-39 (28.0%), 40-59 (23.4%), >59 (32.1%); education: primary, basic vocational, lower secondary (in aggregate) 10.5%, secondary 42.9%, higher 46.4%.) The environmental diagnosis, which consists in broadly construed attempts at describing the environment and its components, utilises the measurements classified as objective indices, namely the Environmental Quality Index (EQI). The indices themselves are nothing more than objective physical measurements, and the term 'quality' suggests subjective evaluation. Considering that the aim does not always come down to discovering the presence of a physical component of the environment, but rather to capturing the perceived quality of the environment, in this paper we shall refer to the subjective evaluation of the quality of the environment.
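The testing strategy above (a Kruskal-Wallis H-test across groups, followed by post hoc pairwise comparisons with Holm's correction) can be sketched as follows. This is an illustration, not the authors' software: for brevity Dunn's test is replaced here by pairwise Mann-Whitney U tests, and the group data are hypothetical ratings on the 1-6 scale.

```python
# Illustrative sketch of the testing strategy described above:
# Kruskal-Wallis across >2 groups, then pairwise comparisons with
# Holm's step-down correction. Group data are hypothetical; Dunn's
# test is approximated here by pairwise Mann-Whitney U tests.
from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu

groups = {
    "18-24": [5, 5, 4, 6, 5, 4],
    "25-39": [4, 4, 5, 3, 4, 4],
    ">59":   [3, 2, 3, 3, 2, 3],
}

h_stat, p_global = kruskal(*groups.values())

# Pairwise tests, then Holm's adjustment: sort raw p-values ascending,
# multiply the k-th smallest by (m - k), and enforce monotonicity.
pairs = list(combinations(groups, 2))
raw_p = [mannwhitneyu(groups[a], groups[b]).pvalue for a, b in pairs]
order = sorted(range(len(raw_p)), key=lambda i: raw_p[i])
adj_p = [0.0] * len(raw_p)
running_max = 0.0
for rank, i in enumerate(order):
    running_max = max(running_max, (len(raw_p) - rank) * raw_p[i])
    adj_p[i] = min(1.0, running_max)

print(f"H = {h_stat:.2f}, global p = {p_global:.4f}")
for (a, b), p in zip(pairs, adj_p):
    print(f"{a} vs {b}: adjusted p = {p:.4f}")
```

The post hoc step is only run when the global test is significant, which is exactly the gatekeeping described in the paragraph above.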
Therefore, with the intertwining social and environmental phenomena, the conceptualisation of the quality of life as a category comprising various sets of elements requires the adoption of both objective and subjective views. Surveys held in 2015 in Gdańsk reveal that nearly half the population (46%) believe that Gdańsk stands out among other large cities in Poland in terms of the values of its natural environment, and this represents one of the vital components of the city's identity (Załęcki 2020). The fact that the main axis of the city's historical development runs between the coastline of the Bay of Gdańsk and the edge zone of the wooded moraine uplands gives Gdańsk the advantage of a continuum of sea beaches on one end and the Tri-City Landscape Park on the other, and that creates a unique potential for the development of the leisure and recreation function. The linear layout of the city's downtown areas puts the benefits of nature 'within the reach' of a substantial part of the population. The status of both belts is that of spaces rooted in the physical environment, and the features of their significance are comprehended without any effort or special reflection (Bierwiaczonek et al. 2017). Wooded land accounts for 18% of Gdańsk's entire area, and plays not only a protective role but also a recreation and leisure function. There are five nature reserves set up in the woodlands (occupying 270 ha in aggregate). On top of that, there are 448 ha in total of cultivated green areas within the city itself, including 300 ha that comprise 18 city parks (minimum 2 ha each), and numerous green areas and squares that add up to 148 ha (Studium... 2018). Hence, not surprisingly, the accessibility of green areas is evaluated highest by the city dwellers (x̄ = 4.16), though their condition in terms of cleanliness and aesthetics is assessed slightly lower (x̄ = 3.96).
Gdańsk offers relatively good conditions concerning water management, particularly the supply of potable water to the inhabitants. Potable water is supplied from 10 intakes, including 8 of the deep type and 2 of the surface type, all meeting the high EU standards. Water, evaluated at 4.14, is hence, along with green areas, a major environmental value of Gdańsk. The picture is unfortunately poorer for the two other environmental elements: the quality of the air (x̄ = 3.84) and noise intensity (x̄ = 3.22) (Badach et al. 2021). The main contributors to air pollution are manufacturing plants, traffic and indoor air pollution. For years on end, too, intense and intolerable smells from the Waste Processing Plant in Gdańsk-Szadółki have remained a major air-related issue. The environmental conditions are further determined by the intensity of noise. Although noise falls below the permissible threshold over a major part of the city, there are areas exposed to the risk of excessive noise levels, including the neighbourhoods around the main traffic routes, the airport, and the vicinities of the port and the industrial estates. The environmental issue, which is of significance in the inhabitants' subjective perception, is reflected in the Gdańsk development strategy. This transpires in the conducted empirical studies, which reveal evident relationships between natural values and the identity dimension of experiencing the city, as well as high scores given to individual components of the environmental conditions. In the latter respect, however, the city of Gdańsk is not a uniform organism. With the superimposed territorial diversity, expressed as a subdivision into districts, differences transpired between individual sub-areas in their subjective evaluation of the environmental conditions.
--- Evaluation of the environmental conditions in the city's territorial structure The environmentally friendly city, captured in the slogan of 'the green city', comes down to the project of shaping the city so as to follow the line of development deemed desirable by most inhabitants. However, the methods used to implement the green city idea mean that not all inhabitants can benefit equally from the effects of the needed and anticipated policy of creating green public spaces and maintaining elements of the green infrastructure. Inhabitants' access to green areas, green facilities or waterfront areas is ever more frequently disputed as a factor of social segregation and injustice in the city space (Anguelovski & Connolly 2022). While some districts benefit from green projects and the introduction of greenery into urban development, others remain neglected and deprived of easy access. This access to open public spaces, the existential significance of which was revealed during the pandemic, should be common in all residential districts instead of being confined to privileged downtown areas (Sagan 2021). Currently, however, the gentrification of districts subject to the process of revitalisation and restoration is an issue of far more social weight; the process includes making districts much more attractive to live in. Hence, analysis of the locations and their access to green areas and waterfronts in the spatial structure of the city is of paramount importance in assessing the city's policy in the context of the inhabitants' quality of life. In order to arrive at a more complete evaluation of the environmental dimension of the quality of Gdańsk residents' lives, an attempt was made to estimate the distribution of the inhabitants' opinions according to their place of residence.
The results obtained in individual city districts were arranged using the frequency distribution procedure and, on that basis, three equal intervals were identified, with widths derived from the difference between the maximum and minimum values (arithmetic means). Then, three zones were identified in the city: the districts where the result (the arithmetic mean on the 1-6-point rating scale) fell into the high interval (>3.92), districts with the result falling into the medium range (3.63-3.91), and districts falling into the low interval (<3.63) (Fig. 2). An analysis of the average score distribution indicates a deep polarisation of the city's territorial structure in terms of the environmental conditions. On one extreme, we have areas that scored highest for their environmental conditions, namely the districts located in the coastal belt that comprise a large number of city parks (including President Ronald Reagan Park, the largest in Gdańsk), such as Przymorze, Żabianka and Zaspa, and the districts directly beside the Tri-City Landscape Park (including the 'Oliwa woods' and the ZOO), such as Oliwa and VII Dwór (see Fig. 2, marked in green). Thanks to their location, the above-named areas share the features of relatively low noise and fairly clean air. At the other extreme are the areas where the environmental conditions were assessed lowest. These are the districts located either in the city centre, where the built-up development is highly concentrated, the traffic intense, and the main traffic junctions are located, or the old districts (with municipal housing predominant) neighbouring on industrial or storage estates and intended for revitalisation at a later date (Nowy Port, Brzeźno, Letnica, Przeróbka, Stogi, Olszynka) (see Fig. 2, marked in red).
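The zoning procedure above (splitting district means into three equal-width intervals between the minimum and maximum observed values) can be sketched as follows. The district names and means here are hypothetical placeholders, not the survey results.

```python
# Sketch of the zoning procedure described above: district means on
# the 1-6 scale are split into three equal-width intervals between
# the minimum and maximum observed values. District names and means
# below are hypothetical, not the Gdansk survey data.
def three_zones(district_means):
    lo, hi = min(district_means.values()), max(district_means.values())
    width = (hi - lo) / 3
    zones = {}
    for name, mean in district_means.items():
        if mean < lo + width:
            zones[name] = "low"
        elif mean < lo + 2 * width:
            zones[name] = "medium"
        else:
            zones[name] = "high"
    return zones

example = {"A": 3.2, "B": 3.7, "C": 4.1, "D": 3.9, "E": 3.4}
print(three_zones(example))
```

Note that equal-width intervals (used here, as in the paper) differ from equal-frequency terciles: the former split the score range, the latter would split the districts into equally sized groups.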
--- The social and demographic determinants of the evaluation of the environmental conditions In the survey, it was assumed that demographic variables such as sex and age would be the vital predictors determining the evaluation of the environmental values of Gdańsk, followed by the variables defining social status, namely education and income. Yet another variable taken into account was the length of residence in Gdańsk, together with variables of a psychosocial nature, such as the respondents' self-assessment in terms of their sense of happiness and their self-assessed opportunities to attain their own life goals. The analysis performed revealed that there is no statistically significant correlation between social features, such as education or income, and the overall evaluation of environmental conditions (significance above the critical value, p > 0.05). A relation of this kind does exist, however, in the case of demographic features: the environmental conditions at the place of residence are perceived slightly better by men than by women (p < 0.05), and by younger versus older people (p < 0.001). The best views of the environmental conditions at the place of residence are shared by people residing in Gdańsk for a period shorter than three years; the worst assessments, on the other hand, come from those who have lived there for more than ten years (p < 0.005). Interestingly, the environmental conditions are valued higher by those who perceive their potential to attain their life goals well and believe they are happy, contrary to those who do not share any such views. The conducted survey reveals that the subjective perception of various aspects of the city, including the environmental conditions, changes along with the inhabitants' individual experiences.
The best evaluation of the environmental conditions in the city came from the dwellers of Gdańsk representing the younger age groups and those living in Gdańsk for a relatively short time (up to 3 years). The latter are predominantly 'young settlers', people who came to Gdańsk in search of a more attractive job or to take up studies, after which they decided to stay. Relying on common knowledge, one could assume that criticism of the city should decline as years go by; in this case, however, one should consider cognitive dissonance: the decision an individual makes to migrate and settle in the city creates the need to hold a high view of its values in order to justify the reasonableness of that choice. When interpreting the evaluation of the living conditions in the city, one should also refer to the fact that, in the course of experiencing space, one can identify elements 'which will not be forgotten' or 'which gain particular weight' (Jewdokimow 2007). The issue gains in importance considering the dynamically advancing process of population ageing in the contemporary world. Similar trends are observed in Poland, Gdańsk included (Czekanowski 2012; Stephens et al. 2019). This makes it even more necessary to take this fact into account in the processes of planning and developing the city space. --- Conclusions The task of urban policy is to mitigate any negative effects occurring in the 'city tissue' and counteract excessively stark disproportions between individual dimensions of the quality of life, including in terms of environmental conditions. From the perspective of the paradigm of sustainable development, the revitalisation of the most neglected areas is of high significance where such areas are found in districts of fewer environmental values. The conducted survey reveals far-reaching differences in the spatial structure of the city when it comes to the residents' perception of the values of the natural environment.
In the city space, extremes that create a peculiar continuum can be identified. One extreme covers the districts located in the vicinity of woodlands and the coastline (where the scores for the environmental values are highest); the other represents the older districts, located in the neighbourhood of industrial and storage estates, which the city authorities have already included in part in the revitalisation programme (where the evaluation scores are lowest). Moreover, the results of the survey presented in the paper indicate that the perception and evaluation of the environmental values at the place of residence are determined by demographic factors such as sex and age. The survey shows that lower satisfaction with the environmental values at the place of residence is reported by women and by people in the oldest age group. Even though one might presume that town planning is by no means sex-related and the city space is for everyone, the conducted survey points to the fact that women, men, the young and the elderly use the city in different ways and have different expectations of it. Assuming that the quality of life is the ratio between the existing dwelling conditions (environmental conditions included) and the aspirations of the city dwellers, one can conclude that environmental values such as the accessibility of green areas, the quality of the air and noise intensity rank higher in the hierarchy of needs of women and the elderly, and result in their more critical views. The realisation of the ambitious goal of attaining a higher balance between spatial structures will not be an easy process. This is because looming on the horizon are the long-existent risks related to the process of spatial planning.
These include insufficient municipality-owned land, the drive shared by private investors (developers) to generate maximum profit by increasing the intensity of built-up development at the expense of the natural environment, low environmental awareness among decision makers, and continually changing law, which constrains any long-term policy of protecting the public interest. As the urban population grows, so will expectations of improved quality of life in cities. If climatic risk is to be controlled and managed, all city users will need to be engaged in the process.
The purpose of this paper is to attempt an analysis of the environmental dimension of the quality of life using quantitative surveys conducted among residents of Gdańsk. In the paper, we refer to the theoretical assumptions ensuing from the concept of a comprehensive and integrated approach to the development of the urban environment, while noting the profound influence of human subjectivity on the evaluation of environmental components. The paper focuses primarily on the inhabitants' attitudes to the environmental values of their place of residence, in relation to aspects such as the condition and accessibility of green areas, air quality, potable water quality and noise intensity. The surveys indicate that views on the city's environmental values are determined by numerous factors, such as the city's territorial structure (districts), its demographic structure (sex, age) and psychosocial features, such as a subjective sense of mental well-being.
Introduction On March 11, 2020, the World Health Organization (WHO) declared COVID-19 a pandemic (WHO, 2020). As a result, the vulnerability of at-risk populations heightened worldwide, and health inequalities for many vulnerable people and their communities have worsened (Jefferson et al., 2021). One such vulnerable group is people who are deprived of their liberty in a variety of detention settings. The impact of COVID-19 within such detention settings, as described in developed countries (Crowe and Drew, 2021; Hawks et al., 2020; Paynter et al., 2021; Reinhart and Chen, 2021; Strassle and Berkman, 2020), in Africa (Jumbe et al., 2022; Mhlanga-Gunda et al., 2022; Muntingh, 2020; Nweze et al., 2020; Van Hout, 2020a; Van Hout, 2020b; Van Hout et al., 2022a; Van Hout et al., 2022b) and in South Africa (Van Hout and Wessels, 2021a), is the focus of our Viewpoint. Historical barriers to health in closed settings, such as overcrowding, poor hygiene facilities and resources, and poor ventilation, all of which intersect with already poor menstrual health management conditions within detention facilities, will undoubtedly exacerbate the vulnerability of incarcerated women to COVID-19, with such environments conducive to the spread of disease (Muntingh, 2020; Ohuabunwa and Spaulding, 2020). Understanding the intersectional vulnerabilities that exist within detention settings, the United Nations (UN) has called for various measures to be initiated to ensure a decreased risk to public health within these facilities, including the early release of vulnerable incarcerated persons due to issues of overcrowding and having to eat, shower and toilet in communal areas (United Nations Office on Drugs and Crime, World Health Organization, UNAIDS and Office of the High Commissioner for Human Rights, 2020).
Such measures are congruent with the normative UN standards of detention, for example the United Nations Standard Minimum Rules for the Treatment of Prisoners (the Nelson Mandela Rules) (United Nations Office on Drugs and Crime, 2016), the United Nations Rules for the Treatment of Women Prisoners and Non-custodial Measures for Women Offenders (the Bangkok Rules) (United Nations Office on Drugs and Crime, 2010) and the United Nations Standard Minimum Rules for Non-custodial Measures (the Tokyo Rules) (UN, 1990), in addition to the African Charter on Human and Peoples' Rights (Organization of African Unity, 1981) and the non-binding Robben Island Guidelines for the Prohibition and Prevention of Torture in Africa (Niyizurugero and Lessène, 2008). At the global level, both before and since the COVID-19 pandemic, there is a wealth of evidence indicating continued health inequity of women in prisons, with their specific health needs routinely neglected and deprioritised. This is especially the case regarding their sexual and reproductive health (UNODC, 2009). Lack of access to menstrual health products (MHP) in prison, such as sanitary towels, is known as period poverty (IDPC, 2021; Penal Reform International, 2021). Our Viewpoint concerns the right to menstrual hygiene management (MHM) in detention settings, with a focus on the African context and specifically South Africa. Globally, of the 11.5 million people deprived of their liberty, 741,000 are women (Penal Reform International, 2021). Over 1 million are detained in Africa (World Prison Brief, 2022), and in South Africa women are a minority prison population, with 3,453 women incarcerated in the country (Department of Correctional Services [DCS], 2021; Van Hout and Wessels, 2021b). --- Menstruation health management: cultural dimensions, disparities and COVID-19 impacts Menstruation is an integral function of cis-women's, trans and gender-nonconforming people's health throughout their reproductive lifespan.
Defined as the management of menstrual blood through the safe and hygienic use, and disposal, of menstrual management materials (Kuhlmann et al., 2017), MHM has become an emerging public health endeavour affecting the approximately 50% of the world's population who menstruate (Crawford et al., 2019). According to the United Nations International Children's Emergency Fund (UNICEF), approximately 800 million women menstruate daily (UNICEF, 2015), and of the 1.8 billion females, non-binary people and transgender men who menstruate, millions are unable to manage their menses hygienically and at their own discretion due to broader socio-economic disparities (Sumpter and Torondel, 2013) juxtaposed against cultural misconceptions (Padmanabhanunni et al., 2018; Shannon et al., 2021; Yamakoshi et al., 2020) and taboos (Agyekum, 2002; O'Sullivan et al., 2007; Strassmann, 1992). Poor MHM is generally a consequence of poverty and deprivation (Bakibinga and Rukuba-Ngaiza, 2021; Hall, 2021; Rossouw and Ross, 2021). The impact of the COVID-19 pandemic on MHM remains largely unexplored (Spagnolo et al., 2020). Beyond the devastations of COVID-19, we suspect that millions of women around the world have suffered, and continue to suffer, an accelerated and untold erosion of basic human rights, bodily integrity and dignity due to the lack of access to adequate MHM (Ajari, 2020; Poague et al., 2022; Salim and Salim, 2021). The vulnerability of economically and socially at-risk girls, women, trans and gender-nonconforming people who menstruate is potentially heightened during COVID-19, as a scarcity of sanitary products and of adequate water, sanitation and hygiene facilities disproportionately hampers their agency in managing their menstrual health hygienically at their own discretion (MacKinnon and Bremshey, 2020; Obani, 2021; Poague et al., 2022).
Globally, the impact of poor MHM remains largely unknown (Sumpter and Torondel, 2013) due to the deeply historical and cultural construction of menstruation as an individual health concern framed within the private realm. As an individual and private concern, the solution for poor or inadequate MHM is framed as the responsibility of the individual, irrespective of socio-economic circumstances and despite very serious health consequences (Ajari, 2020; Carney, 2020; Harlow and Ephross, 1995; Torondel et al., 2018). Over the past decade, menstrual health has increasingly become a global public health concern (Sommer et al., 2015) that has been adopted as a human rights endeavour because of the social, political and economic disparities associated with MHM (Goldblatt and Steele, 2019). Understanding the harmful gendered effects that women endure during times of pandemic and crisis, the World Bank, UNICEF and WHO, together with other health and gender advocacy agencies, have sounded the alarm by issuing briefs and recommendations aimed at assisting governments in creating gender-conscientised health policies during the COVID-19 pandemic. Refrains such as Periods don't stop for pandemics (World Bank, 2020) and World can pause but periods cannot! (Arora, 2020) highlight the urgency needed in managing MHM, particularly within contexts already lacking sustainable resources. This is echoed by the World Bank (2020) and its movement to end period poverty and period stigma by 2030. However, slow progress in development efforts hampers MHM in middle- and low-income countries, in addition to poverty-stricken contexts in high-income countries. Some 75% of households in low- and middle-income countries have inadequate access to handwashing with soap (Eichelberger et al., 2021), which is salient both to MHM and to stopping the transmission of COVID-19. Progress within the African context, as in many other developing contexts, has been hampered.
--- Complexities of ensuring menstrual hygiene management in African prisons: pre and beyond COVID-19 Menstruation and other reproductive functions have historically been stigmatised as a mechanism of othering women, through their signification of difference between men and women (Frank, 2020; Van Hout and Crowley, 2021). Within the African context, social stigmatisation and cultural misunderstandings of menstruation have resulted in negative attitudes and experiences for women (Padmanabhanunni et al., 2018; Shannon et al., 2021). Such findings have been evidenced in Mali (Strassmann, 1992) and Ghana (Agyekum, 2002), as well as within the South African context (O'Sullivan et al., 2007; Padmanabhanunni et al., 2018). Framed as a "political silence" (Goldblatt and Steele, 2019, p. 294), menstruation in contexts outside detention facilities is hushed because of the social and cultural stigmas associated with it. Yet, much like women living on the outside, women within carceral contexts face issues of menstrual equity at the intersection of discrimination. Within detention spaces globally, women lack autonomy over their own bodies, as they are reliant on the state to provide for their basic MHM needs (Weiss-Wolf, 2020). Indeed, the Bangkok Rules acknowledge menstrual health, specifically in Rule 5 (United Nations Office on Drugs and Crime, 2010), which states that carceral centres are responsible for the provision of hygienic facilities and MHP, free of cost to women. Unfortunately, there is currently a lack of literature within the African and South African context that explores how the impact of social and cultural stigmatisation, as well as the lack of bodily autonomy, manifests itself in the menstruating experience within carceral facilities.
Even prior to the COVID-19 pandemic, the general and gendered health disparities faced by the incarcerated had largely escaped the priorities of global and African prison health agendas (Barberet and Jackson, 2017; Van Hout and Wessels, 2021b). For example, a systematic review exploring incarcerated women's experiences of carceral health care in sub-Saharan Africa (SSA) over the past two decades uncovered not only a dearth of research on incarcerated women's experiences but also violations of human rights, coupled with poor health-care provision, including a lack of prison system provision of sanitary products (Van Hout and Mhlanga-Gunda, 2018). The marginalised vulnerabilities of incarcerated women, though grossly unexplored, are expected to have heightened exponentially since the advent of the pandemic. Harsh and unexpected COVID-19 lockdown restrictions disrupting the supply of menstrual hygiene supplies, combined with pandemic-induced economic strain on families, have, we suspect, invariably affected the MHM of incarcerated women, leaving this already at-risk cohort at the mercy of overburdened state resources. In detention settings, deliberate or unintended restricted access to MHP and the inferior quality of MHP mean that incarcerated women may not have a sufficient supply of MHP per cycle (Carney, 2020). Yet, just as with the pre-pandemic silence, the absence of scholarly work that critically engages with menstruation in detention facilities, internationally and locally, is distressing. Equally concerning are the immense barriers to researchers' access to prisons in Africa (Mhlanga-Gunda et al., 2020). An opinion piece published in the Lancet, titled What are the greatest health challenges facing people who are incarcerated? We need to ask them, summarises the strides that need to be made in prioritising menstruation in detention facilities, when the only reference to gendered issues was listed as "gender-affirming care" (Berk et al., 2021, p. 703).
It is within such narratives that the gross realities of women's experience are neatly glossed over. The COVID-19 pandemic draws parallels with poor MHM. One a global catastrophic pandemic, the other a seemingly hidden gendered issue, they find commonality in that both are symptomatic of, and at the same time aggravated by, ailing health-care infrastructures. Both are an infringement on basic human rights to accessible and equitable health care, heightened by pre-existing global health disparities. It is, therefore, unsurprising that the COVID-19 pandemic may amplify the "politics of health and health provision" (Jefferson et al., 2021, p. 149) of incarcerated women who are already marginalised and silenced within their contexts of restraint. This holds significance within most carceral contexts, for example as documented in South Africa (Van Hout and Wessels, 2021b), where women remain a forgotten minority. --- South Africa: women in detention spaces and the right to menstrual health Prisons in SSA have seen an increase in the incarcerated female population in recent years (Penal Reform International, 2016; Van Hout and Mhlanga-Gunda, 2018; Walmsley, 2017). South Africa has one of the largest prison populations on the African continent (World Prison Brief, 2022), with the latest Department of Correctional Services (DCS) report for 2020/2021 indicating that there are currently 140,948 individuals incarcerated in South Africa. Of this total, 137,495 are men and 3,453 are women, a small percentage of 2.45% (DCS, 2021). As a signatory to the United Nations Standard Minimum Rules for Non-Custodial Measures, the DCS in South Africa follows the delegated guidelines and minimum standards for the provision of health-care services to the men and women remanded to its custody (DCS, 2016).
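The DCS 2020/2021 figures cited above are internally consistent, as a line of arithmetic confirms:

```python
# Consistency check of the DCS (2021) figures cited in the text.
men, women = 137_495, 3_453
total = men + women                           # reported total: 140,948
women_share = round(100 * women / total, 2)   # reported share: 2.45%
print(total, women_share)
```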
These rights and responsibilities are also protected through local regulations, including the Correctional Services Act (DCS, 1998) and the White Paper on Corrections in South Africa (DCS, 2005). Yet there are long-standing concerns surrounding the incarcerated population's well-being, including overcrowding, poor nutrition and deteriorating facilities, all of which impact the physical and mental health of incarcerated populations (Agboola, 2016; Ajari, 2020; Van Hout and Mhlanga-Gunda, 2018; Van Hout and Wessels, 2021b). There is a resounding silence within the South African context in highlighting and prioritising menstrual health equity within incarcerated contexts. Despite the advances that have been made in prioritising gender, MHM within correctional facilities remains largely unexplored within the South African context (Artz et al., 2012; Artz and Hoffman-Wanderer, 2017; du Preez, 2008; Haffejee et al., 2005; Hopkins, 2016; Luyt and du Preez, 2010). The veiled secrecy that envelops most carceral contexts in South Africa means that deliberate menstrual discrimination based on multiple intersectionalities goes unchallenged (Carney, 2020). As reported by Van Hout and Wessels (2021a), the long-standing and precarious situation of women in detention settings in South Africa since post-apartheid timeframes needs to be highlighted, with the visibility of women enhanced, particularly with regard to poor living conditions (including a lack of availability of menstrual products), reasonable and safe accommodation, and protection from custodial violence. It is therefore unsurprising that the COVID-19 pandemic would raise the alarm as a correctional health crisis, particularly when considering the devastation that HIV/AIDS and tuberculosis have wrought on the South African carceral community.
Despite this, media coverage and academic attention surrounding COVID-19 in corrections have been framed as gender-neutral, with incarcerated women all but invisible and the impact of the pandemic on their lives ignored (Ellis, 2020; Van Hout and Wessels, 2021b). The unique health needs of the incarcerated female population in an already-overburdened system that is overcrowded and unhygienic place women at great risk of having their health needs relegated and neglected. --- Gendered impacts of COVID-19 for incarcerated South African women's menstrual hygiene management Unfortunately, as the female population comprises only a fraction of the general incarcerated population in South Africa, there exists a research vacuum and narrative silence around the unique situation of incarcerated women, before and during the advent of the COVID-19 pandemic (Agboola, 2016; Mussell et al., 2020; Padmanabhanunni et al., 2018). To date, only one study has addressed the menstrual health narratives of a portion of the incarcerated in South Africa. In the context of a broader study of lived experience, Agboola (2016) explicates the narratives of 10 previously incarcerated women, who discussed the conditions of their carceral MHM. Their accounts corroborate previous findings from research in South African correctional facilities, where access to health-care services is limited, as are necessary general hygiene provisions such as soap and water, all of which is exacerbated by high levels of overcrowding. For women, the situation is far more dire. It is evident that the incarcerated female population have complex health needs, with disproportionate rates of underlying health conditions when compared with women in general, and that they consequently often have greater gender-specific, primary health-care needs than their male counterparts, a reality that is particularly evident with regard to menstruation (Agboola, 2016).
Both local (Gender, Health and Justice Research Unit, 2012) and international (Corston, 2007) research indicates that, on average, incarcerated women are issued two sanitary pads for each day that they are menstruating, at the state's cost. However, this has resulted in a policing of periods, where women were forced to provide evidence of soiled sanitary towels to correctional staff before replacements were issued (Agboola, 2016). Even in ordinary circumstances, incarcerated women's health-care needs necessitate unique undertakings in the male-dominated carceral environment, but when resources are diverted into emergency health provisions for COVID-19, it is not unlikely that access to reproductive health services behind bars will be impacted (Rope, 2020). Women have special hygiene requirements which correctional facility authorities are obliged to provide for, along with hygienic menstrual material disposal. Reports during the pandemic indicate that, globally, lockdown efforts have resulted in limited delivery of and access to sanitary products (Barnes, 2020; Sommer et al., 2020; UNICEF and UNFPA, 2020). Women in correctional facilities have had to go without sanitary products during COVID-19 crisis-management lockdowns, as MHP such as tampons or menstrual cups are often provided by external support networks like charities or family members, who are no longer able to visit since access to prisons by external visitors has been prohibited. Although tampons and other vital MHP may be available from the correctional commissary, they are often sold at inflated prices which can be cost-prohibitive (Ellis, 2020). "Family and friends visiting prisoners are in many ways the lifeblood of the prison, bringing not only human interaction and contact with the outside world but also resources such as cash, food, bedding, toiletries and so forth" (Muntingh, 2020, p. 5).
For vulnerable and marginalised women, including the incarcerated population, the pandemic crisis may result in menstruation becoming a time of deprivation and stigma when faced with shortages and reduced privacy under lockdown (UNICEF and UNFPA, 2020). --- A "new normal" and incarcerated women's agency in menstrual hygiene management As stated, within the detention space MHM becomes a public matter, rather than occupying its usual space in the private lives of women. Whereas prior to incarceration women were in sole care of their menstruation and menstrual symptoms, once in a correctional facility this fundamental aspect of women's experience becomes a public affair. Of course, this impacts their embodied agency within the constraints of their carceral surroundings, which form part of a penal system designed with the male body in mind, a space where women's bodies and needs are invisible (Bostock, 2020). The conceptualisation of such experiences is manifested in the term period poverty, which Bostock (2020) denotes as a form of biopower, where menstrual inequality in corrections and the restriction of sanitary products are used to gain control of women through their biology. In response to the social and physical distancing measures and lockdowns used to manage the COVID-19 pandemic, issues of carceral accountability and oversight increase, as do concerns with incarcerated women's agency, privacy, autonomy, hygiene and self-sufficiency. "The new restrictions allow for less accountability and more isolation than we have seen in decades" (Mussell et al., 2020, p. 5). To complicate matters further, incarcerated women usually come from marginalised and disadvantaged backgrounds characterised by histories of substance abuse, violence, and physical and sexual abuse, all of which exacerbate physical and mental health problems (Agboola, 2016; Parry, 2020; van den Bergh et al., 2011).
Even prior to the challenges imposed by the COVID-19 pandemic, incarcerated women found that monthly menstrual management, together with accompanying myths and taboos, led to high levels of menstrual distress, particularly prevalent within the South African cultural milieu (O'Sullivan et al., 2007; Padmanabhanunni et al., 2018; Scorgie et al., 2016). Many vulnerable women state that they lack an understanding of the menstrual cycle and are unable to function normally, feeling physically and mentally weaker during menstruation, experiencing issues with bodily cleanliness, feeling "dirty" during their menstrual period, as well as "vulnerable", as they believed it was a time of "openness" of the body with "a susceptibility to infection and illness" (Smith, 2009, p. 5). The sexual and disgust connotations of menstruation, coupled with its secretive demeanour, mean that poor menstrual management resources and misinformation make its monthly onset a fraught and anxious time for women. Therefore, raising awareness regarding menstruation and hygienic practices, a largely neglected area in terms of research, is imperative to dignified menstrual health practices for vulnerable women (Sumpter and Torondel, 2013). Addressing the persistence of shame and stigma regarding menstruation requires far more than the provision of sanitary products; it requires sustained effort and intervention to develop incarcerated women's self-esteem and agency concerning their bodies in order to improve their menstrual health practices (Geismar, 2018). Unfortunately, such bodily empowerment seems unlikely in a carceral environment in the grip of COVID-19, where basic health interventions of sanitation and social distancing are hampered by lacking resources and failing infrastructure.
Moreover, the withdrawal or lapse of incarcerated women's reproductive health care and its diversion into COVID-19 crisis health care treats menstruation as a commodity rather than a basic human right, further exacerbating period poverty in female correctional centres. The serious dearth of information on the experience of menstruation and of menstrual symptoms in the South African incarcerated community (Gender, Health and Justice Research Unit, 2012) necessitates academic interest and research to better understand the nature of their menstrual health management and its impact on their lives, both inside and outside correctional facilities. Even before the advent of COVID-19 there was an increasing need to understand the incarcerated community's MHM and period poverty, alongside studies concerning the unique experiences of transgender and non-binary menstruating people in corrections (Chrisler et al., 2016; Lane et al., 2021). "It is essential to understand the unique and diverse oppressions faced surrounding period poverty to ensure appropriate and proportionate activism, legislation and improvements for menstruating people in prisons" (Bostock, 2020, p. 7). South Africa was the first African country to adopt a constitution that explicitly prohibits discrimination on the basis of gender, sex and sexual orientation (amongst other categories) (Section 9 of the South African Constitution). The Equality Court judgement in September v Subramoney was the first of its kind in South Africa (and Africa) in tackling the equality rights of transgender prisoners and the rights to dignified detention and reasonable accommodation (Van Hout, 2022a). By analogy, this case could leverage greater rights assurances for menstruating women, and women in general, in South Africa's prisons.
Additionally, the 2020 judgement in Sonke Gender Justice NPC v President of the Republic of South Africa is of further relevance to the situation of women menstruating in prison; it held that section 7(2) of the Constitution required the State to take reasonable steps to protect the rights of incarcerated persons (Van Hout, 2020b). To this end, the COVID-19 pandemic offers an opportunity for the DCS to fully integrate an empathetic and rights-based approach more in line with the South African government's Department of Women's (2019) Sanitary Dignity Implementation Framework for the provision of sanitary dignity. Minister Ronald Lamola (Lamola, 2020) issued a press release assuring the United Nations that South Africa would adhere more closely to the Mandela Rules following the pandemic. If our new normal during and after the COVID-19 pandemic can be orientated towards reducing inequalities and increasing empowerment for women, particularly vulnerable women like those incarcerated, then MHM must be part of that conversation. Any new practices adopted in light of the pandemic should be sustainable and instituted long term, setting a precedent going forward and becoming entrenched practice (Prais, 2020). Although enabling every woman in South Africa to manage her menstruation safely and comfortably is not a simple undertaking, especially in the carceral environment, establishing menstrual health management as an actionable public health issue is imperative (Geismar, 2018). Such adopted practices and policies can do much to establish and maintain meaningful development around menstruation and empowerment in the post-COVID-19 era to come. --- Concluding remarks Our Viewpoint highlights the potential equality and basic human rights violations of menstruating women in South African prisons pre-COVID and beyond.
Extant jurisprudence can be leveraged to support strategic public litigation, along with various efforts to sensitise government, promote civil society activism and encourage further research to inform policy and practice which sufficiently upholds the rights of women. South Africa has ratified the Optional Protocol to the UN Convention Against Torture, and national preventive mechanisms are advised to fully consider inspections regarding menstrual management provisions in South Africa's prison system going forward. Structural inequalities in various contexts around the world have exacerbated COVID-19 and MHM disparities within historical contexts of deprivation. This has very real continuing health consequences for the girls, women, non-binary people and transgender men who lack access to the resources and facilities needed to safely manage their monthly cycle at their own discretion. MHM disparities require multisectoral collaboration between public health, legal, human rights and carceral contexts for menstrual equity and human rights issues to advance. It is essential for governments, big businesses and development organisations and projects to find innovative and cost-effective strategies, both for meeting the crisis response to the COVID-19 pandemic and for achieving a sustainable supply of MHM to those inside and outside carceral facilities. Within the prison context in South Africa, women face multiple layers of discrimination and punishment that draw attention to the historical discourses of correctional facilities as sites of punishment, surveillance and discipline. Too often, the voices of those most vulnerable are missing from commentaries and activism on menstrual health issues. There is a growing need for transparency within carceral facilities, which research can provide by exploring the lived experiences of women and corrections officers in managing MHM.
The COVID-19 pandemic presents unique challenges to access to carceral facilities that need to be confronted. Restricted research access to carceral facilities could mean that any inequitable and inhumane treatment of incarcerated women goes unopposed. Additionally, the gap in the menstrual health literature, particularly within the African context, means that the intersection of health disparities and racial discrimination that the COVID-19 pandemic has highlighted remains unknown, and therefore unchallenged, within carceral contexts, indicating the need for future research prioritisation. Finally, in this Viewpoint, we acknowledge that menstruation is not an exclusive feature of the female body, since non-binary people and transgender men may also menstruate. There is currently a punishing silence in the international and national literature on the lived menstrual experiences of non-binary people and transgender men both inside and outside of carceral facilities. The structural restrictions of their menstrual bodies go unchallenged in contexts where historical constructions of masculinity pervade. This is a necessary area of social, legal, ethical and research development in MHM. --- Further reading United Nations (1991), "United Nations standard minimum rules for non-custodial measures (the Tokyo rules)", United Nations (UN), 2 April, available at: www.unodc.org/pdf/criminal_justice/UN_Standard_Minimum_Rules_for_Non-custodial_Measures_Tokyo_Rules.pdf United Nations International Children's Emergency Fund and United Nations Population Fund (2020), "Periods in the pandemic: 9 things we need to know. COVID-19 is having a global impact on menstrual health and hygiene", UNICEF and UNFPA, 31 August, available at: www.unicef.org/coronavirus/covid-19-periods-in-pandemic-9-things-to-know About the authors Janice Kathleen Moodley is a Psychological Practitioner and Senior Lecturer in the Department of Psychology at the University of South Africa. 
Her research is critically orientated and challenges the discursive interactions between psychology, health, gender, racial inequalities and politics within the global South. Janice Kathleen Moodley is the corresponding author and can be contacted at: [email protected] Bianca Rochelle Parry has been a Lecturer and Postdoctoral Fellow in the Department of Psychology and the Chief Albert Luthuli Research Chair at the University of South Africa, Muckleneuk Campus, since 2016. Her main research focus is the lived experiences of marginalised communities in South African society, with a particular concentration on women and gender. This focus extends to her teaching interests, which include community psychology, qualitative research methodologies and online teaching methods, specifically within the correctional context.
Purpose - The menstrual health and menstrual hygiene management (MHM) of incarcerated women remains relatively low on the agenda of public health interventions globally, widening the inequitable access of incarcerated women to safe and readily available menstrual health products (MHP). The COVID-19 pandemic has adversely impacted the MHM gains made in various development sectors in the global North and South through its amplification of vulnerability for already at-risk populations. This is especially significant for developing countries such as South Africa, where the incarcerated female population is an often-forgotten minority. Design/methodology/approach - This Viewpoint highlights the ignominious silence of research and policy attention within the South African carceral context in addressing MHM. The ethical and political implications of such silences are unpacked by reviewing international and local literature that confronts issues of inequality and equitable access to MHP and MHM resources within incarcerated contexts. Findings - Structural inequalities in various contexts around the world have exacerbated COVID-19 and MHM disparities. Within the prison context in South Africa, women face multiple layers of discrimination and punishment that draw attention to the historical discourses of correctional facilities as sites of surveillance and discipline. Research limitations/implications - The authors acknowledge that while this Viewpoint is essential in raising awareness about gaps in the literature, it is not empirical in nature. Practical implications - The authors believe that this Viewpoint is essential in raising critical awareness of MHM in carceral facilities in South Africa. The authors hope to use this publication as the theoretical argument to pursue empirical research on MHM within carceral facilities in South Africa. 
The authors hope that this publication will provide the context for international and local funders to assist in the empirical research, which aims to roll out sustainable MHP to incarcerated women in South Africa. Social implications - The authors believe that this Viewpoint is the starting point in accelerating the rollout of sustainable MHP to incarcerated women in South Africa. These are women on the periphery of society who are in need of practical interventions. Publishing this Viewpoint would provide the team with the credibility to apply for international and national funding to roll out sustainable solutions. Originality/value - It is hoped that the gaps in the literature and the nodes for social and human rights activism highlighted within this Viewpoint establish the need for further participatory research, human rights advocacy and informed civic engagement to ensure the voices of these women and their basic human rights are upheld.
Introduction Education is generally regarded as a fundamental human right of all citizens across the globe. This is because it equips individuals with the knowledge, skills and training that can help them attain self-reliance in decision making and succeed in all spheres of life. Based on the provisions of Article 21A of the Constitution of the Federal Republic of Nigeria, the state shall provide free and compulsory education to all children from the age of six to 14 years as the state may determine by law [1], [2]. Besides, Section 18(1) states that the 'Government shall direct its policy towards ensuring that there are equal and adequate educational opportunities at all levels', while Section 18(3) provides that the 'Government shall strive to eradicate illiteracy and to this end government shall as and when practicable provide (a) free compulsory and universal primary education; (b) free secondary education; (c) free university education and (d) free adult literacy programme' [3]. Furthermore, Nigeria is party to major conventions geared towards bridging gender imbalance and protecting the rights of children. The Organization of African Unity (OAU) Charter declared that 'every child shall have the right to education and full realization of this right shall in particular ensure equal access to education in respect of males, females, gifted and disadvantaged children for all sections of the community' [4]. The provision of free education to citizens, especially children and women, was also concretized by the Convention on the Rights of the Child in 1991, which Nigeria, with the support of UNICEF (United Nations International Children's Emergency Fund), took bold steps to domesticate into national law. The bill was passed by the National Assembly in July 2003, and by September 2003 it was promulgated as the Child's Rights Act of 2003, after the assent of the president [3]. 
Despite the available laws and conventions cited above, many children, especially girls, find it extremely difficult to access free and quality education, particularly in the Northern part of Nigeria, due to poverty, cultural belief systems, restrictions, stereotyping and gender discrimination. For these reasons, the challenges affecting girl child education in Nigeria have become major concerns in academic discourse because of girls' seeming vulnerability amidst socio-cultural and economic barriers. The more the girl child is rendered illiterate, the more the society suffers. This is because, even without Universal Basic Education, the girl child may one day become a mother shouldered with the responsibility of training her children, and her lack of education can be catastrophic for the future of the society [5]. As Robert Limlim, the UNICEF deputy representative, puts it, 'educating girls is known to be the basis for sound economic and social development. Educated mothers will in turn educate their children, better care for their families and provide their children with adequate nutrition' [6]. In Nigeria, especially in the Muslim-dominated Northern States, girl child education conspicuously lags behind despite policies made to ensure equitable access to education. Maikudi [7] argues that the problem of girl child education in the Northern region can be traced back to the colonial era, when the British educational policy placed more emphasis on co-education. The system was not appealing to the predominantly Muslim Northern communities until 1929, when the first girls' school was established in the Northern Province. Even then, spending on girls' education was low. 
To a large extent, therefore, the introduction of formal education for the girl child in Northern Nigeria by the British at that time could be seen as a manifestation of their interest in controlling women's education within the context of minimal literacy and limited skills. Maikudi [7] further observes that formal education at that time also privileged domestic roles: the British envisaged training a class of Northern upper-class girls as housewives for the growing class of male Nigerian bureaucrats, in addition to socializing their children along the same lines. In this regard, therefore, the problem of girl child education stems largely from cultural and religious beliefs, the nature of the British co-education policy, as well as gender discrimination in Northern Nigeria. This paper therefore attempts a critical examination of the challenges militating against girl child education in Ungogo Local Government Area of Kano State, Nigeria. The choice of Ungogo is largely informed by the fact that it is one of the most populous Local Government Areas located within the Kano metropolis, with the largest number of girls in the State who cannot access Universal Basic Education. The specific objectives of the paper are: (1) to find out the roles played by religio-cultural beliefs in denying girls access to education; (2) to determine whether poverty leads to the deprivation of girl child education; and (3) to assess the problem of gender discrimination, which also leads to the denial of girl child education. --- Literature Review Previous findings reveal that girl child education has recently attracted the attention of scholars because of its importance to the development of society and the adverse effects on girls who are denied access to education. Fapohunda [8] observes that 'persistent presence of illiteracy among girl child creates unfavorable environment for meaningful development. 
Gender discrimination in terms of education exacerbates backwardness, especially in Northern Nigeria, by preventing the majority of females from obtaining rightful education needed to improve their prospects'. In addition, UNICEF [9] states that 'when girls are denied their full rights to education, it affects the society in its entirety, as no society is sure of its future when the girl child is denied her right to education'. On the other hand, Ojimadu [10] argues that the fundamental rights of a girl can only be developed through sound education, and that all other rights of the girl, be they economic, social or political, rest on catering for her right to education. Many other scholars submit that girl child education reduces social ills, including unemployment, disruption of family values, the spread of diseases and insecurity [11]. Furthermore, Oresile [12] maintains that there is a clear linkage between girls' education and the sustainable development of a country. This, according to him, is realized through their roles as future mothers and peace educators, as they inculcate in their children the norms, values and ethics of society. Maimuna [13] posits that education prepares the girl child to fit properly into different social roles in the society, as she acquires both mental and physical skills to develop her mindset and to contribute meaningfully to her society. In the same vein, Stephen [14] observes that the acquisition of education by the girl child lays the foundation for the socio-economic improvement of nations. The Federal Ministry of Women Affairs [15] also avers that 'educational attainment is no doubt the most fundamental prerequisite for empowering girls in all spheres of life'. This report makes it clear that without quality education, girls will be unable to participate and be represented in government. 
A broad range of empirical data also shows that girl child education reduces child mortality rates, because knowledge and awareness promote healthy and hygienic maternity. Based on the literature reviewed, it is clear that there is a direct linkage between girl child education and societal development. It is equally established that the denial of girl child education has adverse effects on the girl, which by extension paves the way for other societal problems. In spite of these efforts, the girl child remains in critical socio-economic and political conditions, which largely stem from several factors contributing to the backward state of her education, especially in Northern Nigeria. Despite the relevance of the literature reviewed, there is a dearth of sources highlighting the plight of girl child education at the micro level, especially in rural or semi-rural areas, which is a gap this study attempts to bridge. Although the colonial period arguably witnessed a lot of educational activity in Northern Nigeria, especially in the 1920s and 1940s, boys' education received greater attention. It was only from the 1930s that girl child education received attention in the Northern Province. On this basis, therefore, girl child education in Northern Nigeria was first hindered by unevenness in equal access to education irrespective of gender [16], [17], [18], [19]. Maikudi [7] establishes that the problem of girl child education in the Northern part of Nigeria in general could be traced back to the colonial era, when the British educational policy placed more emphasis on co-education. That system was, however, not appealing to the predominantly Muslim Northern communities until 1929, when the first girls' school was established in Northern Nigeria. Even with the establishment of the school, funding for girls' education was low. 
For this reason, Kurfi [5] and Dauda [20] also posit that the problem of girl child education in the Northern region largely stems from the introduction of Western education by the British colonial government, which laid emphasis on girls and boys attending the same schools. Apart from the stated religious dimension, Muslims of Northern Nigeria were culturally uncomfortable with the Western system of education, especially for the girl child, which they believed could bring harm to them. This development therefore served as a barrier to the smooth development and acceptance of girl child education in Northern Nigeria. Okpani [21] further concludes that the problem of girl child education has its roots in the skepticism held in the present Northern States of Nigeria about Western education, which was introduced by colonialists and Christian missionaries with emphasis on the attendance of both boys and girls. It should, however, be noted that the rejection of Western education by Northern Muslims emerged at the beginning of its introduction but was later replaced by majority acceptance. Even with the recent Boko Haram insurgency claiming to question the legitimacy of Western education, mainstream Muslims have not subscribed to its baseless ideology. --- Methodology This paper is built on both primary and secondary data. While the administration of questionnaires formed the primary aspect of the data, the secondary sources include published and unpublished works ranging from books and journal articles to theses and dissertations. The target population of the study constitutes youth between the ages of 15 and 35 years, both male and female, who reside in Ungogo Local Government Area. These categories of people were chosen because they fall within the youth age bracket and have firsthand information about the problem under study. 
--- Sample Size Due to time limitation and resource constraints, this study could not cover the total population and as such, the sample size is relatively small. A total of 120 respondents formed the sample size across five political wards that were purposively selected within the Local Government Area. --- Sampling Technique This paper utilized a multi-stage cluster sampling technique. The rationale for this sampling technique is to have equal representation of units. It is supported with a purposive sampling technique where necessary. Thus, the following stages were followed: Stage 1: The political wards within the Local Government Area were purposively selected. Stage 2: The researchers identified 6 locations from the political wards selected in Stage 1. The locations were selected using a purposive sampling technique augmented with a survey method. This selection was informed by the fact that these areas are the most populous within the selected wards. The 6 locations are Rijiyar Zaki, Ungogo, Panisau, Rimin Zakara, Kurna and Rimin Gata. Stage 3: 4 major streets were selected by the researchers in each of the 6 locations selected in Stage 2. Stage 4: 5 households were selected from each street selected in Stage 3. Stage 5: In this last stage, the researchers administered the instrument of data collection to one respondent from each household selected in Stage 4, amounting to a total of 120 respondents. --- Method of Data Collection Primary data was collected through the administration of questionnaires, while secondary data was largely gathered from the libraries at Bayero University, Kano and Ahmadu Bello University, Zaria, as well as from the internet. The primary source of data collection is also called firsthand data collection, in which a self-administered questionnaire was adopted and designed as the instrument of data collection. 
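The multi-stage plan described above determines the sample size arithmetically: 6 locations, 4 streets per location, 5 households per street, and one respondent per household. A minimal sketch of that arithmetic (illustrative only; the variable names are ours, and the one-respondent-per-household reading is inferred from the stated total of 120):

```python
# Sanity check of the multi-stage sample-size arithmetic described in the text.
locations = 6                 # Stage 2: locations drawn from the selected wards
streets_per_location = 4      # Stage 3: major streets per location
households_per_street = 5     # Stage 4: households per street
respondents_per_household = 1 # Stage 5: inferred from the stated total

sample_size = (locations * streets_per_location
               * households_per_street * respondents_per_household)
print(sample_size)  # 120, matching the reported total of 120 respondents
```

The product reproduces the stated sample size of 120, which is why the questionnaire count in the next section matches the household count exactly.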
In so doing, a total of 120 questionnaires was administered to obtain information from respondents. The questionnaire carried an introductory letter to the respondents, which clearly specified the intention of the researchers. The questionnaire comprised both closed- and open-ended questions, giving respondents the opportunity to express their opinions. The questionnaire was divided into 3 parts: part one contained the bio-data of the respondents, while parts two and three contained the main questions of the research. The responses obtained form the basis of the analysis presented thereafter. Published and unpublished works such as books, journal articles, theses and dissertations relevant to the research served as the secondary source of data collection. --- Method of Data Analysis A descriptive method of data analysis is used in this paper. Data obtained from the questionnaire are logically arranged using frequencies and tabular representation. --- Findings and Discussion --- Findings In this paper, a total of 120 questionnaires were administered, but only 105 were retrieved. In the course of the analysis, the questionnaire responses were critically examined. This section therefore presents the interpretation of the data collected and analysed in the course of the study. Table 1 shows that 90.5% of respondents agreed that poverty is a factor denying girl child education. When asked to explain their position, respondents argued that most families find themselves in poor conditions where they cannot afford to cater for their basic needs, let alone the education of their girls. They also added that the cost of education is high, which is not compatible with the poverty situation of most families within the Local Government Area. 
Respondents further opined that, when enrolling children in school, parents are expected to buy uniforms and learning materials and to pay transport fees, which they cannot afford due to their poor state. However, 9.5% of respondents did not agree that poverty is a major factor affecting girl child education. This indicates that poverty serves as a factor denying girl child education within the Local Government Area, as expressed by many respondents. From Table 2, it is clear that 83.8% of respondents agreed that poverty causes parents to send their girl child into street hawking. Respondents taking this position reasoned that poverty curbs parents' demand for education, so they send their girl child into street hawking to generate income for the family. In some instances, parents send their children into various low-paid work, such as domestic help or serving as nannies to younger children, especially in urban areas. 16.2% of respondents held that poverty does not cause parents to send their girl child into street hawking, because whatever is earned from street hawking is too little to sustain the family's needs, and as such poverty is not a causal factor for sending the girl child into street hawking. From these responses it can be deduced that poverty forces many parents to send their girl child into street hawking. Items such as kola-nuts, groundnuts, sachet ('pure') water and food are hawked. Table 3 indicates that 71.4% of respondents do not agree with the notion that street hawking helps sustain income for the family, stating that the income earned from street hawking is meagre and too little to sustain the family's needs. In some instances, girls return home with the items unsold. Others stated that parents have no choice but to send their girl child into street hawking because the conditions they find themselves in force them to do so. 
28.6% of respondents, however, agreed that street hawking by the girl child sustains income for the family, noting that some parents are unemployed or, in some cases, fathers have divorced mothers and the children are under the latter's care. As such, these families have no other source of income to cater for themselves. Most such families depend entirely on income earned from street hawking, which is used to provide food for the family and maintain a small working capital for the hawking business. Overall, this indicates that the income earned from street hawking does not adequately sustain the family; sometimes children go to bed without eating or drinking despite the street hawking activity. Table 4 indicates that 91.4% of respondents considered the cost of education to be a factor denying girl child education, stating that the cost of education nowadays is very high: parents are expected to pay school fees and to buy learning materials and uniforms, as well as to pay for transport and feeding. Most parents cannot afford to pay such fees continuously; even if they start paying, when payment falls due again the girl child is sent back home for non-payment of fees such as PTA fees and examination fees, or for lack of good uniforms or books. On the other hand, 8.6% of respondents held that the cost of education is not a factor denying girl child education. From the foregoing, it can be concluded that the cost of education is no doubt a factor denying girl child education in the community. Table 5 shows that 76.2% of respondents agreed that religio-cultural beliefs are directly linked to the denial of girl child education in Ungogo Local Government Area. When asked to explain their position, respondents argued that cultural practices such as early marriage serve as major barriers to girls' access to education in the Local Government Area. 
It has always been part of the people's tradition to marry girls out at an early age, and once they are married, they have no access to education. Most parents hold certain religio-cultural views about girl child education, which stem from their outright distrust of formal schooling because of its emphasis on co-education. They believe that co-education can affect the morality of the girl child. 23.8% of respondents do not agree that cultural belief is a factor leading to the denial of girl child education. These respondents argued that some harmful cultural practices, such as early marriage, are no longer practiced by many families, as parents cannot afford the arrangements for a ceremonial wedding because of the financial burden associated with it. From the above findings, it can be concluded that there is a clear link between religio-cultural beliefs and the denial of girl child education. This is because marriage is viewed as a protective mechanism against unwanted pregnancy, shielding girls' honor from potential shame. Table 6 shows that 96.1% of respondents maintained that gender discrimination is a factor affecting girl child education. This is because, while boys are competing for admission into the universities, girls are left behind struggling with primary or secondary education. More often than not, girls are married out and thus cannot continue with their education. On the other hand, 3.8% said that gender discrimination is not a factor, because in some instances girls attend school more than boys and perform better academically. Table 7 indicates that 39.04% of respondents believe that poverty is the major factor militating against girl child education. This is followed by 28.57% of respondents who consider cultural practices to be the reason denying girl child education, while 14.28% of respondents argue that the cost of education hinders girl child education. 
11.42% of respondents, however, consider gender discrimination to be the major problem affecting girl child education, and 6.66% of respondents consider low government effort to be the main challenge of girl child education in the community. Table 8 indicates that 33.33% of respondents are of the opinion that the government must be involved in order to tackle the problem of girl child education; although the government has made several efforts to curtail the problem, much remains to be done. 28.57% of respondents, on the other hand, argue that parents should be involved in tackling the problem of girl child education, while 14.28% of respondents believe that the problem can be reduced when community leaders and community-based organizations are involved, so that they can play a significant role in enlightenment and empowerment. Besides, 9.52% of respondents are of the view that non-governmental organizations should be involved in tackling the problem of girl child education. Hence, it can be deduced from the table that government should be the primary agent for addressing the problems of girl child education. --- Discussion Having presented the data in tabular form indicating the findings about the challenges of girl child education in Ungogo Local Government Area, it is apparent that the challenges facing girl child education in the community cannot be overemphasized. The research indicated that the denial of girl child education is linked to certain religio-cultural beliefs concerning co-education and early marriage. The research also revealed that, although only 9.52% of respondents agreed that girls marry between the ages of 17 and 19 years, the situation appears worse when such girls are married out without attaining a primary or secondary school certificate, as practiced by many families in the community. 
Many respondents are of the view that the emphasis on co-education by Western schools since their introduction by the British colonial government discouraged many parents from sending their girl child to school, simply because it is alien to their religious and cultural practices. Supporting these findings, Marope et al. [22] state that cultural restrictions militate against girl child education in Northern Nigeria, concluding that '...in some cultures, girls are restricted in the kind of role they can play, education inclusive'. Eresimadu [23], Okwara [24], Okpani [21], UNICEF [25] and Maimuna [13] identify three important cultural belief systems that militate against girl child education: early marriage, condemnation of co-education and preference for educating the male child. Furthermore, the research showed that poverty is a major factor affecting the state of girl child education in Ungogo Local Government Area of Kano State. It also revealed that, because of the failure to properly fund the girl child's schooling, many parents resort to sending girls into street hawking with the intention of sustaining the family economically. In some instances, girls are sent to engage in domestic help or to serve as nannies to younger children, especially in the urban areas. The meagre income generated from such activities, however, cannot sustain the family. Supporting this assertion, Okpani [21] and Ojimadu [10] argue that girls engage in street hawking to generate income for the family by selling foodstuffs on the street, while missing the opportunity to go to school. Meanwhile, Birmingham [26], Mamman [27], Ojimadu [10], Ikwen [28], Abolarin [29], Kurfi [5] and Maimuna [13] also state that poverty continually challenges the state of girl child education in Northern Nigeria. 
The research also showed that the cost of education serves as a major barrier to girl child education, considering that parents are expected to pay school fees, buy uniforms and learning materials (books, pens, pencils, etc.), and cover transport, feeding and examination costs. Many poor families cannot bear such costs and as such pay little attention to the education of their girls. This finding is supported by UNICEF [25], which argues that most parents do not consider the education of the girl child a priority because they have little or no disposable income to cover the cost of education. Moreover, this research found that gender discrimination affects the state of girl child education, as many parents prefer to educate their boys rather than their girls. Many girls are denied access to education by virtue of their gender and the common belief that at some point a girl is to be married out; this further explains parents' preference for educating boys over girls. In support of this, Afigbo [30] opines that girls' inadequate access to education is largely informed by the gender discrimination they face. In line with this finding, UNICEF concludes that more than 100 million children in Africa had no primary education, and of this number, 60 million were girls. The effect of this inequality leaves the girl child vulnerable and prone to abuse, sexual harassment and maternal mortality, which are directly related to the lack of quality education for girls. --- Conclusion This paper has revealed that the challenges of girl child education are still evident and continue to hamper girls' access to education. Factors such as religio-cultural beliefs, parental level of education and income play a significant role in determining the possibility of girls having access to education in Ungogo Local Government Area of Kano State. 
As critical as these two factors are, scholars pay little attention to them, tending to focus more on government policies towards girl child education. Though this paper has traced the origin of Northern Nigerian Muslims' abhorrence of Western education to the British colonial government's insistence on promoting co-education, and to the fact that the type of education it introduced was seen as Christian in both content and outlook, it is observed that poverty, religio-cultural beliefs and negligence on the part of the government further exacerbated the problems militating against girl child education in the region. In line with this current reality, this paper found that considerable effort has to be made to properly curb the problems of girl child education in Ungogo Local Government Area in particular and Northern Nigeria in general. These efforts are manifold in nature because the government, parents, community leaders, non-governmental organisations and the international community all have to be involved in order to address the problems within the shortest possible time.
Since the introduction of Western education to Northern Nigeria, especially in the 1920s, many Muslims in the region have found it objectionable, as it tampered with their religio-cultural values, including, for instance, through co-education. In light of this, this paper identifies and examines the major challenges affecting girl child education in Ungogo Local Government Area of Kano State, Nigeria. Using both primary and secondary sources augmented with qualitative data analysis, the researchers administered a total of 120 questionnaires across five purposively sampled political wards of Ungogo Local Government Area. Of the 120 questionnaires administered, 105 were retrieved, representing an 87.5% response rate. The data collected were analysed using descriptive statistics. Results revealed that religio-cultural reasons, poverty, lack of viable government educational policies and parental preference for educating the male child are the major factors curtailing the chances of the girl child having access to Western education in the area of study.
offer opportunities to sharpen our understanding of specific problems participatory approaches are confronted with, seen from an insider's point of view. At the same time, some contributions to this issue may stimulate discussion on the role and limitations of pTA in the light of 'outside' experiences, such as those against the backdrop of bottom-up civil engagement or participatory experiments in technology design. Taking a 'more relaxed' point of view may help redefine the role of pTA as one specific element in the wider context of technology governance. This does not mean that questions of legitimation or impact are of less importance in the future. They could, rather, open our eyes to new perspectives, such as moving away from 'purely' participatory events to more comprehensive approaches, with participation being one element among others. One of the case studies presented in this issue demonstrates that the role pTA is able to play within a specific political setting very much depends on the institutional arrangements and different national styles of policy-making. Other case studies, dealing with new procedural developments in the field, impressively show how practitioners of pTA try to react to upcoming requirements, overcome apparent problems and provide some valuable insights into the sometimes puzzling world of technology policy. The first paper by Thomas Saretzki reminds us that it is of decisive importance to distinguish between technology assessment and technology policy when legitimation problems of participatory approaches are at stake. In contrast to technology policy, the core function of any modern TA is to mediate between three institutionally and functionally differentiated systems: science, politics and the public.
According to Saretzki, legitimation problems indicate first of all that attempts to justify participation in a given case have not been entirely successful in the eyes of the relevant groups of sponsors, participants, organizers or observers. To deal with legitimation problems in a constructive way, Saretzki proposes the development of a multi-dimensional, self-reflective and self-critical approach to TA, which is able to serve as a system of reference for legitimating their own new roles, especially in the context of participatory procedures in TA. Leo Hennen responds to recent criticism regarding practical experiments with pTA. According to this strand of literature, pTA shows a number of crucial problems. In many cases, such public deliberation processes have only marginal impact on political decisions. They also run the risk of being instrumentalized by influential interest groups while showing serious deficits regarding the production of new and authentic layperson expertise. In reference to these main lines of reasoning, Hennen argues in his paper that these criticisms insufficiently take into account the context of participatory TA as an element of policy consulting. Given the specific nature of pTA as a strategy to stimulate public deliberation and collect attitudes, interests and patterns of argumentation used by laypersons, it is able to improve the responsiveness of the political system and to give a voice to perspectives that are not, or only poorly, represented in political debates and decision-making processes. Against the background of civil society engagement in the fields of biomedicine and nanotechnology, Peter Wehling explores the potential of so-called uninvited forms of participation and discusses possible consequences for more institutionalized formats of pTA.
Similar to several other authors, Wehling refers to recently discussed practical problems and structural limitations with invited forms of pTA and contrasts these experiences with interest-based civil society interventions by patient associations and environmental and consumer organizations. He shows how uninvited initiatives in science and technology build up democratic legitimacy and manage to gain impact on decision-making processes. Wehling comes up with a number of recommendations to rethink and improve existing pTA approaches and methods and discusses new strategies to combine invited and uninvited forms of participation. Based on two national case studies dealing with the governance of xenotransplantation in Switzerland and Austria, Erich Griessler explores the influence of structural conditions and national styles of policy-making on the role and effectiveness of pTA. Griessler shows that experiences with pTA differ fundamentally between the two countries. In Switzerland, the number of public dialogue exercises on xenotransplantation is much higher than in Austria and the possible impacts of these deliberations on policy-making seem to be much more effective. Griessler discusses a number of important similarities and differences regarding political institutions and practices of policy-making in both countries. He suggests that the most important factor for explaining the prominent role of pTA in Switzerland is the extraordinary veto power of the Swiss citizenry, which calls for dialogue formats to avoid potential resistance from the public. Michael Decker and Torsten Fleischer report on recent experiences with, as they call it, 'big style' participation in Germany. Both authors have been involved in a still-ongoing series of citizens' dialogues on future technologies initiated and led by the German Federal Ministry of Education and Research. At least in the German context, these dialogues are to be valued as a unique experiment. 
On the one hand, several thousand citizens will be involved in the whole procedure. On the other hand, the strong position of the ministry, which is responsible for the entire process and heavily involved in its planning, organization and communication, constitutes an unusual feature. In the paper, the authors offer some first-hand insights into the political background, associated expectations and practical restrictions such procedural innovations are confronted with. Based on first evaluations and internal reflections on the process, they tentatively conclude that the considerable efforts to guarantee a kind of statistical representativeness are still contested by participants as well as a variety of incumbent political actors. The next paper also deals with new methodological directions in the field of pTA. Niklas Gudowsky, Walter Peissl, Mahshid Sotoudeh and Ulrike Bechtold describe a recently developed method that allows for comprehensive participatory forward-looking activities. This method, called CIVISTI, brings together expert, stakeholder and lay knowledge in a well-balanced way, preparing long-term oriented recommendations for decision-making in issues related to science, technology and innovation. It comprises three phases. In an initial phase, the invited citizens produce future visions in a bottom-up process. Experts translate these visions into practical recommendations in a consecutive phase. Finally, the same groups of citizens validate and rank the outcome. The authors not only report on first experiences with this new approach, they also address a number of practical challenges and discuss some options for improvement. Diego Compagna draws our attention to the problems of translation between design and use in participatory technology development projects. His empirical material stems from a recently finished 3-year project on service robots in elderly care.
Using some analytical concepts taken from classical social constructivist approaches and actor-network theory, Compagna unrolls the project step by step and reflects on the experiences made in it. He addresses the scenarios developed by the designers, developers and future users involved as 'translation tools' and 'epistemic objects' that are able to mediate between diverse expectations and experiences. However, as the process continues, the scenarios gain a kind of agency and each participating group is forced to align itself to them. On a more general level, and with regard to similar situations in pTA exercises, Compagna concludes that participatory methods such as scenario exercises must be understood as active translators with the intrinsic ability to recompile and reconfigure the whole process in unexpected ways. In the final paper, Michael Zschiesche offers the opportunity to reflect on pTA in a similar way by providing insights from a related but quite different field: infrastructure projects. In Germany, formal public participation is required in authorization processes according to the Federal Immission Control Act for the approval of industrial facilities, as well as in the planning permission procedure for infrastructure projects. Empirical data on those approval procedures show that the right of the concerned publics to be involved is, in many cases, not made use of at all. In particular, procedures according to the Immission Control Act show extremely low rates of participation: only one out of three authorization processes is met by any public engagement. Based on secondary sources, Zschiesche also shows that, even in cases where public participation takes place, the actual influence on the outcome remains marginal.
To improve the formalized procedure in the future, the author discusses options to combine formal and informal methods, as widely used in pTA, and calls for participatory interventions at much earlier stages of a planning process. The various papers, hence, cover a wide range of positions and empirical case studies. They also allow for some tentative conclusions in line with recent scholarly discussion: As long as TA positions itself as a mediator between science, politics and the public, it has to cope with the multiplicity of participatory methods and strategies. In addition, it must be able to master specific qualities and the limitations of pTA as well as being prepared to adapt methods and methodologies to changing socio-political environments (Rask et al. 2012). Public discourses on emerging technologies and their possible consequences for society and the environment need not be restricted to policy advice as typically provided by TA institutions. Forms of civic expertise with a special focus on societal impacts may play a stronger role both in technology policy (Stirling 2008) and in technology design (Stewart and Hyysalo 2008). TA may profit from such outreach just as these other fields may profit from the procedural and methodological expertise TA has developed during the last 30 years. The papers in this special issue once more contribute to this stock of knowledge and clearly offer some fruitful ideas about promising future directions of pTA theory and practice. --- Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
Discussions on the role of participatory approaches in technology assessment and technology policy have a long history. While in the beginning this subject was handled mainly as a theoretical requirement for democratic governance of technology, active involvement of stakeholders and laypeople became popular in TA exercises throughout the 1980s. Since then, a variety of participatory TA (pTA) methods and strategies have been developed and widely used, raising further far-reaching expectations. It has been argued that participatory approaches might broaden and hence enrich the knowledge and value base in ongoing technological discourses and eventually improve the factual as well as democratic legitimacy of technology-related decisions (Joss and Bellucci 2002). Moreover, a stronger integration of diverse actors and stakeholders was linked to the promise of better socially embedded solutions, increased acceptance and enhanced diffusion of technology as well as technology policy. However, practical experiences with pTA have shown that under real-world conditions it is difficult to meet all these expectations (e.g. Abels and Bora 2004). Despite a continuing and widespread interest in pTA, empirical evidence and theoretical positions on the practical performance of pTA have remained ambiguous. The papers selected for this special issue refer to this ambiguity from different angles and aim to contribute to the ongoing discussion on theoretical foundations, as well as practical experiences and critical appraisals of various forms of pTA. Most ideas, experiences and findings covered by this collection were first presented and discussed at the yearly conference on technology assessment at the Austrian Academy of Sciences in 2011. In a similar vein, the papers in this special issue
Introduction Managing a spinal cord injury (SCI) is challenging even in usual circumstances. It is a medically complex condition that requires timely care, support, and diligent self-management to promote wellbeing and prevent serious secondary complications (SCs) [1-3]. Undoubtedly, system disruptions created by the COVID-19 pandemic have substantially exacerbated the challenges of living with SCI. This study explores the experiences and perspectives of people with SCI and critical stakeholders, to identify secondary complications, access concerns, and potential solutions in the context of the pandemic. Health systems have been experiencing severe stress as they redistribute resources to manage COVID-19 outbreaks [4, 5]. For people with SCI, this has curtailed routine healthcare, rehabilitation, and outpatient services, with earlier discharge from inpatient rehabilitation for people who are COVID-negative and suspended or temporarily reduced admissions [4, 6, 7]. The use of telemedicine/telerehabilitation and home care has increased to support people at home [4, 8], but further evidence is required to assess comparability with in-person consultations across a range of clinical interactions [9, 10]. Additionally, changes in service delivery and system capacity have negatively impacted the social and mental wellbeing of staff and the social contact between people with SCI, their families, and health professionals [4]. Unsurprisingly in this scenario, SCs are occurring in physical, psychosocial, and occupational domains for people living with SCI. This includes increased vulnerability to infection and respiratory complications [5, 7]; significantly decreased physical activity, including recreational and occupational pursuits [11]; and markedly increased spasticity, pain, and discomfort [7, 12, 13].
These SCs were attributed to pandemic-related social restrictions resulting in reduced walking, extended sitting in wheelchairs or confinement to bed, and insomnia-related pain or discomfort. Symptom reemergence and increased spasticity were also attributed to the postponement of treatments such as botulinum toxin type A injections [12]. In the psychosocial domain, lost access to personal supports such as family, personal networks, and formal support workers increased social isolation and complicated access to healthcare information [4-7, 13, 14]. Lower resilience and quality of life have also been reported, with increased depression and anxiety, particularly around accessing services [6, 14, 15]. In the occupational domain, social restrictions have reduced access to recreational activities [11]. It is also more difficult to access essential assistive technology, other necessary equipment, repairs, routine medical supplies (i.e., medications, protective consumables), groceries, and transport for healthcare appointments [5, 6, 14]. The financial concerns and impacts have also been substantial [14]. While Australia limited the spread of COVID-19 in the first two years of the pandemic through widespread lockdowns, COVID-19 mandates, leave payments (to enable COVID-positive workers to remain at home), and JobKeeper supports (a fortnightly wage subsidy designed to support the economy during the COVID-19 pandemic by helping to keep businesses trading and people employed), health and social care system functionality was still significantly compromised, impacting all members of the community including people with SCI and other disability. Since restrictions began to ease in late 2021, including the reopening of international borders in February 2022 (to vaccinated tourists and other visa holders), COVID-19 has spread rapidly. By July 2022, Australia had recorded 9,235,014 cases and 11,387 deaths [16].
Queensland initially minimised the spread of COVID-19 through border closures, strict isolation/quarantine mandates, societal restrictions, and lockdowns. The state only returned to a close-to-normal situation when it achieved a 90% vaccination rate [17]. Practical guidelines were published to protect the rights of Australians with disability under pandemic-related restricted access to health services, including mobility aids, communication options, visitor and family access, and involuntary hospital discharge [18]. Implementing some of these strategies potentially placed additional demand on already limited service resources, thus challenging service delivery for providers and recipients across primary, secondary, and tertiary healthcare sectors. Severe pandemic-related disruption to SCI services warrants investigation of (a) the personal impacts and how the disruptions are managed by people living with SCI, health professionals, and services; and (b) identification of system enhancements to better protect people with SCI and other disability from future pandemic waves or other causes of system disruption. This study is part of a larger program of research which examines the impact of health system stress caused by the COVID-19 pandemic on SCs and access to health and rehabilitation services by comparing people with SCI discharged prior to and during the pandemic in Queensland, Australia, using data linkage and survey data. The aim of this component of the research was to examine the perspectives of a sample of people with SCI and SCI expert stakeholders regarding disruptions in their access to health and rehabilitation services, the impact on SCs, and examples of problem solving and innovation in response to service disruption and personal impacts for people with SCI. It was assumed that study participants would report reduced health service capacity and increased SCs due to the pandemic, particularly in the first several months of the pandemic. --- Method 2.1. Design.
The present study utilised a multimethods qualitative design comprising a qualitative online survey of people living with SCI, and expert stakeholder forums (ESFs) with experienced SCI clinicians, as well as representatives from community-based SCI consumer organisations (including some who were people living with SCI) and other community services providing services to people with SCI, such as compensation agencies. Examining multiple perspectives enabled a comprehensive understanding of the impacts to be gained from all critical stakeholders in the SCI rehabilitation journey. The survey identified issues of importance to respondents living with SCI, and the forums enabled key issues to be explored in depth to generate insights of value to all concerned. The study setting was the Queensland Spinal Cord Injuries Service, which provides state-wide specialist SCI services along a life-long continuum of care comprising acute management and primary rehabilitation, outpatient follow-up, transitional and community rehabilitation, and outreach services. --- Participants. --- Data Analysis. Descriptive analyses were used to summarise the demographic characteristics of survey respondents. Qualitative content analysis [19] was used to summarise the information provided in the survey responses to the question topics. This enabled the development of a comprehensive and coherent summary of respondents' views regarding the topics of interest. Frequency distributions were used to provide an overview of responses, structured to align with the open-ended survey questions. The analysis was conducted independently by two team members (LB and CH), followed by a meeting to progress the findings. Minor differences were identified and resolved through discussion to ensure consistency.
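The frequency-distribution step described above (counts and percentages of coded survey responses) can be sketched in a few lines. This is a minimal illustration only; the response codes below are hypothetical examples, not the study's actual codebook or data.

```python
from collections import Counter

# Hypothetical codes assigned to open-ended survey responses during
# content analysis (illustrative labels, not the study's codebook).
responses = [
    "isolation", "anxiety", "isolation", "no impact",
    "isolation", "anxiety", "boredom", "isolation",
]

def frequency_distribution(codes):
    """Summarise coded responses as (count, percentage of respondents)."""
    counts = Counter(codes)
    total = len(codes)
    return {code: (n, round(100 * n / total)) for code, n in counts.items()}

summary = frequency_distribution(responses)
# Report codes from most to least frequent, in the paper's "n = x (y%)" style.
for code, (n, pct) in sorted(summary.items(), key=lambda kv: -kv[1][0]):
    print(f"{code}: n = {n} ({pct}%)")
```

Such a tally is how statements like "The majority (n = 24, 71%) identified isolation" are typically derived from coded responses; here each respondent contributes one code, whereas multi-code responses would need a respondent-level denominator.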
The forum transcripts were analysed thematically [20], following five key steps: familiarisation with the data, identifying a coding framework, indexing the data, charting to identify patterns, and mapping and interpretation. A framework approach was adopted to enable prespecified questions to be addressed [21, 22]. Thus, the key themes were structured deductively from the four question topics (disruptions; impacts; opportunities, challenges, and innovations; and implications for service delivery planning and advocacy) and inductively from the comments of participants. Two forum participants agreed to read the findings, and both confirmed that they accurately represented what was discussed. --- Results The survey was completed by 34 people with SCI (Table 1). No information was available regarding nonrespondents or their reasons for nonparticipation. The mean time since injury was approximately 20 years. A total of 16 SCI expert stakeholders participated in one of two forums, and all but two opted to attend in person. Participants comprised ten clinicians representing specialist inpatient and community SCI services, as well as representatives from three key consumer organisations (including two representatives with SCI) and one compensation agency. The duration of the ESFs was 90 minutes and 120 minutes, respectively. Due to the complementarity of findings from the online survey and ESFs, the results are presented as a single unified narrative regarding the impact of the COVID-19 pandemic on physical and mental wellbeing, access to services and supplies, and the use of workarounds to mitigate adversity. Table 2 summarises the survey results. Comments were selected from the survey responses and forum transcripts to shed light on the study results. In this section, "respondent" refers to a person who completed the survey (individual with SCI), and "participant" refers to a person who participated in the forum discussion. --- Impact on Physical and Psychosocial Wellbeing.
Unwanted physical impacts were a common concern for SCI survey respondents early in the pandemic, with only two (6%) reporting no physical impact. The most frequently reported problems were a lack of physiotherapy and no hydrotherapy, followed by lack of exercise and gym access, since facilities were shut down or respondents were confined to their home. Another commonly reported concern was muscle stiffness linked to reduced physical activity. The impact on physical wellbeing was discouraging, as one forum participant observed: "When you know what they're capable of and how hard they've worked to get there, and then you're watching that just go backwards and then losing independence and function...that was tough to watch" [ESF1, P1]. Another participant noted: "[Some] just stopped services altogether and we found some reluctant to go to doctors, to physios, to whatever services -other services they might need. And, yes, it's just been that when they get to a point where they absolutely have to go, they're dealing then with a pressure wound or something that is a whole lot worse than it needed to be, had they gone out early" [ESF1, P5]. A forum participant explained that "a lot of the community services that people relied on [were] gone almost, very quickly" [ESF2, P6]. Undesirable impacts on mental health were reported by most SCI survey respondents. The majority (n = 24, 71%) identified isolation as a mental health issue, and nine (26%) reported experiencing isolation and mental health issues (n = 8, 24%). Anxiety, worry, or stress were reported by a substantial minority (n = 14, 41%). Others identified fear, boredom, frustration, and a lack of concentration as concerns. Only six (18%) respondents reported no impact on their mental health. For example, one reported being "very bored [and] we all became depressed due to lack of human contact" [R2].
One individual living with SCI described impacts in terms of injury, "immense stress...pain, and exhaustion" [R12] for his wife, who had become his sole caregiver. In contrast, however, another respondent noted "liv[ing] rurally, so nothing much changed" [R20]. ESF participants had noted "significant increases for the majority in DASS [depression, anxiety, and stress] scores" [ESF1, P1], and reduced mental wellbeing. "We know that social connection is such a protective factor, and it completely dropped off for a lot of people...Mental health has been the big issue and big concern" [ESF1, P6]. One SCI survey respondent reported a relationship breakdown, and the partner of another was diagnosed with mental illness. A participant noted that it is not surprising that there would be significant impacts on family, given that access to professional support workers was often challenging and that even "getting support..." The number of people with SCI who experienced difficulties was marginally lower (n = 26, 76%) by a year later. Although several (n = 6, 18%) reported no impact on their usual services, almost half (n = 15, 44%) reported difficulty in general, and more than a third (n = 13, 38%) reported that no support was available or that it was difficult to find. A similar number (n = 12) reported restricted community access. Specialist SCI inpatient and ambulatory services were variably impacted through the course of the pandemic. Survey respondents noted that they were unable to "access...GP, hospital, and specialist care" [R17] or "attend my regular SCI rehabilitation sessions" [R22].
In the early stages, the need to create hospital inpatient capacity resulted in very rapid discharge planning for existing patients and consequently increased responsibility and stress on specialist SCI community services: "I think the early discharges, whether they are by the health system wanting people to be discharged, or people wanting to be discharged themselves, the fallout is just that we're seeing bigger problems at home" [ESF2, P6]. Later, as all community services were increasingly curtailed, discharge from inpatient services was often delayed, for example by the inability to get home modifications completed by community service providers in a timely manner: "If we're all on lockdown or there are restrictions, then you don't get your home mod[ifications] started. You can't discharge; you have a backlog for people coming in the front door if people can't get home" [ESF2, P2]. One of the forum participants noted that delayed discharges also meant that others needing specialist spinal rehabilitation were placed in "acute wards or other hospitals" [ESF2, P3]. Another remarked that a downside of being able to go out into the community again after a long-delayed discharge from the Spinal Injuries Unit was that, for people with SCI who "have been in a cocoon for six months, twelve months", this freedom also generated fear around "How vulnerable am I?" [ESF1, P4]. A minority of SCI survey respondents experienced a lack of flexibility in their interactions with government agencies such as Centrelink (which delivers income support payments and services) and the National Disability Insurance Scheme (NDIS), and one had to pay a fee for cancellation of support during a snap lockdown. Lack of access to therapy, surgery, consultations, and exercise remained a problem as the pandemic continued.
One SCI survey respondent reported difficulty accessing vaccinations, while two were concerned about challenges accessing clear information around the limits of COVID-19 vaccination, including efficacy. Another reported that it was helpful that supermarkets "had disabled-only times" [R18] and that access was easier with "less road traffic" [R18], but such benefits were not experienced by those "unable to access shopping and chemist" [R7]. In contrast, almost a quarter of respondents (n = 8 people with SCI, 24%) reported no problems with obtaining personal or home support or community access during the entire pandemic. --- Impact on Equipment, Consumables, and Repairs. For some SCI survey respondents, the cost of consumables was a problem (n = 6, 18%), and an equal number reported difficulties with delays and deliveries. Access to equipment and parts was difficult for almost a quarter of respondents (n = 8, 24%). For example, one respondent "waited 9 months for parts for an essential item (hoist) to be repaired" [R9], while others had problems "from almost day 1 with continence supplies" [R29] or were "unable to purchase examination gloves" or to attend "massage [fortnightly] and hydro[therapy] [twice a week]" [R21]. In contrast, six identified no problems with equipment or consumables. One ESF participant provided further insight into such disruptions: "[W]e had to get special permission for all of our suppliers to come in to provide equipment...we had to keep communicating and highlighting that as it impacted on people's rehab[ilitation] and potentially length of stay to ensure that the expectation was understood, that things couldn't move as quickly as we would normally move them" [ESF1, P9]. Another forum participant noted: "That's been a massive problem getting [equipment and aids], getting [allied health]...to the people, getting equipment to the people, ordering the equipment. And that goes with telehealth as well with getting -suddenly -iPads, computers, technology" [ESF2, P6].
A minority of SCI survey respondents (n = 6, 18%) identified no problems with technology. Despite its potential advantages, technology was problematic for several respondents who struggled with virtual communication. In contrast, some ESF participants "found telehealth to be a nice escalation pathway now" [ESF1, P2]. Positive impacts were also identified, including easier, disability-friendly access to shops, because "at times less road traffic allow[ed] safer short distance travel, easier access at shopping centres" [R16]. --- Secondary Complications. A substantial minority of survey respondents (n = 15 people with SCI, 44%) reported a diverse range of physical complications due to pandemic-related restricted access to services, including muscular deterioration, skin problems, weight gain, neurological problems, and hypertension. For example, "physiotherapy services shut and my legs ended up becoming very tight" [R28]. Consistent with these reports, an ESF participant described seeing: "[S]ituations where the person has not been able to come to get the acute treatment necessary, post-injury. They have been remotely hospitalised and very quickly they developed...UTIs, pressure injuries, sepsis...[They lack] expertise in managing SCI...By the time they get to the Spinal Injuries Unit...they've got to get extended medical treatment, which delays rehabilitation" [ESF2, P1]. The diversity of experience with COVID-19 or service-related complications is reinforced by the absence of complications for a substantial number of respondents (n = 13 people with SCI, 38%), in contrast with the SCI respondent who disclosed suicidal thoughts, and another who lost employment. For one respondent, the COVID-imposed isolation was intensified by "marriage breakdown and separation" [R17]. --- Solutions and Workarounds to Mitigate Negative Impact.
Almost a quarter of respondents (n = 8 people with SCI, 24%) were unable to identify any solutions, and three had resigned themselves to their circumstances. Six (18%) SCI respondents regarded technology as a solution, and a similar number reported that shopping less frequently and shopping online were solutions. Six respondents (18%) tried a self-directed approach to exercise, eight planned for and sourced alternative supports (24%), four restored or developed new work or home routines (12%), and seven (21%) used or developed new personal strategies including avoiding watching the news, relaxation techniques, and increased hobby activities. For some respondents, workarounds were not necessarily positive, as one "had to move into a tense living arrangement with my ex-partner" [R7], and another reported "confusion about access to medical services unrelated to my SCI" [R6], while acknowledging that "phone consultations and telehealth were most welcome" [R6]. Solutions were also constrained by undercurrents of financial concerns such as having "lost my job and hav[ing] no personal income" [R7] or concerns about personal choice regarding vaccination mandates, with "no supplies in our area" [R5] or "the mandate to have our health workforce vaccinated...and staff leaving" [R22]. ESF participants were more positive in recognising the opportunities and challenges of imposed change: "[It's] changing everybody's expectations. We've all come along the journey and had to learn we can't get everything we want now, although we need it. We can't get all the services we want; we can't get it the way we want it. I think the balance of learning that this is new, and we all have to accept it and also learning that everybody, emotionally and mentally, are heightened...balancing that as well has been difficult, but that includes everybody. That's the service users, the service providers who are also humans with a family in this pandemic" [ESF2, P6].
--- Discussion To our knowledge, this is one of the first Australian studies to examine the impact of COVID-19 pandemic-related health system stress on a sample of people with SCI. It also reveals how service providers and people living with SCI in the community have innovated in attempts to mitigate the impact of pandemic-related disruptions. The survey and forum results together contribute to our understanding of these impacts for people living with SCI in three ways. Firstly, people with SCI experienced service disruption, particularly to health and community services and personal supports. Secondly, the impacts of the disruptions were manifested as secondary complications in physical health and psychosocial domains. Lastly, people with SCI and those who support them accommodated and generated change to try to find solutions for ensuring access to care during the pandemic. These impacts were evident from the beginning of the pandemic and stayed relatively stable over the prolonged period created by widespread restrictions, lockdowns, and other pandemic responses. The scale, complexity, and duration of disruption to healthcare, rehabilitation, and community support services, and to the supply of equipment and consumables, has been unprecedented and is consistent with international research [4, 6, 23, 24], as is the finding of disrupted in-home personal support [5]. In addition to closed, delayed, or rationed services, participants were confronted with their own self-preservation instincts of not wanting to interact with services for fear of contracting COVID-19 [7]. Although intended to alleviate concerns about loss of face-to-face interactions, the rapid growth in telehealth consultations was found to generate new challenges related to unfamiliar or unreliable technology, as well as safety concerns, for example when undertaking physical therapy virtually [4, 5, 8].
Service users and providers were challenged by the scarcity and higher cost of supplies and by uncertain service accessibility and safety. Disruption led to multiple concurrent and intersecting impacts. Isolation was implicated as a key contributor to SCs in physical and psychosocial domains, particularly poor mental health. Supporting previous research [6, 14, 15], respondents reported increased anxiety, worry, and stress, and one disclosed suicidal ideation. The impact also extended to quality of life and wellbeing, with respondents reporting increased fear, boredom, frustration, poor concentration, relationship breakdowns, and increased burden on family members/support workers. It was almost inevitable that hard-won levels of physical health suffered because of pandemic-related disruption, with the reporting of increased muscle stiffness, loss of strength and mobility, increased pain/discomfort, and other medical complications, consistent with previous research [7, 11-13, 15]. These results all support the study assumptions. Coupled with the pressure of ongoing needs, the large number of impacts compelled people to respond. However, it is noteworthy that for more than three quarters of respondents, attempts to innovate were unsuccessful, which seems consistent with their reported frame of mind. Despite working in survival mode, providers continued to explore and test alternatives to ensure adequate support was reaching those who needed it, with some success, including increased use of telehealth. The limitations of technology as an alternative means of communication are not new, especially for inexperienced users [4]. Taken as a whole, the results of this study demonstrated resilience in people with SCI and in the health professionals who care for them. They were confronted by new large-scale challenges and at least attempted to resolve them in ways to preserve quality of life and progress with rehabilitation.
Nonetheless, some respondents clearly experienced unwanted physical and mental health impacts, reduced or disrupted access to usual services and community, increased SCs, and difficulty accessing support and equipment. In contrast to many natural disasters, the COVID-19 pandemic is a marathon, which adds endurance as a further need. This has implications for all concerned, including those who may have used alternative sources of support to mitigate or delay the development of SCs. --- Implications. Since some issues arising early in the pandemic failed to resolve, new and collaborative approaches are needed to manage complex issues that resist or overwhelm usual strategies. The pandemic provides a new opportunity to develop and evaluate crisis management plans and strategies, and to add them to standard resources as valuable action-ready back-up plans in any future disruptions to the supply chain and coordination of health care and support. If online solutions become standard options, work is needed to improve the low (less than 20%) effectiveness of technological solutions (i.e., telehealth) reported by the study participants and to shift the focus of education and training away from health professionals and service providers to the end users. We suggest that this population, who prize their independence and resilience [25, 26], would welcome training initiatives to improve their technological capacity. Training in telehealth and technology for delivering care and rehabilitation would require significant investment in materials, hardware, and training for both people living with SCI and healthcare personnel. While some face-to-face attendance is non-negotiable due to the hands-on nature of physical rehabilitation (i.e., physiotherapy and hydrotherapy), the integration of technology in health-related care could have great benefits for this population in many other aspects (i.e., general check-ups and mental health support), particularly those with limited mobility.
A key priority is to conserve and consolidate the team of specialised health care and support workers who carry the burden of bridging gaps between needs and resources. There is opportunity to build on the peer support and cross-pollination that exists within this network. Exchanging ideas can contribute practically to what is being learned. The composition of partnerships could be explored in a brief that enables new thinking to enter the process. In summary, forward planning is needed on multiple levels. State-wide planning for service delivery during prolonged periods of disruption is needed to safeguard the availability of resources as well as specialised healthcare and support workers able to care for vulnerable populations. Centralised planning is needed to enable consumables to be stockpiled, with simpler access to products and equipment. Decentralised planning is needed to proactively ensure local back-up plans are in place. Finally, multilevel advocacy and planning is needed to protect the capacity and availability of healthcare and support workers and to maintain cohesion between these workers and community organisations. The core implication is that we must work together to avoid the scenario in which vulnerable people who depend on specialised health care and support fail to receive them, only to inadvertently develop complications that isolate them further from the quality of life that is their right and increase the need for access to the very services which are restricted. --- Strengths and Limitations. This small study provides rich insights into the daily realities of pandemic-related disruptions to specialised services that are needed long-term by people with SCI. The results identify and explore the complexity of multiple interconnected factors that have affected the health and well-being of people with SCI.
They also highlight the motivation that generates important strategies to protect and sustain adequate care under unprecedented ongoing conditions that interrupt the timely delivery of needed services. These initiatives reveal exceptional use of human characteristics such as resilience, autonomy, and resourcefulness in seeking to close gaps that can lead to adversity for people with SCI. However, the study is limited by convenience sampling and small sample sizes, which may have resulted in some potential bias. For example, the proportion of female respondents does not match the gender distribution in the SCI population. Additionally, the timing of the study meant that COVID-19 had not yet spread widely throughout the community due to the pandemic-related restrictions and vaccination mandates. Therefore, the findings may relate to the impact of the lockdown restrictions themselves rather than the impact of the rapid spread of the virus. Data saturation is unlikely to have been reached, although there were common themes from both ESFs. Therefore, it is important to note that these study results provide insights from one small population of people with SCI who are linked to services provided by one tertiary hospital and the network of community services that continue to support people with SCI beyond discharge from hospital. While some results will be generalisable, the experiences of provider and user groups in other geographic settings and health systems may vary, leading to different implications. --- Conclusions This multilevel, multimethods, qualitative study provides valuable insights that a survey or single-level qualitative inquiry alone could not provide. The results present the nature of pandemic-related disruption, its impact, solutions, and implications, which may inform future rehabilitation practice and research in the study setting and elsewhere.
While the research was conducted during the early stages of the pandemic in Queensland, Australia, when COVID-19 cases were relatively low, future research should examine the disruptions and their implications now that lockdowns have ended, almost all pandemic-related restrictions have been lifted, and COVID-19 is widespread in the community, to ascertain whether the impacts vary in type or significance. --- Data Availability The de-identified survey and forum data used to support the findings of this study are available upon request from Prof Timothy Geraghty ([email protected]). --- Appendix --- A. Online Survey Demographic Questions --- Opportunities and challenges of alternative modes of service delivery and personal support Thinking now about alternative ways to provide services and support. (a) For people who depend on services and support? (b) For people who care for them? (c) For people who provide services and support? --- Other points That brings us to the end of our prepared questions, but there may be other issues that need to be discussed. (7) Have we missed anything? --- Conflicts of Interest The authors declare that they have no conflicts of interest.
As part of a larger study examining the perceived impacts of health system stress in Queensland, Australia, caused by the COVID-19 pandemic, this study explored the experiences and perspectives of a sample of people with spinal cord injury (SCI) and critical stakeholders to identify secondary complications, access concerns, and potential solutions in the context of the pandemic. This study utilised a multimethods qualitative design. Thirty-four people with SCI completed an online survey between August and November 2021, recruited from an online Spinal Life Australia Peer Support Group. Sixteen SCI expert stakeholders, recruited from the Queensland Spinal Cord Injuries Services, consumer support organisations, and funding agencies, participated in one of two expert stakeholder forums in September 2021, focusing on impacts of the pandemic on the services they provided. Survey and forum results were analysed thematically. Results highlighted service disruption wherein people with SCI faced difficulty accessing health and community services (including rehabilitation) and personal supports. Reduced access led to secondary complications in physical health, psychosocial, and occupational domains. Solutions for safeguarding access to care, including action-ready back-up plans, effective technology and training, collaboration of service networks, and forward planning for system disruption, consumables access, staff support, and advocacy, are required to best support vulnerable populations and the supporting staff in times of crisis. In conclusion, COVID-19 disrupted access to specialist SCI and mainstream health, rehabilitation, and social care services, resulting in functional decline and physical and psychosocial complications. While people with SCI and their service providers attempted to innovate and solve problems to overcome service access barriers, this is not possible in all situations.
Improved planning and preparation for future system disruptions mitigate risks and better protect vulnerable populations and service providers in times of severe system stress.
Introduction Continuing care retirement communities (CCRCs) provide a variety of residential options for older adults, offering a unique setting with a range of services that are responsive to changing care needs as one ages. Since the 1990s, there has been rapid growth in the construction of CCRCs, and there are now approximately 1900 nationwide (Zarem, 2010). CCRCs represent a unique setting for aging-in-place (Resnick, 2003b) and provide residents the ability to stay at one facility even as their health needs change (Shippee, 2009). These facilities offer a single source for long-term care needs, including independent housing, assisted living, and nursing services (American Association of Retired Persons, 2013). With convenient access to alcohol, drinking may be commonplace for the majority of residents of CCRCs (Resnick, 2003a). Yet, even with the rapid growth of CCRCs in recent years, little research has focused on alcohol use in these settings. Instead, much of the extant research has focused on residential retirement communities, such as Leisure World (Adams, 1995; Paganini-Hill, Kawas, & Corrada, 2007), rather than CCRCs. In some instances, the quantity and frequency of drinking were found to be higher in these semi-structured communities than within general population-based samples (e.g. Adams, 1996). Other research has focused on assisted living programs; for example, Castle, Wagner, Ferguson-Rome, Smith, and Handler (2011) surveyed nurses' aides who worked in assisted living programs in Pennsylvania. The aides reported that they believed that a majority of residents drank alcohol and that 34% drank daily. Aides in the study believed that 28% of residents 'made poor choices for alcohol consumption' and 11% had 'alcohol abuse problems'. Although study findings are somewhat limited in their reliability due to collateral reporting and poorly defined measures of alcohol use, they remain compelling.
Further investigation of alcohol use within different types of retirement communities is needed to better understand whether there are unique patterns of and motives for alcohol use, enabling optimal design of interventions for these settings. --- Alcohol and health among older adults To understand the importance of alcohol use in CCRCs, it is important to recognize the relationship between alcohol use and aging. Alcohol consumption tends to decrease as people age (Moore et al., 2005). However, compared to younger adults, older adults may be at higher risk even while consuming less alcohol because they have higher blood alcohol levels for a given dose of alcohol and have increased brain sensitivity to the effects of alcohol (Vestal et al., 1977). Because of these risk factors, recommended drinking limits for persons aged 65 years and older are lower than for younger individuals; guidelines suggest no more than seven drinks per week and no more than three drinks on a given day (National Institute on Alcohol Abuse and Alcoholism [NIAAA], 2010). Individuals who cross that threshold are considered 'at-risk'. Older adults also have greater medical comorbidity and take more medications that may increase risks associated with alcohol use (Moore, Whiteman, & Ward, 2007) compared with other age groups. Using this broader definition of risk (i.e. consumption and comorbidities and medications), Moore et al. (2006) identified 18% of men and 5% of women (age 60+) in a nationally representative sample as at-risk drinkers. Conversely, there are known health benefits of drinking among individuals who do not drink heavily and for whom alcohol is not contraindicated. Low to moderate use of alcohol can lead to positive health outcomes related to cardiovascular disease (Corrao, Rubbiati, Bagnardi, Zambon, & Poikolainen, 2000), cognitive functioning (Stott et al., 2008), and mortality (McCaul et al., 2010).
Alcohol use at moderate levels is also linked to decreased functional impairment for older adults (Karlamangla et al., 2009). Although research has focused on the health effects of alcohol use among older drinkers, less is known about alcohol use among older adult residents of CCRCs and similar independent living settings. More in-depth approaches are necessary to investigate drinking among older adults in these living situations. Psychosocial factors such as drinking motives may be important as these may influence the extent to which alcohol use is an unhealthy response to psychosocial issues such as depression. Conversely, alcohol use may be important as a means of socialization, a core component of successful aging (Depp & Jeste, 2006). --- Older adults, drinking motives, and affective states Drinking motives theory focuses on proximal reasons people drink (Cooper, 1994), which may help us understand alcohol consumption among CCRC residents. These motives can be categorized as a positive reinforcement, such as drinking for social and enhancement reasons, or as a negative reinforcement, such as coping and conformity reasons (Cooper, 1994). These motives are seen as a result of the direct pharmacological effects of drinking and/or the 'instrumental' effects of drinking (e.g. social conformity or social enhancement) (Cox & Klinger, 2004). If individuals have expectations about the effects of drinking, then their motives will reflect those expectancies. For instance, if alcohol is perceived as a method of decreasing tension, an individual will drink to reduce tension. Alternatively, beliefs about the enhancement or social effects of alcohol (e.g. alcohol facilitates socializing) will be consistent with drinking motives focused on attaining positive experiences. Although much of the research on drinking motives has focused on adolescents and young adults, Cooper's theory provides a broader conceptualization of the proximal factors associated with alcohol use.
In drinking motives theory, negative affective states are central to understanding alcohol use. Alcohol use among older adults is theorized as a means of coping with painful life experiences and other forms of psychological distress (Folkman, Bernstein, & Lazarus, 1987). Overall, findings in this area vary by the cause of the affective state (Glass, Prigerson, Kasl, & Mendes de Leon, 1995), one's coping repertoire (Bacharach, Bamberger, Sonnenstuhl, & Vashdi, 2008), drinking history, and measurement of alcohol use (Sacco, Bucholz, & Harrington, 2014). Much less attention has been focused on positive reinforcement motives and drinking among older adults or the notion that alcohol use among older drinkers is motivated by social or enhancement motives rather than coping motives. Using drinking motives theory as a conceptual framework, we explored alcohol use among older adults in a CCRC. First, we investigated relations between drinking motives (e.g. social) and context of drinking, such as whether a person drank alone or drank outside of their home. Second, we hypothesized that negative affect (as both time-invariant and time-varying covariates) and coping motives (as a time-invariant covariate) would be associated with increased drinking and that positive mood and social motives would be associated with lower levels of consumption. In this study, drinking motives are stable characteristics of the individual that may influence drinking habits. Negative and positive affect are conceptualized as time-varying factors that impact the likelihood that one will consume alcohol. Together we conceived of drinking motives interacting with affective states to influence consumption, such as individuals with high coping motives being particularly likely to drink as a result of negative affectivity and, conversely, people with social motives being more likely to drink for social or enhancement reasons.
We endeavored to explore these associations of drinking motives in light of the dynamic nature of mood and affect by measuring daily variations in affective states. --- Method Study design and sample This was a descriptive pilot study conducted at one CCRC located in the Washington, DC suburban area with participants who resided within the independent living level of care. The CCRC has more than 2500 residents with most (88%) in independent living, 8% in assisted living, and 4% in a nursing home; there are multiple venues where alcohol is served. Participants were recruited for this study via flyers, pamphlets, and informational videos. Inclusion criteria included being a current drinker (defined as having an alcoholic beverage within the last two weeks), residing independently within the CCRC, English fluency, and the ability to communicate over the telephone. We focused on independent living as residents at this level of care likely had less disability and greater access to alcohol. Individuals were excluded from the study if they displayed clinically significant cognitive decline as measured by the Mini-Cog Screen (Borson, Scanlan, Chen, & Ganguli, 2003). Of the 81 people who expressed an interest in participating, 77 (95%) were eligible. Among eligible participants, 72 (89%) consented to participate, 3 refused, and 2 were lost to follow up before consenting to participate; 71 (99%) of those who consented to participate completed all aspects of the study. One individual did not complete the eight days, reporting that the protocol was too burdensome. --- Procedures Data were collected from participants in three phases by research assistants: (1) an initial face-to-face interview on day 1, (2) daily surveys administered via telephone on days 2-8, and (3) a final daily survey and telephone interview on day 9 (see Figure 1 for the measures at each phase). 
The initial face-to-face interview consisted of a Mini-Cog test used for eligibility (Borson et al., 2003), the consent process, and a structured survey instrument. Participants were called every day for eight days beginning the day after the initial interview. During these telephone calls, participants were surveyed about their activities, emotions, and drinking behaviors from the day previous to the call. The final telephone interview included questions about drinking motives, at-risk use of alcohol, and alcohol history. We opted for one-day retrospective phone calls based on pilot research conducted at a different CCRC, which found that handheld devices and written diaries were perceived as too burdensome by individuals in this age group and that a morning phone call asking about the previous day was the most feasible option for older adults (Sacco, Smith, Harrington, Svoboda, & Resnick, in press). Four research assistants and the lead author collected daily diary information in a total of 569 phone contacts. Two doctoral level research assistants were responsible for 70% of the daily phone contacts, with the lead author personally making 20% of calls, and two master's level trainees conducting the final 10%. All interviewers were trained by the primary author on scripted phone surveys and in-person interviews. The lead author observed each research assistant conducting in-person and phone-based interviews on multiple occasions. --- Initial face-to-face interview measures Sociodemographic variables included education, age (in years), gender (0 = female, 1 = male), marital status, and length of residence (in years) at the CCRC. The SF-12v2® (McHorney, Ware, & Raczek, 1993; Ware & Sherbourne, 1992) was used as a general health screening tool to measure dimensions of health disability over the past four weeks.
The SF-12v2® is designed to yield a population-based norm of 50 on a scale of 1-100 with a standard deviation of 10 and contains two major subscales, the Physical Component Scale (PCS) and the Mental Component Scale (MCS). The SF-12v2®, drawn from the Medical Outcomes Study, is a valid and reliable measure of health status for older adults (Resnick & Nahm, 2001). --- Depressive symptoms. The Patient Health Questionnaire-9 (PHQ-9; Kroenke, Spitzer, & Williams, 2001) was used to measure the presence and severity of depressive symptoms. The PHQ-9 contains nine items that ask about the frequency of DSM-IV (Diagnostic and Statistical Manual of Mental Disorders: Edition IV)-based depression symptoms over the past two weeks. Response options range from 0 (not at all) to 3 (nearly every day). Levels of depressive symptoms can be derived from the sum of the nine items: minimal symptoms (0-4), mild symptoms (5-9), moderate symptoms (10-14), moderately severe symptoms (15-19), and severe symptoms (20+). The PHQ-9 displays good reliability and is a valid measure of depressive symptoms among older adults in primary care (Phelan et al., 2010). Internal consistency for the current study was acceptable (α = .73). --- Daily telephone call measures Alcohol consumption. Participants were queried regarding drink consumption on the previous day using a standard drink graphic from the NIAAA (2010) provided to them during the in-person interview. In the United States, a standard drink is 0.6 ounces or 14 grams of pure alcohol. This is roughly equivalent to 12 ounces of beer, 8-9 ounces of malt liquor, 4-6 ounces of wine, 3-4 ounces of fortified wine, 2-3 ounces of liqueur or aperitif, or 1.5 ounces of brandy or spirits (National Institute on Alcohol Abuse and Alcoholism, 2005). They were also asked about where they were when they drank (i.e. in their home, somewhere else in the CCRC, or in the larger community) and whether they drank alone.
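The standard-drink arithmetic behind the equivalences above can be illustrated with a short sketch. This is not part of the study protocol; the function name and example beverages are hypothetical, and only the 0.6 fl oz NIAAA definition comes from the text.

```python
# Illustrative sketch of the NIAAA standard-drink arithmetic cited above:
# one US standard drink contains 0.6 fl oz (about 14 g) of pure alcohol.
# The function name and example beverages are hypothetical.

PURE_ALCOHOL_OZ_PER_DRINK = 0.6  # NIAAA definition of a standard drink

def standard_drinks(volume_oz: float, abv: float) -> float:
    """Convert a beverage volume (fl oz) and alcohol-by-volume (0-1)
    into the equivalent number of US standard drinks."""
    return (volume_oz * abv) / PURE_ALCOHOL_OZ_PER_DRINK

# Consistent with the equivalences listed above, each prints 1.0:
print(round(standard_drinks(12, 0.05), 2))   # 12 oz beer at 5% ABV
print(round(standard_drinks(5, 0.12), 2))    # 5 oz wine at 12% ABV
print(round(standard_drinks(1.5, 0.40), 2))  # 1.5 oz spirits at 40% ABV
```

Framing a day's consumption in standard drinks, as the NIAAA graphic does for participants, is what allows beverages of very different strengths to be summed into a single daily count.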
Positive and negative affect. The Positive and Negative Affect Schedule (PANAS) Short-Form (Mackinnon et al., 1999; Watson, Clark, & Tellegen, 1988) was administered daily. The PANAS Short-Form scale is made up of 10 items, including 5 positive (e.g. excited) and 5 negative (e.g. scared) adjectives that represent dimensions of subjective wellbeing (Kercher, 1992). The respondents rated their level on these items from 1 (not at all or very slightly) to 5 (extremely) with the previous day as the time-frame. Scores were averaged for both the positive and negative subscales. Internal consistency values for this study were acceptable at α = .70 for positive affect and α = .76 for negative affect. Following Curran and Bauer (2011), PANAS positive and negative affect scales were recoded to create individual mean positive and negative affect scores and deviation scores from each person's mean on each day. The within-person variable represents the level of variation from his or her average positive and negative affect each day. The between-person variable quantifies each individual's mean level of positive and negative affect across the eight days data were collected. --- Final telephone interview measures Drinking motives. Drinking motives were assessed using the Drinking Motives Questionnaire Revised Short-Form (DMQ-R SF; Kuntsche & Kuntsche, 2009) based on the measure originally developed by Cooper (1994). The DMQ measures four types of drinking motives: enhancement, social, conformity, and coping. Enhancement motives refer to drinking to enhance positive mood and social motives refer to drinking for social reasons. Coping refers to drinking to manage negative emotional states and conformity motives are focused on drinking to fit in with a specific group. In the DMQ, items relate to the frequency of drinking for specific reasons (e.g. 'how often do you drink to cheer you up when you are in a bad mood') over a 12-month timeframe.
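The Curran and Bauer (2011) recoding of the daily PANAS scores described in the affect measure above (a person-mean between-person component plus daily within-person deviations) can be sketched as follows. The data and column names are hypothetical; the study's actual variables are not reproduced here.

```python
# Sketch of the within/between decomposition applied to daily affect
# scores: each day's score is split into the person's mean across the
# study (between-person) and that day's deviation from it (within-person).
# Data and column names are hypothetical.
import pandas as pd

daily = pd.DataFrame({
    "person": [1, 1, 1, 2, 2, 2],
    "day":    [1, 2, 3, 1, 2, 3],
    "pos_affect": [3.0, 4.0, 5.0, 2.0, 2.0, 2.0],
})

# Between-person component: each individual's mean across observed days.
daily["pos_between"] = daily.groupby("person")["pos_affect"].transform("mean")

# Within-person component: the daily deviation from the person's own mean.
daily["pos_within"] = daily["pos_affect"] - daily["pos_between"]

print(daily[["person", "day", "pos_between", "pos_within"]])
```

The same transformation would apply to the negative affect subscale; the two derived columns can then enter a regression as separate between-person and within-person predictors, which is what allows day-to-day mood fluctuation to be distinguished from stable individual differences.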
In the short form of the DMQ, 12 questions are asked, with 3 response options (0 = never, 1 = sometimes, 2 = almost always). We created scores using the mean values for each subscale. The DMQ measure displays acceptable validity in older adults (Gilson et al., 2013). Internal consistency for the coping and social subscales was acceptable in this study at .74 and .79, respectively. However, internal consistency reliability for the enhancement and conformity subscales was problematic in this study, with values well below the acceptable range for Cronbach's α (.48 and .51, respectively); therefore, these subscales were not included in analyses. At-risk and unhealthy alcohol use. The Alcohol Use Disorders Identification Test (AUDIT) (Babor, Higgins-Biddle, Saunders, & Monteiro, 2001) and the Comorbidity-Alcohol Risk Evaluation Tool (CARET) (Moore, Beck, Babor, Hays, & Reuben, 2002) were used to screen for unhealthy drinking measured over a 12-month timeframe. The AUDIT is a 10-item measure designed to identify hazardous and harmful drinking in the general population (Allen, Litten, Fertig, & Babor, 1997; Reinert & Allen, 2002); a score of 8 or higher on the AUDIT indicates hazardous or at-risk use. The AUDIT is a broad alcohol screening measure designed for general population use. Conversely, the CARET was used to assess areas of unhealthy alcohol use specific to older drinkers. The CARET includes quantity and frequency variables and other indicators from the AUDIT, but goes beyond them by assessing for alcohol consumption along with comorbid medical conditions, alcohol use with medications that are contraindicated, exceeding recommended consumption guidelines specific to older adults, and driving after drinking. The CARET, like the AUDIT, measures at-risk drinking, but evaluates consumption and health specific to older adults (see Table 1; Barnes et al., 2010).
Because the CARET measures drinking risks across a continuum of use and sets guidelines that are elder specific, it is an appropriate measure to use specifically with CCRC residents. Four dichotomous indicators of alcohol related risk were derived from this measure: exceeding alcohol consumption guidelines, hazardous alcohol and medication co-use, hazardous alcohol use with comorbidities, and driving while under the influence. --- Data analysis Data analysis was conducted in three steps. First, univariate and bivariate statistics were generated to describe the sample sociodemographic and person level drinking characteristics. Next, we analyzed the context of drinking, specifically drinking alone and drinking outside one's home (defined as somewhere else in the CCRC or in the community). Two logistic regression models were used to examine sociodemographic, affective, and motivational influences on drinking alone and drinking outside of the home. Independent variables in these models included sociodemographic variables (i.e. gender, age, marital status, SF-12v2® score), PANAS positive and negative affect (measured as within person deviation and between person differences) and drinking motives. Finally, we estimated two Poisson regression models to predict the number of drinks consumed in a given day. In the first model, sociodemographic variables and PANAS variables were included to examine the role of positive and negative affect on the number of drinks consumed. In the second model, social and coping drinking motives were added. Generalized estimating equations (GEE; PROC GENMOD; SAS Institute Inc, 2008; Zeger & Liang, 1986) with a first-order autoregressive (AR1) correlation structure were estimated for all regression models due to the nesting of days within persons. GEE is a regression method that addresses violations to the assumption of independence (i.e. correlated residuals because the data are longitudinal).
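The first-order autoregressive structure named above implies within-person error correlations that decay geometrically with the lag between days, corr(e_i, e_j) = ρ^|i−j|. A small sketch of the implied working correlation matrix (ρ = 0.5 is an arbitrary illustrative value, not an estimate from this study):

```python
# AR(1) working correlation: corr(e_i, e_j) = rho ** |i - j|,
# so errors from adjacent days are more correlated than distant ones.
rho = 0.5      # illustrative value; GEE estimates rho from the data
n_days = 4
ar1 = [[round(rho ** abs(i - j), 3) for j in range(n_days)]
       for i in range(n_days)]
# First row: [1.0, 0.5, 0.25, 0.125]
```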
In GEE, within person errors are allowed to correlate over time; model estimates and standard errors adjust for these correlations. The autoregressive correlation structure estimates correlated errors that are symmetrical, strongest for observations (i.e. days) closest to each other, and less correlated as they are farther apart (Hanley, Negassa, Edwardes, & Forrester, 2003). --- Results --- Sociodemographic characteristics In the final sample (n = 71), the average age of participants was over 80 years old and almost two-thirds were women (see Table 2). The vast majority of participants were White and college educated. The sample was primarily currently married or widowed. Mean participant SF-12v2® health scores were 46.87 (SD = 10.29), indicating the sample had somewhat poorer health than the general population of adults (t = 2.56, p = .013). Sample demographic characteristics were similar to those of independent living residents at the CCRC; however, the study sample was slightly younger (82 vs. 85 years), lived at the site for slightly shorter durations (5.3 vs. 5.6 years), and contained lower percentages of women (63% vs. 67%) and married persons (37% vs. 49%) than the CCRC population as a whole. Differences between the sample and the overall census of the CCRC were likely a result of two factors. We recruited individuals who were current drinkers, a distinct subpopulation that, in population studies of older adults, is more likely to be male and younger (Moore et al., 2009). Also, all participants resided within independent living, meaning they were healthier and likely younger than nursing home and assisted living participants. In this sense, the study is generalizable to current drinking older adults in CCRCs. --- Alcohol-related characteristics The average percentage of drinking days among participants was 57% (or about four days per week) and the average number of drinks per drinking day was 1.28 (SD = .68; see Table 3).
People drank when they were alone 43% of the time on average, with considerable variability (SD = 37.1%) among individuals. People drank most commonly in their apartments (68.71%) or somewhere else in the facility (26.36%). The lowest proportion of drinking occurred off-site (8.27%). Hazardous alcohol use based on AUDIT scores was uncommon (3%) in the sample (see Table 2). The CARET identified larger percentages of at-risk drinkers (see Table 2); 4% to 62% of participants endorsed specific at-risk drinking patterns, such as drinking above NIAAA (2010) drinking guidelines and drinking with comorbid conditions or medication that may interact with alcohol. Over 60% of participants endorsed a medication interaction at-risk drinking pattern, making it the most common type of at-risk drinking. Within the sample, social motives showed the highest mean values, whereas coping-related motives for drinking were weaker (see Table 3). --- Mood and depressive symptoms Table 3 shows the mean scores for positive and negative affect on the PANAS Short Form. Participants displayed higher mean levels of positive affect than negative affect. Based on the PHQ-9, 84% of the sample did not endorse any depressive symptoms, 8% endorsed minimal depressive symptoms, and only 6% endorsed mild symptoms or greater. Similarly, the MCS of the SF-12v2® indicated 9% at risk of depression, but MCS scores (M = 54.28) were above the norm of 50 for the general population. --- GEE models Results from the GEE models examining sociodemographic factors, affect, and drinking motives in predicting the context of drinking (drinking alone or in the community) are presented in Table 4. Current marital status and social drinking motives emerged as significant predictors of drinking alone. For married individuals, the odds of drinking by themselves were 85% lower (OR adj = .15; p < .001) than for those who were not married.
Social drinking motives were inversely associated with drinking alone (OR adj = .73; p = .001), with each one-point increase in social motives associated with a 27% decrease in the odds of drinking alone. All other predictors in the model were not associated with drinking alone. Only one factor predicted drinking outside one's home: within person negative affect, which was associated with a decreased likelihood of drinking outside of one's home in the CCRC or in the community (OR adj = .84; p = .039). On average, a one-point increase in deviation from one's mean negative affect on a given day was associated with a 16% decrease in the odds of drinking outside one's home. Results from the two models exploring the extent to which sociodemographic factors, positive and negative affect, and drinking motives influence the amount people drank on a given day are reported in Table 5. Adjusting for sociodemographic factors (Model 1), higher levels of between person positive affect were associated with less drinking (IRR [Incident Risk Ratio] = .90; p = .028). A one-point increase in one's average PANAS positive affect scale was associated with 10% lower drinking. When drinking motives were included in the model (Model 2), the role of positive affect persisted (IRR = .91; p = .002). Although there was some decrease in model fit overall based on quasi-likelihood information criterion indices (QIC and QICu; Pan, 2001), both coping and social drinking motives were significant predictors, with each one-point increase in coping drinking motives associated with a 17% increase (IRR = 1.17; p = .038) and each one-point increase in social motives associated with a 16% increase (IRR = 1.16; p = .003) in drinking. In addition, we found that lower physical disability (higher SF-12v2® PCS scores) was associated with greater drinking (IRR = 1.01; p = .035), with each 10-point increase in the SF-12v2® associated with a roughly 10% greater count of drinks on a given day.
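The percentage interpretations above follow from a simple transformation of the reported ratio estimates: for a k-unit increase, the percent change is (ratio^k − 1) × 100. A quick check against the figures reported in this section:

```python
# Percent change implied by a ratio estimate (OR or IRR):
# for a k-unit increase, percent change = (ratio ** k - 1) * 100.
def pct_change(ratio, units=1):
    return (ratio ** units - 1) * 100

alone = pct_change(0.73)       # social motives OR: about -27%
coping = pct_change(1.17)      # coping motives IRR: about +17%
sf12 = pct_change(1.01, 10)    # 10-point SF-12v2 increase
```

Compounding 1.01 over ten points gives about 10.5%, so the "roughly 10% greater count" reported for a 10-point SF-12v2® increase is a close approximation rather than an exact figure.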
We were interested in exploring the extent to which motivational effects of alcohol use varied based on the affective makeup of the individual. To see if coping motives affected consumption among those with higher negative affectivity, and social motives among those with higher positive affectivity, we added two interaction terms (not shown). Neither interaction term was significant, suggesting that the influence of drinking motives was relatively uniform across the levels of negative and positive affect. --- Discussion Individuals reporting higher levels of positive affect drank less than those with lower levels of positive affect, and although coping and social motives were associated with greater consumption, drinking to cope did not confer a specific risk in this sample. Within person variations in both positive and negative affect were not associated with differences in drinking behavior: negative or positive affect on a given day was not associated with increased drinking for that day. --- Independent living and drinking behavior Our findings suggest that AUDIT-defined hazardous drinking is rare in this sample, but alcohol use with symptom and disease comorbidities and/or use with contraindicated medications (based on the CARET) is more common than heavy drinking or problem drinking (identified by the AUDIT). Findings suggest potential program development opportunities for CCRCs to increase awareness and education around drinking with specific disease comorbidities or medication contraindications. Despite these identified comorbidities, and although nearly half of respondents in the study drank over the recommendations set by NIAAA (2010), few older adults endorsed specific alcohol-related problems.
These findings mirror national studies that have found low prevalence of problem or disordered drinking among adults over age 65 (Blazer & Wu, 2009b; Grant et al., 2004), but higher prevalence of at-risk drinking due to consumption levels above guidelines (Barnes et al., 2010; Blazer & Wu, 2009a). Among older adults, the relationship of elder specific drinking guidelines to later health consequences remains uncertain, with some studies demonstrating mortality effects (Moore et al., 2006) and other studies more equivocal about elder specific drinking limits (Lang, Guralnik, Wallace, & Melzer, 2007). Given the proportion of drinking alone, it would appear that context specific factors such as peer influence and alcohol availability may be less important in influencing drinking among older adults than among younger groups. Unlike in adolescent populations (Paschall, Grube, Thomas, Cannon, & Treffers, 2012), alcohol availability may not affect use, and older adults may not be influenced by peer behavior in the same way as younger populations are. It is possible that peer influences among older adults are a function of social networks more than context specific effects. Older adults who consume alcohol at higher levels may have different social networks (due to health, relative age, etc.) than those who drink less or who abstain completely. The peer effect in older adult drinking may also be different for heavy or problem drinkers than for low risk drinkers (Lemke, Brennan, Schutte, & Moos, 2007). Nonetheless, our qualitative research suggests that older adults themselves perceive that their drinking is influenced by their peers (Burruss, Sacco, & Smith, in press). Because much of the work in this area is retrospective and cross-sectional, further research is needed to unpack complex relationships between alcohol use, social networks, context specific factors, and drinking.
--- Drinking motives and older adults Interestingly, although a large proportion of drinking was done alone, this sample endorsed social motives at the highest levels and coping motives at much lower levels, in contrast to the notion that drinking is a means of coping among older adults. Our finding is consistent with other research on older adults that has identified much higher endorsement rates for social motives than for coping motives (Gilson et al., 2013). In our study, social motives were negatively associated with drinking alone, suggesting that stated drinking motivation is consistent with behavior. Both social and coping motives were associated with the amount consumed during a given day. One other published study has identified associations between coping motives and alcohol-related problems among older adults, as well as between social motives and quantity, frequency, and binge drinking (Gilson et al., 2013). Although relatively little is known about the motives for using substances among older adults, it has been proposed that older adults primarily use substances to cope with life changes or transitions, loss, depression, and loneliness, aspects thought of as unique to late life (Schonfeld & Dupree, 1991). This literature, however, has focused on older adults' substance problems, and more research is needed on motives for drinking among older adults with less severe (e.g. hazardous) or no drinking problems. --- Positive and negative affect and alcohol consumption Much research has focused on approaches to drinking that emphasize alcohol use as a form of tension reduction among older adults, including the stress and coping and self-medication models (Brennan, Schutte, & Moos, 1999; Bryant & Kim, 2013; Hunter & Gillen, 2006; Sacco, Bucholz, & Harrington, 2014). The underlying premise of this research is that negative affectivity in the form of stress or stressful events precipitates drinking (Holahan, Moos, Holahan, Cronkite, & Randall, 2001).
Much of the research in this area has tended to focus on either cross-sectional analysis or longer term longitudinal assessment of the role of stress and coping on use and consequences. We did not find temporal relationships between negative affect or positive affect and the amount consumed on a given drinking day. Residents with higher mean levels of positive affect drank less. There are a number of factors that may explain this finding. Older adults living in the context of a CCRC may experience less mood variability (Carstensen et al., 2011) than younger age groups. In the absence of marked affective variability, reactive drinking may be less common. The relationship between positive and negative affect is also more complex in older adults than it is among younger groups, in that positive and negative emotions may coexist more readily (Grühn, Lumley, Diehl, & Labouvie-Vief, 2013). Also, the study sample was made up of older drinkers who did not report alcohol-related problems. Drinking to cope with negative affect may be unique to younger populations of older adults or those with problem drinking histories (Cooper, Frone, Russell, & Mudar, 1995; Gilson et al., 2013). Future research could explore motivational models of drinking among older adults to discern whether daily positive and negative affect influences consumption among problem drinkers. This study suggests that negative affect does not impact consumption, but that those with more positive affect drink less. --- Limitations Although this study provides novel insights into drinking patterns specific to older adults, our findings should be interpreted in light of specific limitations. Our sample of older adults was recruited from a single CCRC, potentially limiting generalizability to the general population of older adults or to those with identified problem drinking. However, findings may be relevant to a population of current drinking older adults living in CCRCs and other congregate forms of living.
A larger number of more representative studies are needed to understand affective and motivational factors that can be generalized to the older adult population living in CCRCs. Because data collection occurred one time per day, we did not explore the impact of within day variation in affect. Similarly, older adults were asked about the previous day, rather than momentary affect. Recall bias under this protocol was likely less than with methods that require recall over a longer timeframe, but the potential for recall bias was still present. The intensive longitudinal approaches used in this study may induce reactivity among participants (Bolger & Laurenceau, 2013, p. 21), but the use of a daily response schedule may be associated with less reactivity due to habituation (Bolger, Davis, & Rafaeli, 2003). Drinking motives were treated as time invariant, but it is possible that motivations for drinking could vary over time in a given individual. --- Conclusions Alcohol use among older adults living in CCRC settings is largely motivated by the desire for socialization, although residents also report drinking to cope. Harmful drinking may be rare among older adults in these settings, but hazardous drinking based on comorbidities, concurrent medication use, and other aging-specific factors may be common. Future research on drinking among those in CCRCs should consider the extent to which hazardous use, broadly defined, leads to harmful outcomes. In considering the role of mood, affect, and drinking among older adults, future research should further explore the relationship between drinking and context, with an emphasis on identifying factors that are associated with unhealthy drinking. --- Data collection process (figure not reproduced). Table 1. Risk indicators in the CARET (table not reproduced). --- Acknowledgments --- a n = 324 for this analysis based on subsamples of drinking days only. b OR only presented for significant effects.
a n = 563 based on data from 66 participants over 8 days and 5 participants with data over 7 days.
Objectives-The purpose of this pilot study was to describe patterns of alcohol consumption among continuing care retirement community (CCRC) residents and to explore the role of drinking motives and affective states on drinking context and consumption. Method-We utilized a phone-based daily diary approach to survey older adults about their daily alcohol consumption, context of drinking (e.g. drinking alone), positive and negative affect, and their motives for drinking. Data were analyzed descriptively, and regression models were developed to examine associations between sociodemographic factors, affect, drinking context and motives, and alcohol consumption. Results-CCRC residents drank most frequently at home and were alone on almost half of drinking days on average, although the context of drinking varied considerably by participant. Problem alcohol use was rare, but hazardous use due to specific comorbidities, symptoms and medications, and the amount of alcohol consumed was common. Respondents endorsed higher social motives for drinking and lower coping motives. Social motives were associated with a decreased likelihood of drinking alone, and negative affect was associated with a decreased likelihood of drinking outside one's home. Coping and social motives were associated with greater consumption, and higher positive affect was associated with lower consumption.
INTRODUCTION Care is intrinsic to the human condition. Although the term is frequently used in discussions about comprehensiveness and humanization of health practices, its definition is still imprecise due to the complexity inherent to it. In general, care can be understood as the interaction between two or more people for the purpose of alleviating suffering and achieving wellness, mediated by knowledge focused on this end (1). It is often carried out through normative actions reduced to procedures, prescriptions and regulations, to the detriment of a type of care that values the life projects of the other (1)(2). The perspective of care as a daily construction in interactions that involve relationships of power makes the person the main focus (3). Such a perspective broadens the understanding of different ways of caring and the various factors that influence practices. Consequently, it helps diminish the barrier that separates professionals and researchers from users. Therefore, care is linked to social and cultural issues, and may differ from person to person in distinct contexts. Thus, it is pertinent to the present study, especially for its connection with homeless people. The number of homeless or street people is rising in Brazil and various countries, illustrating the extremes of inequality and social exclusion in the world (4). The context of the street is where numerous people seek to be welcomed, supported and sheltered, although they are constantly subjected to unhealthy conditions and human conglomerations, as well as deprivation of food and water, exposure to climatic variations and situations of violence (4)(5). In the street context, many get entangled with alcohol and other drugs, and are vulnerable to chronic, psychiatric and infectious diseases, such as skin conditions, lice infestations, tuberculosis and sexually transmitted diseases (6)(7). 
The specificities of life in the street, associated with a complexity of factors, render people susceptible to various social and health problems that challenge different professionals, such as nurses, nursing technicians, physicians, social workers, dentists and oral health technicians, psychologists, community health agents, occupational therapists and social agents from diverse sectors and services in society. According to data from the National Survey on homeless people (5), it is common for them to go to emergency hospitals when they are sick, as well as to seek ways to maintain hygiene habits and food intake. The data show that these people adopt healthcare measures aligned with the context in which they are inserted. These aspects lead to the concept of social representations (8), which understands them as a type of knowledge that produces and determines behaviors, and shows that something absent can be added and something present can be modified. Therefore, it is important to explore the meanings attributed to health care by this segment of the population, since social representations can be reflected in the practices and behaviors of the social group (8)(9), and thus have a close relationship with Nursing and health teams. The analysis of these representations makes it possible to rethink healthcare practices, as well as implement policies to promote the access of homeless people to health services, with a decrease in the various forms of prejudice, violence and vulnerability to which they are subject. This contextualization gave rise to the following question: How do homeless people represent health care practices? The representation of an object can reveal its multiple facets and make it possible to understand specificities of the individual and/or group in relation to the object represented.
In this sense, the structural approach of the Theory of Social Representations, focused on the cognitive processes of social representations, endeavors to study the influence of social factors on thought processes through the identification and characterization of relational structures (9). The purpose of this study was to identify and analyze the structure and content of the social representations of homeless people in relation to health care. --- METHOD This was a qualitative study, and the empirical data were produced from May to August 2016. Seventy-two homeless people, registered in two institutional shelters (Unidades de Acolhimento Institucional) located in the city of Salvador, Bahia, Brazil, participated in the study. The shelters were founded in 2014 and are part of the Municipal Network of the Unified Social Welfare System; they provide temporary shelter and the means for people 18 years of age or older to have a place to stay, social interaction and a reference address. The two shelters can accommodate 33 and 51 people, respectively. The group investigated was chosen according to the following criteria: be 18 years of age or older and appear to be able to interact with the researcher. For data production, the Free-Association Test, an instrument widely used in studies based on the Social Representations Theory, was used due to the possibility it provides for spontaneously capturing mental projections and implicit or latent content that may be hidden in discursive and reified content (9). The instrument was comprised of two sections. The first involved the identification data and health characterization of the participants. The second, the test itself, was composed of the prompting phrase "taking care of your health means", for which each participant was requested to state up to five words or short expressions that immediately came to mind.
Then the participants chose, from among the terms cited, the one considered the most important, justifying the choice. The test was applied individually, in a reserved room, and lasted 10 minutes on average. Two software programs for processing qualitative data were used: one to identify the combination of the frequencies of evoked words with the average order of evocation (10), and the other for building a word cloud (11). The word cloud, in turn, was used to confirm the centrality of the elements that made up the probable central core. The justifications for the terms considered most important were transcribed in their entirety and used as the basis for the four-quadrant chart, which facilitated comprehending the meanings assigned to the terms evoked. The study was assessed and approved by the Research Ethics Committee of the School of Nursing of the Universidade Federal da Bahia (EEUFBA), under Opinion No. 1.477.800/2016. The rules and guidelines for conducting studies involving human beings were respected, in compliance with Resolution No. 466/12 of the National Health Council. The confidentiality and anonymity of the homeless people were ensured by identifying each participant with the letter P followed by a number in order of occurrence, as were their privacy and freedom to participate or not in the study, and to withdraw at any time. --- RESULTS In the group that was studied (72 people), most were women (50), predominantly 21 to 31 years of age (34), in conjugal relationships (47), black in terms of race/color (60), belonged to a religion (50), reported not having completed elementary school and were informally employed (44). As for length of time being homeless, a period of less than five years was the most common (34). In relation to health conditions, most reported not having any comorbidities (40), but among those who did, the most prevalent ones were: hypertension (5), syphilis (3), HIV-seropositivity (4) and renal lithiasis (3).
Most said they use health services (71), primarily in hospitals (52). In terms of health needs, they said they went to health units for prevention and orientation (40), medical treatment (38), tests (28) and to get medication (27). Most reported having used some type of psychoactive substance in their lives (62), the most prevalent being alcohol (54), followed by marijuana (48). The analysis of the corpus revealed that, in response to the stimulus "taking care of your health means", the investigated group evoked 327 words, of which 47 were distinct. The minimum frequency was 5, and any terms with a lower frequency were excluded from the composition. The average frequency was 15 and the average order of evocation was 2.9. The necessary processing calculations were done through the software itself, based on Zipf's Law (12), which enabled expressing the content and structure of the social representation, as shown in Chart 1. Chart 1 - Four-quadrant chart in reference to the stimulus "taking care of your health means" - Salvador, BA, Brazil, 2017 (elements from the central core: frequency ≥ 15 and average order of evocations < 2.9; elements from the 1st periphery: frequency ≥ 15 and average order of evocations > 2.9; chart not fully reproduced). The upper left-hand quadrant, called the central core, contains the terms which obtained a higher frequency and lower average order of evocation. In the present study, the central core was comprised of the term "doctor", which had the highest frequency and was the most readily evoked. This is confirmed by the justifications of the participants for this term: The doctor will look at you, consult with you and see what you need (P 11). --- (...) The doctor knows more (P 32). He'll do all the tests, see whether your pressure is high or low and check how you're doing (P 45). The other terms that made up the central core were "taking care of yourself" and "eating", which express an intersubjective and functional dimension.
The justifications for these terms, when they were defined as the most important, underscored the fact that health care involves a personal commitment, prioritizing health and nutrition, as illustrated in the following excerpts: If we don't take care of our bodies, who will? First, you have to take care of yourself and have self-esteem to be healthy (P 22). Women need to take care of themselves and take preventive measures, always go to the doctor and take care of their intimate parts, right? Apply vaginal cream. In my first pregnancy, I didn't even know how to apply vaginal cream. My mother-in-law taught me (P 52). (www.ee.usp.br/reeusp Social representations of health care by homeless people Rev Esc Enferm USP • 2018;52:e03314) --- Try to go to the doctor, eat healthy food. No one can live without food (P 58). Drugs are not healthy, alcohol is not healthy. You have to eat healthy things that give your body energy (P 68). In the upper right-hand quadrant, also called the first periphery, are found the most important peripheral elements of the representation, since they had the highest frequencies, even though they were evoked later on (9); these include the terms 'taking preventive measures', 'hygiene' and 'happiness'. In the hierarchization process, 'taking preventive measures' was referred to 17 times in the justifications, demonstrating the importance of the maintenance of hygienic care for the group studied: In order not to harm yourself or others (P 5). Through hygiene, you do not contract bacteria, which avoids having to go to the doctor or taking medication (P 4). --- If you don't keep yourself clean, you're susceptible to catching a disease (P 14). The term 'happiness' introduces an affective and subjective dimension in relation to the object investigated.
Five of the participants indicated and justified this term as being the most relevant, as shown in the following narratives: Happiness is when the body and mind are in equilibrium, and health is also balanced (P 8). Happiness is the ultimate goal of life. Your achievements in life don't matter if you're not happy. They're not worth anything (P 9). --- When you're happy, it's much easier to work things out in life. It's much easier to solve things. I think that happiness comes first in everything (P 26). The lower right-hand quadrant, called the second periphery, contains the least frequent and most belatedly evoked elements, which are pertinent to the representational field due to their significant role in daily practices (9). This quadrant was composed of six terms: 'physical activity', 'test', 'treat', 'beauty', 'healthy' and 'body', as illustrated in the following segments: Checkups are important for avoiding diseases. What's important in life is health, peace and freedom (P 17). --- When I engage in physical activity, I feel very good (...) lighter, and my breathing improves (P 19). If we don't take care of our bodies, who will? First, you have to take care of yourself and have self-esteem to be healthy (P 10). --- If you don't treat the disease, how is it going to be cured? (P 24). You have to take preventive measures against disease, have greater joy (P 60). In the lower left-hand quadrant, technically referred to as the contrast zone, the elements are low in frequency but readily evoked by the participants (9). It is composed of seven terms: 'good', 'medication', 'important', 'life', 'sickness', 'responsibility' and 'love'. These aspects reinforce the elements arranged in the central core. Ten participants assigned greater importance to these terms: When you have self-esteem, you generally take care of your health (P 12). --- If you don't get treated, you aren't living. Without health, you won't live for a long time (P 16).
Because love is what drives everything; I am love, an insurmountable love; I believe in this love, immeasurable, free of charge, in simplicity (P 71). In the word cloud (Figure 1), which randomly groups and organizes the terms taking frequency into account, it can be seen that the word 'doctor' appeared the most in the corpus (36), followed by the terms 'taking preventive measures' (31) and 'taking care of yourself' (30), as well as the terms 'hygiene' (27) and 'eating' (22). --- DISCUSSION The set of words prompted by the stimulus "taking care of your health means" and its distribution in the four-quadrant chart reveals that the group's social representation was anchored in habits and actions disseminated over the years by the biomedical model and permeated with specificities inherent to the context in which these individuals are inserted. The conception that health care has evolved from strictly healing and individualized techniques to comprehensive and collective practices (13) appears to be reflected in the terms contained in the central and peripheral system of the four-quadrant chart. In general, the terms embody technical aspects related to the treatment and cure of diseases, with intersubjective and attitudinal elements that reveal the involvement of the person in the healthcare process. According to the principles in the structural approach of the Social Representations Theory (9), the words arranged in the upper left-hand quadrant characterize the possible central core of the representation, since they were evoked more readily and due to their high frequency. It is worth noting that the central core is the most stable part of the social representation, with fewer possibilities of change. In this study, the elements that composed it were: 'doctor', 'taking care of yourself' and 'eating'.
Although the participants referred to a concept of care still rooted in the biomedical model, in which physicians play a central role, they indicated co-responsibility based on self-care, i.e., care which depends on the person. As expected, the holding of power and knowledge to prescribe tests and drugs for treating and curing diseases was attributed to physicians. This conception may explain the fact that the 'doctor' element had a higher frequency and was the one most readily evoked by the group. The other terms from the central core, 'taking care of yourself' and 'eating', as well as the others that made up the four-quadrant chart, confirm the idea of the person as the main care focus and its daily construction (3). In the peripheral system, there are various terms related to the group's understanding of an expanded definition of the concept of health and, consequently, of health care. The first periphery -upper right-hand quadrant -consists of elements which, due to their importance, often reinforce the central elements (9). In this study, the terms 'taking preventive measures', 'hygiene' and 'happiness' implicate the person in the healthcare process and have attitudinal dimensions related to practices or actions that permeate caring for one's own health. In the group investigated, prevention is a daily task that is not only limited to measures against catching diseases, but also encompasses protection against situations of violence (14) and injuries that can damage physical and mental health. Such situations are linked to the reality of the context in which they are inserted (14)(15). As expected, the main synonyms of hygiene are 'healthy' and 'fragrance'. The conditions of homeless people, associated with dirt and poor hygiene, are factors that prevent and/or hinder access to health services and increase social exclusion.
In Brazil, the hygienic mentality was propagated from the second half of the 1940s until the mid-1960s, to address commercial issues of the industrial era. The production and commercialization of new and varied products related to health and hygiene were widely covered by the press, disseminating a new modern and healthy way to live (16). This production and commercialization of new products and their dissemination in the media is still the case today, and imposes upon society a concept of hygiene associated with aromatic products, while at the same time condemning the natural smells of the human body. The presence of the term 'happiness' refers to a subjective dimension of the social representation of the investigated group in relation to health care. Although many intellectuals have devoted themselves to the study of happiness, there is still no consensus as to what this feeling is. Feelings of happiness are unique to each person and may be individually or collectively associated with physical, social, affective or other factors. Satisfaction with one's health is an extremely important feeling for increasing the likelihood a person will say they are happy (17). This satisfaction does not depend on social relationships, but on people's feelings toward themselves: their health may be affected by a sickness, yet they are nevertheless happy. It is possible to have a life that is not based on prescriptive happiness, i.e., a life whose objects of desire break away from historically and socially established criteria, for example: graduation, success at work, marriage, family (17)(18). From this perspective, wanderers, migrants, homeless people and individuals living in diverse cultural contexts will not view themselves as disadvantaged within the broader narrative on happiness (18).
As strange as it might seem, being homeless may also be a way of feeling happy, distant from socially formatted contexts that can trigger a cycle of disease. The street often becomes a place of refuge and liberation, and a space for establishing new relationships. The elements that make up the second periphery -lower right-hand quadrant -formed by words evoked less readily and less frequently, had lower significance or importance for the group examined (9). In this study, 'physical activity', 'test', 'treat', 'beauty', 'healthy' and 'body' also had characteristics with a positive connotation for the object represented, in an attitudinal and image-related dimension. These terms are complementary and indicate that taking care of one's health helps ensure better quality of life, guided by the biomedical model, in an intimate association with the term 'doctor', found in the central core. This association reveals a more pragmatic and procedural need for this care (12). The terms 'physical activity', 'beauty' and 'body' complement each other and disclose an image-related dimension of the social representation of the object investigated. These terms also denote positive aspects in relation to health care and the involvement of the individual. Physical activity was inherent to the daily lives of the participants, in their attempts to find ways to maintain hygiene habits and obtain food and an adequate place to sleep and rest. However, the appearance of this term in the second periphery may be rooted in the idea that has been propagated that engaging in physical activities helps maintain a healthy life, beauty and the body, and prevents health problems (19). It is clear that, for the social group, health is related to the body and beauty, regardless of the location where they are. It is worth noting that beauty is relative, despite standards socially disseminated by the media.
In any case, the concern about beauty and one's body involves actions focused on the complex process of health care, revealing the implication of self-care. The set of words that make up the contrast zone -lower left-hand quadrant -contains elements that obtained a low frequency and average order of evocation, but were considered important for the group that was investigated (9). These words, 'good', 'medication', 'important', 'life', 'sickness', 'responsibility' and 'love', also point to an image-related, intersubjective and functional dimension of the representation. Some terms make reference to the biomedical model, but promote the individual's involvement in health care and love as an element of this care. A predominance of terms with a positive connotation was also noted (good, important, life, responsibility, love). The term 'important' refers to the symbolic value associated with the object and indicates involvement in the development of control strategies and responsibility for one's personal health. The 'love' element deals with an affective dimension and is associated with the term 'happiness', placed in the second periphery, reinforcing the expanded concept of health. This term denotes that care involves the participants' need for self-esteem, in order to take better care of themselves and others. The social representation also fulfills a function in relation to familiarity with the group, and the affective dimension is presented on the basis of this transit, supported by individual and collective memory and by daily experiences and situations (20). According to the principles of the structural approach of the theory, the peripheral system is linked to daily reality, encompasses elements of transition and is responsible for updating the central core (9). This dynamicity promotes transformation of the social reality and helps modify behaviors, conducts and actions related to their health as homeless people.
With respect to the word cloud, the terms most emphatically expressed are represented by the expressions 'doctor', 'taking preventive measures', 'taking care of yourself', 'hygiene' and 'eating', where two of the terms belong to the first periphery of the four-quadrant chart. This, therefore, illustrates the centrality of the terms in the central core and reinforces how the rules of medical knowledge are important for the group that was studied, which, at the same time, shares and appropriates knowledge based on expanded health care, drawing on notions of the prevention of complications and of health promotion, as well as individual practices for taking care of health. --- CONCLUSION In studying the social representations of a group of homeless people in relation to health care, the centrality of cultural elements in regard to health and specificities in the daily lives of the investigated group were noted, which warrant consideration in professional care practices. The predominance of the term 'doctor' in the central core reflects the idea of health care linked to diagnosis, treatment and, at times, cure of a certain disease and, at the same time, reveals one of the problems faced by homeless people, which is access to health services. The set of evoked words represents health care as a daily construction, rooted in actions for meeting basic human needs established by the context of the street. Co-responsibility for health care is inherent to this context. The terms evoked reveal aspects of the image-related, cultural and biological dimensions of health care. The data produced cannot be generalized due to the limitation of the group studied and the dynamicity of social representations. Its originality and unprecedented nature permit reflection on the formulation of professional practices aligned with the needs and realities of homeless people, in addition to indicating the need for further studies on the topic.
In this sense, it is believed that the data can be used in initiatives to train health professionals, especially nurses, in order to reduce conflicts in the care provided and decrease health complications in the homeless population. --- RESUMO Objetivo: Identificar e analisar a estrutura e o conteúdo das representações sociais de pessoas em situação de rua sobre cuidados em saúde. Método: Pesquisa qualitativa, fundamentada na abordagem estrutural da Teoria das Representações Sociais, com pessoas em situação de rua, vinculadas a duas unidades de acolhimento institucional. Para a produção dos dados, foi utilizado o teste de associação livre de palavras, cujos dados foram processados por dois softwares e analisados à luz da referida teoria. Resultados: Participaram da pesquisa 72 pessoas. O conjunto de evocações do quadro de quatro casas remete a ações individuais, sociais e culturais. Os termos médico, cuidar de si e alimentação compuseram o núcleo central da representação, sinalizando dimensões imagética e funcional do objeto investigado. A nuvem de palavras confirmou a centralidade dos termos. Conclusão: O grupo investigado representa o cuidado em saúde como uma ação dinâmica, vinculado à pessoa e ao contexto e ancorado em elementos da concepção higienista. --- DESCRITORES Pessoas em Situação de Rua; Assistência à Saúde; Autocuidado; Enfermagem em Saúde Pública; Enfermagem de Atenção Primária. --- RESUMEN Objetivo: Identificar y analizar la estructura y el contenido de las representaciones sociales de personas en situación de calle acerca de los cuidados sanitarios. Método: Investigación cualitativa, fundamentada en el abordaje estructural de la Teoría de las Representaciones Sociales, con personas en situación de calle, vinculadas a dos unidades de acogimiento institucional.
Para la producción de los datos, se utilizó la prueba de asociación libre de palabras, cuyos datos fueron procesados por dos softwares y analizados a la luz de la mencionada teoría. Resultados: Participaron en la investigación 72 personas. El conjunto de evocaciones del cuadro de cuatro casas remite a acciones individuales, sociales y culturales. Los términos "médico", "cuidar de sí" y "alimentación" compusieron el núcleo central de la representación, señalando la dimensión de imágenes y la funcional del objeto investigado. La nube de palabras confirmó la centralidad de los términos. Conclusión: El grupo investigado representa el cuidado sanitario como una acción dinámica, vinculado con la persona y el contexto y anclado en elementos de la concepción higienista. --- DESCRIPTORES Personas sin Hogar; Prestación de Atención de Salud; Autocuidado; Enfermería de Salud Pública; Enfermería de Atención Primaria. --- Erratum - Social representations of health care by homeless people In the article "Social representations of health care by homeless people", DOI: http://dx.doi.org/10.1590/s1980-220x2017023703314, published by the journal "Revista da Escola de Enfermagem da USP", Volume 52, 2018, elocation e03314, on page 1: Where was written: Dejeane de Oliveira Silva 1 1 Universidade Federal da Bahia, Salvador, BA, Brazil. Now read: Dejeane de Oliveira Silva 1,2 1 Universidade Federal da Bahia, Salvador, BA, Brazil. 2 Universidade Estadual de Santa Cruz, Ilhéus, BA, Brazil.
Objective: Identify and analyze the structure and content of the social representations of homeless people in relation to health care. Method: Qualitative study, based on the structural approach of the Theory of Social Representations, conducted with homeless people linked to two institutional shelters. To produce the data, the free word-association test was used. The resulting data were processed by two software programs and analyzed according to this theory. Results: Seventy-two people participated in the study. The set of evocations from the four-quadrant chart refers to individual, social and cultural actions. The terms 'doctor', 'taking care of yourself' and 'eating' composed the central core of the representation, indicating image-related and functional dimensions of the object investigated. The word cloud confirmed the centrality of the terms. Conclusion: The investigated group represents health care as a dynamic action, linked to the person and context, and anchored in elements of the hygienist conception.
Introduction Conspiracy theories are nothing new in human history. Scholarly research on conspiracy theories began in the 1930s (Butter and Knight 2018), and it has been a field that is highly multidisciplinary and diverse (Mahl, Schäfer, and Zeng 2022). Various researchers have proposed definitions for conspiracy theories. Keeley (1999) defines conspiracy theory as "a proposed explanation of some historical event (or events) in terms of the significant causal agency of a relatively small group of persons-the conspirators-acting in secret." A more general definition of conspiracy theory is provided by Wood, Douglas, and Sutton (2012) as "a proposed plot by powerful people or organizations working together in secret to accomplish some (usually sinister) goal." Conspiracy theories are sometimes considered a form of misinformation. Misinformation is commonly defined as "false or inaccurate information that... spread regardless of an intention to deceive" (Tomlein et al. 2021). This suggests that any malicious intent of the content creator is not a necessary condition of misinformation, but incorrectness of information is. Thus, there is a stark difference between conspiracy theories and misinformation; the intent of powerful people (or organizations) is crucial to the definition of a conspiracy. This difference underscores the need for in-depth research on conspiracy theories that is distinct from research on misinformation. Belief in conspiracy theories often correlates with anomia, a lack of interpersonal trust, and political beliefs at the extreme ends of the political spectrum (especially the right-hand extreme) (Goertzel 1994;Sutton and Douglas 2020).
Conspiracy theories, in contrast to non-conspiracy views, tend to be more attractive as they satisfy one's epistemic (e.g., the desire for understanding, accuracy, and subjective certainty), existential (e.g., the desire for control and security), and social desires (e.g., the desire to maintain a positive image of the self or group) (Douglas, Sutton, and Cichocka 2017). This results in undesirable outcomes like decreased institutional trust and social engagement, political disengagement, prejudice, environmental inaction, and an increased tendency towards everyday crime (Pummerer et al. 2022;Douglas, Sutton, and Cichocka 2017;Jolley et al. 2019). Additionally, conspiracy theories can form a worldview in which believers of one type of conspiracy tend to approve of other conspiracies as well (Wood, Douglas, and Sutton 2012;Dagnall et al. 2015). Polls have also shown that "everyone believes in at least one or a few conspiracy theories (Uscinski 2020)". Hence, a holistic understanding of conspiracy theories cannot be achieved by studying any single type of conspiracy in isolation. In today's context, conspiracy theories are widely propagated on social media (Enders et al. 2021;Mahl, Zeng, and Schäfer 2021). Conspiracy narratives are nourished by information cascades on social media and reach a larger audience (Monaci 2021). Consequently, these false narratives tend to outperform real news in terms of popularity and audience engagement within online environments (Coninck et al. 2021;Vosoughi, Roy, and Aral 2018). Enders et al. (2021) show that usage of 4chan/8kun has the highest correlation with the number of conspiracy beliefs, followed by Reddit, Twitter, and YouTube. Among the social media services, YouTube is one of the most influential sources of news and entertainment (Center 2012).
It has 2,562 million (roughly 2.56 billion) monthly active users, and it is the second most popular social network worldwide as of January 2022 (Statista 2022), contributing to a billion hours of video viewed daily (Goodrow 2017). Audit studies show that video recommendations on YouTube can lead to the formation of filter bubbles on misinformation topics (Hussein, Juneja, and Mitra 2020). Similarly, exposure to conspiracy videos might result in undesirable outcomes. For example, the belief that the 5G cellular network caused COVID-19 has resulted in more than 200 reports of attacks against telecom workers in the United Kingdom (Vincent 2020). The belief in white genocide conspiracies resulted in the death of 51 individuals in New Zealand (Commission et al. 2020). The belief in conspiracy theories is no doubt an issue of concern. However, most existing research focuses only on specific types of conspiracy theories, and not all datasets are available to research communities. In this work, we build YOUNICON, a curated dataset of YouTube videos from channels identified as producing conspiracy content by Recfluence (Ledwich and Zaitsev 2020). We aim to help researchers study the patterns of production and consumption of conspiracy videos, such as how individuals interact with those videos at an aggregated (video) or individual (comment) level.1 YOUNICON comprises the following information:
• Metadata of all 596,967 videos from 1,912 channels identified by Recfluence as producing conspiracy content (Ledwich and Zaitsev 2020)
• A list of 3,161 videos manually labeled as being about conspiracy or not
• 37,199,252 comment IDs of comments on all videos, with basic metadata and scores from the Perspective API1
• 100 videos manually labeled for the type of conspiracy
YOUNICON will be a valuable resource for studying YouTube as a medium of conspiracy theory production and consumption.
The contributions of this paper are as follows:
• Curate a large-scale dataset of videos with conspiracy content (https://doi.org/10.5281/zenodo.7466262)
• Perform exploratory analyses on the dataset to understand its key properties
• Discuss potential uses for the dataset
--- Related Work and Datasets --- Conspiracy Detection Table 1 highlights several existing datasets that have been used for conspiracy theory detection research. Existing literature often focuses on misinformation (Lin et al. 2019;Kumar et al. 2020) or specific conspiracy theories related to COVID-19, alien visitation, anti-vaccination, white genocide, climate change, or Jeffrey Epstein (Moffitt, King, and Carley 2021;Marcellino et al. 2021;Phillips, Ng, and Carley 2022). Most works focus mainly on Tweets as the unit of study (Moffitt, King, and Carley 2021;Galende et al. 2022;Phillips, Ng, and Carley 2022;Mahl, Zeng, and Schäfer 2021). For example, Galende et al. (2022) study Tweets explicitly containing the word "conspiracy." Phillips, Ng, and Carley (2022)
--- Conspiracy Taxonomy Mahl, Zeng, and Schäfer (2021) used network analysis of co-occurring hashtags in Tweets to assign hashtags into topic groups qualitatively based on their thematic relationship. This resulted in the 10 most visible conspiracies, which include Agenda 21, Anti-Vaccination, Chemtrails, Climate Change Denial, Directed Energy Weapons, Flat Earth, Illuminati, Pizzagate, Reptilians, and 9/11 Conspiracies. While co-occurring patterns of hashtags reveal a partial taxonomy of conspiracy, a more comprehensive one is found on Wikipedia. On Wikipedia, a list of conspiracy theories is constantly being updated (Wikipedia contributors 2022). Upon closer inspection of the list of conspiracy topics from Wikipedia, we found that it covers the conspiracies in Mahl, Zeng, and Schäfer (2021) well (see Table 3 for details). Hence, we will use the taxonomy of Wikipedia for YOUNICON.
--- Data Collection On YouTube, interactions between content creators and consumers occur as follows: a content creator posts a video with a title, description, and tags. A content consumer views, likes, or comments on a video. A "view" represents a playback of a video, a "like" is positive feedback on the video by users, and a "comment" is the way in which online collective debates grow around the video (Bessi et al. 2016;YouTube 2022). A comment can be a reply to a video (a top-level comment) or a reply to other comments. --- YouTube Channels about Conspiracy Ledwich and Zaitsev (2020) curated a list of US-based political channels in the Recfluence project. They classify each channel based on its political leaning, channel type (e.g., mainstream news, AltRight, etc.), and topical category (e.g., conspiracy, libertarian, organized religion, LGBT, etc.). We downloaded the entire list of YouTube channels from Recfluence on 25 February 2022 and extracted only channels with the "conspiracy" label. We then used the YouTube Data API2 to collect the basic information about these channels. Out of the 2,365 channels with the "conspiracy" label, 1,912 channels were accessible by the YouTube API. The rest of the channels were deleted from YouTube and thus excluded from the following analysis. While Recfluence provides a quite extensive list of US-based political channels, the resulting list could be improved with more channels. However, all the pipelines used in this work will still be valid. --- Video Metadata In contrast to Recfluence (Ledwich and Zaitsev 2020), which provides channel-level conspiracy information, YOUNICON focuses on video-level conspiracy. For the extracted conspiracy-related channels from Recfluence, we collect the metadata of every video published on those channels. The metadata includes the title, description, tags, number of likes, number of views, duration, and published date. We collect these metadata for 1,049,413 videos in total. To get a better sense of the content presented in the videos, we also collect transcripts or subtitles of the videos using a PyPI package, youtube-transcript-api.3 We only consider those videos with English transcripts, which are 761,565 videos in total. We further filter out non-English videos by detecting the language of the videos based on their titles, which often summarize the gist of the video. In particular, we use the Fasttext language identification model (Joulin et al. 2016), which can recognize 176 languages, with a threshold of 0.5 to determine the language with the highest probability for a video. Between the two Fasttext language identification models, we used the larger and more accurate one (i.e., lid.176.bin). As a rule of thumb, channels with less than 80 percent of their videos in English are excluded from the rest of the analysis. For all textual metadata, we apply common preprocessing techniques (e.g., removing emojis, URLs, punctuation, and numbers, and converting text to lowercase). Then, we filter out videos that do not have all the metadata. This results in a collection of 596,967 videos with all metadata, namely title, description, tag, and transcript. Additionally, we collect top-level comments as a part of the video's features. We filter out comment authors if 1) less than 80% of their comments are detected as English, or 2) they leave only one comment. We also eliminate the top-level comments written by the video creator to focus on the behavior of the viewers. As a result, we obtain 37,199,252 comments.
3 https://github.com/jdepoix/youtube-transcript-api
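The preprocessing and English-ratio channel filter described above can be sketched as follows. This is a minimal illustration, not the authors' code: the exact regular expressions are our assumptions, and the per-video English flag stands in for a fastText lid.176.bin prediction.

```python
import re
from collections import defaultdict

URL_RE = re.compile(r"https?://\S+|www\.\S+")
# Rough emoji/symbol ranges; the paper does not specify its exact emoji filter.
EMOJI_RE = re.compile("[\U0001F300-\U0001FAFF\u2600-\u27BF]")
PUNCT_NUM_RE = re.compile(r"[^\w\s]|\d")

def clean_text(text: str) -> str:
    """Strip URLs, emojis, punctuation, and numbers, then lowercase."""
    text = URL_RE.sub(" ", text)
    text = EMOJI_RE.sub(" ", text)
    text = PUNCT_NUM_RE.sub(" ", text)
    return re.sub(r"\s+", " ", text).lower().strip()

def filter_channels(videos, min_english_ratio=0.8):
    """Keep channels where at least 80% of videos are detected as English.
    `videos` is an iterable of (channel_id, is_english) pairs."""
    counts = defaultdict(lambda: [0, 0])  # channel -> [english, total]
    for channel, is_en in videos:
        counts[channel][0] += int(is_en)
        counts[channel][1] += 1
    return {c for c, (en, total) in counts.items() if en / total >= min_english_ratio}
```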
For these comments, we use the Perspective API to perform scoring for toxicity, identity attack, and threat.4 --- Dataset Construction Figure 1 is a flowchart that summarizes the dataset construction proposed in this paper. The following sections explain the proposed method in detail. Table 2 summarizes the variables available in YOUNICON. --- Video Labeling The manual annotation procedure is carried out in accordance with Institutional Review Board (IRB) guidelines. We use Amazon Mechanical Turk (AMT) for data labeling. Our labeling task, known as a Human Intelligence Task (HIT) in AMT, asks an AMT worker whether a given video contains conspiracy or not. The title, description, tags, and the first 1,000 characters of the transcript of each video are given to AMT workers. We select workers located in the US, with a past HIT approval rate of greater than 98% and 5,000 HITs approved, and compensate them at a rate of 0.05 USD per HIT. For each video, we recruit three workers and determine a label based on the majority vote. In contrast to misinformation, where there is a clear-cut answer, determining whether a video contains conspiracy can be more challenging, as an individual's political or religious beliefs might affect their decision about conspiracy videos. Thus, we follow a majority voting scheme for each video's label. Labeling is conducted in two stages. In the first stage, we sample 2,200 videos. After labeling, we find that this dataset is somewhat imbalanced; only around 20% (436 out of 2,184 videos) contain conspiracy theories. Although 20% may seem like a relatively large proportion of videos with conspiracy theories, we note that all these videos are from channels categorized as 'conspiracy' in Recfluence (Ledwich and Zaitsev 2020). To make YOUNICON a better-balanced dataset of conspiracy and non-conspiracy videos, we use machine learning models to get pseudo-labels first.
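The majority-vote aggregation of the three worker judgments can be sketched in a few lines; this is a minimal illustration, and the label strings are our naming assumption.

```python
from collections import Counter

def majority_label(worker_labels):
    """Aggregate the three AMT judgments for one video into a single
    label by majority vote. With three raters and two labels, no tie
    is possible."""
    return Counter(worker_labels).most_common(1)[0][0]

def label_videos(judgments):
    """Apply the vote to a {video_id: [label, label, label]} mapping."""
    return {vid: majority_label(labels) for vid, labels in judgments.items()}
```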
We finetune the RoBERTa-large model using the sampled videos. We split the data into train, validation, and test sets and use the concatenated texts as features for the model. This model attained an accuracy of 0.74, with a positive F1 of 0.5273 and a negative F1 of 0.8207. We used this trained model to assign 'conspiracy' or 'non-conspiracy' pseudo-labels to all the videos in the full dataset. Then, we sample 1,000 videos with 'conspiracy' pseudo-labels and manually label them in the same manner. After these two rounds of labeling, we obtain a dataset of 3,161 videos, of which 1,144 (36.2%) are conspiracy videos. Fleiss' kappa, an extension of Cohen's kappa, is used to measure inter-rater reliability (Fleiss 1971). A score of 0.4111 is calculated, implying moderate agreement between raters in the dataset (Landis and Koch 1977). --- Topic Classification Going beyond whether a video is about conspiracy or not, we also assign a conspiracy topic to each video based on the conspiracy taxonomy compiled on Wikipedia (Wikipedia contributors 2022). In doing so, we first parse the text of the "List of conspiracy theories" page on Wikipedia (Wikipedia contributors 2022). This Wikipedia page contains summaries of the popular conspiracy theories, which include Aviation; Business and Industry; Deaths and Disappearances; Economics and Society; Espionage; Ethnicity, Race and Religion; Extraterrestrials and UFOs; Government, Politics and Conflict; Medicine; Science and Technology; Outer Space; and Sports (Table 3). We exclude the category of "Sports" because our dataset, based on Recfluence (Ledwich and Zaitsev 2020), is unlikely to contain Sports-related conspiracies. The topic "Fandom, celebrity relationships, and shipping", added on 18 May 2022 (after our Wikipedia data collection), is also not included in this analysis. The topic classification consists of two stages: 1) keyword extraction and 2) topic inference.
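Fleiss' kappa can be computed directly from per-video category counts. Below is a small self-contained sketch (not the authors' code) for the setting above, where every item is rated by the same number of raters (three here):

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa (Fleiss 1971). `ratings[i][j]` is the number of raters
    who assigned category j to item i."""
    N = len(ratings)                         # number of items
    n = sum(ratings[0])                      # raters per item
    k = len(ratings[0])                      # number of categories
    # Per-category proportion of all assignments.
    p_j = [sum(row[j] for row in ratings) / (N * n) for j in range(k)]
    # Per-item agreement, then its mean.
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings]
    P_bar = sum(P_i) / N
    # Expected agreement by chance.
    P_e = sum(p * p for p in p_j)
    return (P_bar - P_e) / (1 - P_e)
```

Perfect agreement on every item yields a kappa of 1, while values between 0.41 and 0.60 fall in the "moderate agreement" band of Landis and Koch (1977).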
For keyword extraction, we identify representative words of each topic using log-odds ratios with informative Dirichlet priors (Monroe, Colaresi, and Quinn 2008), a widely used technique for large-scale comparative text analysis (An et al. 2021;Kwak, An, and Ahn 2020). It estimates the log-odds ratio of each word between two corpora i and j given the prior frequencies obtained from a background corpus. We rank the words based on their log-odds scores and obtain a list of representative words for each of the conspiracy theories. The background corpus used in this analysis is the "google 1-gram" (Michel et al. 2011), extended with the counts of the vocabulary used in the "List of conspiracy theories" Wikipedia page. For each conspiracy topic, we compare the corpus of one topic against the concatenated corpus of all other topics. For each topic, we use the top ten keywords as the preliminary keywords for topic inference (see Table 3 for the list of words extracted). We also add the subtopic names listed on Wikipedia to the keywords of each topic. We then convert 21 keywords to bigrams or trigrams to be more distinguishable (e.g., the keywords new, world, and order should be considered as a trigram, not three unigrams) and remove 62 keywords related to countries or locations (Malaysia, Wuhan) or those that are generic (January, human). Having the representative keywords for topics at hand, we infer the topic of the video by simply using a keyword-matching method. We match the keywords in each topic to the video's features by using spaCy's PhraseMatcher. We assign a topic by choosing the one with the highest frequency of matched keywords. --- Exploratory Data Analysis To provide a brief overview of the dataset, we conduct an exploratory analysis. We first compare the difference in engagement between videos with conspiracy theories and those without conspiracy theories.
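A compact sketch of the log-odds ratio with informative Dirichlet priors (Monroe, Colaresi, and Quinn 2008) as used above. The z-scoring via the estimated variance follows the paper's standard formulation; the small floor for words missing from the background corpus is our assumption.

```python
import math
from collections import Counter

def log_odds_dirichlet(corpus_i, corpus_j, background):
    """Z-scored log-odds ratio of each word between corpora i and j,
    with the background corpus supplying the Dirichlet prior counts."""
    n_i, n_j = sum(corpus_i.values()), sum(corpus_j.values())
    a0 = sum(background.values())            # total prior mass
    scores = {}
    for w in set(corpus_i) | set(corpus_j):
        a_w = background.get(w, 0.01)        # floor for unseen words (assumption)
        y_i, y_j = corpus_i.get(w, 0), corpus_j.get(w, 0)
        delta = (math.log((y_i + a_w) / (n_i + a0 - y_i - a_w))
                 - math.log((y_j + a_w) / (n_j + a0 - y_j - a_w)))
        variance = 1.0 / (y_i + a_w) + 1.0 / (y_j + a_w)
        scores[w] = delta / math.sqrt(variance)
    return scores
```

Ranking `scores` in descending order gives the candidate keywords for topic i against the concatenated corpus of all other topics.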
--- Results --- Conspiracy Detection We use the annotated data of 3,161 videos to build a classifier that detects whether a video is about conspiracy or not. Since our data is slightly unbalanced (1,144 videos are conspiracy), we perform under-sampling to balance the classes for the training. For testing, we use the holdout test set sampled from the initial (first-round) 2,200 annotated videos. As features, we use all of a video's textual meta-information, including title, tags, description, and transcript. Since the deep learning models can take 512 tokens at maximum (Liu et al. 2019), we truncate the video description and transcript, using the first 200 tokens. The feature input, called combined, is created by concatenating the first 200 tokens or words of both the video description and transcript, followed by the title and tags. To compare the performance of the models beyond simple accuracy, recall, or precision, we use the F1-score: F1 = (2 × Precision × Recall) / (Precision + Recall), which is calculated for both the positive and negative classes. To account for class imbalance, F1 weighted, which is the F1 score weighted by the support, is also used. Table 5 summarizes the prediction results of various models. The Dummy Classifier predicts all videos as negative (or non-conspiracy), yielding an accuracy of 0.8, which is the same as the proportion of non-conspiracy videos in the test set. Traditional machine learning models, including Naive Bayes, Logistic Regression, and Support Vector Machine with Linear Kernel (SVM), are also tested. All three models, Naive Bayes, Logistic Regression, and SVM, slightly outperform the Dummy classifier, obtaining a weighted F1 of 0.7141, 0.7930, and 0.7863, respectively. We also explore pre-trained language models, such as RoBERTa-large. The training set is further split into an 80-20 train-validation split for finetuning of the pre-trained model.
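The per-class and support-weighted F1 scores reported in Table 5 follow the standard definitions; a minimal sketch:

```python
def f1_scores(y_true, y_pred):
    """Per-class F1 plus support-weighted F1 (standard definitions)."""
    scores, support = {}, {}
    for cls in set(y_true):
        tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
        fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
        fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        # F1 = 2 * precision * recall / (precision + recall)
        scores[cls] = (2 * precision * recall / (precision + recall)
                       if precision + recall else 0.0)
        support[cls] = sum(t == cls for t in y_true)
    weighted = sum(scores[c] * support[c] for c in scores) / len(y_true)
    return scores, weighted
```

This mirrors scikit-learn's `f1_score` with `average=None` and `average='weighted'`, respectively.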
A learning rate of 1e-5 is used with a batch size of 4 and random seed 13 for finetuning. Our results in Table 5 show that the pre-trained models achieve better performance in all metrics but recall, obtaining an accuracy of 0.8575 and weighted F1 of 0.8624. In Table 5, we also show the prediction results based on individual features. By comparing the weighted F1 of models built on each feature, we observe that tags perform best among the individual features, followed by video description, titles, and transcript. --- Zero Shot and Few Shot Classification We further conduct experiments to examine whether it is possible to detect conspiracy theories via zero- and few-shot learning. Zero- and few-shot learning are techniques that aim to make predictions for new classes with limited labeled data. We test pre-trained Natural Language Inference (NLI) (Bowman et al. 2015; Williams, Nangia, and Bowman 2018) and Natural Language Generation (NLG) (Lewis et al. 2020; Zhang et al. 2022) models in zero- and few-shot settings. We use all the features of videos as the input for those models. For the NLG models, we test both auto-regressive generation and sequence-to-sequence models. However, we find that the generated results of zero-shot and most few-shot models simply repeat the given text, from which we cannot infer the classification labels. The models could generate clear classification indicators (i.e., yes or no in our setting) only in the few-shot settings with 128 fine-tuning data instances. However, they predict all inputs as non-conspiracy. As for the NLI models, we apply the top three most popular fine-tuned zero-shot inference models from the Hugging Face website. Since NLI is not a binary classification task, we discard the neutral score and renormalize the entailment and contradiction scores to produce the final binary output.
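A minimal sketch of this neutral-dropping step, assuming the NLI head emits (entailment, neutral, contradiction) logits in that order (the exact output format varies by model):

```python
import math

def binary_from_nli(logits):
    """Map (entailment, neutral, contradiction) logits to a binary
    conspiracy / non-conspiracy decision by dropping the neutral
    class and renormalizing over the remaining two."""
    entail, _, contra = logits  # neutral logit is ignored
    e, c = math.exp(entail), math.exp(contra)
    p_entail = e / (e + c)  # two-way softmax
    label = "conspiracy" if p_entail >= 0.5 else "non-conspiracy"
    return label, p_entail
```

With the hypothesis "That is a conspiracy.", an entailment-dominant prediction maps to the conspiracy label and a contradiction-dominant one to non-conspiracy.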
To help the NLI models better understand the objective of detecting conspiracy from short texts, we concatenate the input text with the hypothesis statement (i.e., "That is a conspiracy."). The model then indicates whether the hypothesis statement entails or contradicts the given text. A contradiction answer means that the model predicts the given text as non-conspiracy. The zero-shot test results are in Table 6. The best positive F1-score of 0.57 still does not outperform our proposed conspiracy detection method. Yet, these results demonstrate the potential of NLI models for the conspiracy detection task. --- Topic Classification We perform topic inference to understand the type of conspiracy theory in a video published on YouTube. For the ground-truth dataset, we randomly sample 100 videos, which the first author labels based on Table 3. In Figure 2, we investigate how sensitive our topic inference method is. The method has two parameters: dominance and the minimum number of words matched. Dominance (Zumpe and Michael 1986) is a metric commonly used to study the diversity of a community. A higher dominance score indicates that a higher percentage of the matched words belong to one topic (i.e., if dominance is 1, all matched words are in one topic). Hence, a higher dominance threshold ensures higher matching accuracy, but fewer videos are likely to be matched. The minimum number of words matched also affects retrieval and accuracy. Figure 2 shows this relationship. For example, when a topic match requires at least one matched word, 76 videos are matched with a topic, but the matching accuracy is 0.789. If we increase the threshold to at least 10 words and require a dominance score greater than 0.6, only 14 videos are matched with a topic, but all matches are correct.
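Our reading of the dominance-gated topic assignment can be sketched as follows; the function name and example counts are illustrative, and the thresholds default to the values we adopt below:

```python
def assign_topic(match_counts, min_words=2, min_dominance=0.5):
    """Assign the topic with the most keyword matches, but only when
    enough words matched and that topic dominates the matches."""
    total = sum(match_counts.values())
    if total == 0:
        return None
    topic, hits = max(match_counts.items(), key=lambda kv: kv[1])
    dominance = hits / total  # share of matches in the top topic
    if hits >= min_words and dominance > min_dominance:
        return topic
    return None
```

Raising either threshold trades retrieval (fewer videos matched) for precision, as Figure 2 illustrates.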
When at least two words are matched and the dominance threshold is greater than 0.5, the accuracy is 0.842, and 57 videos are matched with a topic. We explore the topics covered by conspiracy videos using the method outlined above. Using the parameters of at least two words matched and a dominance threshold of greater than 0.5, we apply the matching to all the videos with conspiracy theories in our dataset to understand the distribution of the conspiracy topics. Out of the 1,144 conspiracy videos in the dataset, 770 videos have been matched with a topic. Figure 3 shows the distribution of the detected topics. Topics are relatively well distributed, and the top four topics are "Ethnicity, Race, and Religion," "Government, Politics, and Conflict," "Science and Technology," and "Extraterrestrials and UFOs." --- Discussion In this paper, we propose a new dataset, YOUNICON, for the detection of conspiracy theories on YouTube over various topics. While conspiracy theories have been studied for decades across different disciplines, a large-scale dataset of videos on popular social media services will accelerate research on the production and consumption of conspiracy theories on online platforms. YOUNICON offers a plethora of opportunities to study the subject of conspiracy theories from text data. First, we hope that the automatic detection of conspiracy theories can be deeply explored by the machine learning community and potentially result in real-world tools to assist and facilitate the work of fact-checkers (e.g., flagging not only conspiracy-theory videos but also the exact time a conspiracy theory appears within a video). Our study takes a first step in this direction by exploring standard classification techniques, providing a first assessment of the potential of automated detection of conspiracy theories, and a baseline for future comparisons.
Second, we hope researchers can use the dataset to study the dynamics of conspiracy theories on systems like YouTube. As this dataset contains all videos that are available in the channel's lifetime (as long as it is not removed from the platform), we are able to study how these content creators have evolved their production strategies over time. For example, do channels focus on a particular type of conspiracy over time or do they adopt a more generalist approach and produce a variety of content? Are there relationships between engagement and topics of conspiracy? The dataset has the potential to answer such questions. Similarly, for the content consumers (or the video audience), the comments included in the dataset can act as a window for analyzing their consumption of conspiracy theories. In other words, researchers can potentially trace a conspiracy pathway and look at how people become involved in the echo chambers of conspiracy theories. Future work can include looking for better ways to perform topic classification. While Wikipedia's list of conspiracy theories is used here, this classification can serve as a starting point for a better taxonomy to be developed. Given the advances in large language models (LLMs), it would be worth exploring the prompting approach with recent LLMs or the in-context learning approach with prompt tuning for the conspiracy detection task. --- FAIR Consideration The proposed dataset follows the FAIR principles of Findability, Accessibility, Interoperability, and Reusability. The dataset can be found and accessed through Zenodo at the DOI: https://doi.org/10.5281/zenodo.7466262. Keywords for the topics of conspiracy theories are also shared as a CSV file for use by other researchers in works related to conspiracy theories. Hence, the data satisfies reusability and interoperability.
--- Ethical Consideration We designed our data collection carefully from the outset. We collect only publicly available data on YouTube with the use of YouTube's Data API. Also, our approach is approved by the Institutional Review Board of Singapore Management University (IRB-22-129-A071(922)). To safeguard the interests of our labelers on Amazon Mechanical Turk, they are informed that the conspiracy-theory content is not true and that withdrawal from the study carries no penalty. Helplines are also provided to participants in the event of any negative emotions.
Conspiracy theories are widely propagated on social media. Among various social media services, YouTube is one of the most influential sources of news and entertainment. This paper seeks to develop a dataset, YOUNICON, to enable researchers to perform conspiracy theory detection as well as classification of videos with conspiracy theories into different topics. YOUNICON is a dataset with a large collection of videos from suspicious channels that were identified to contain conspiracy theories in a previous study (Ledwich and Zaitsev 2020). Overall, YOUNICON will enable researchers to study trends in conspiracy theories and understand how individuals can interact with the conspiracy theory producing community or channel.
Background Women in prison have poor self-reported health and high levels of social disadvantage, experience of trauma and mental health problems [1][2][3][4]. In Australia, women are usually in prison for less than 6 months, re-incarceration is common and the majority report problematic substance misuse [3][4][5][6]. Approximately 8% of people in prison in Australia are women and the imprisonment rate for women is increasing, currently standing at 33 prisoners per 100,000 female adult population [7]. Aboriginal and Torres Strait Islander women are overrepresented in prison, related to historical and systemic disadvantage, and these women are even more likely to experience serial incarcerations with short sentences or on remand [5]. Poor access to health care is common for women in contact with the criminal justice system. Substance misuse and struggles related to accommodation, socioeconomic disadvantage and family needs can mean health is neglected in the community [8,9]. In a national survey of people in prison in Australia in 2015 [10], 48% of women reported they did not access the health care they needed when they were in the community. Additionally, 15% said they did not access needed care in prison. The main reasons men and women gave for not accessing care in either setting related to choosing not to seek care and lacking the motivation to do so. Additional barriers in the community were reported to be cost, substance misuse and competing priorities, while in prison, waiting times and health care not being available when needed were the other major barriers. Although prison is often a time of compromised wellbeing due to the deprivation and loss of choice and control inherent to incarceration [11], it can also be a window of opportunity to improve health through access to overdue health care [12,13]. Furthermore, the importance of managing health well across the interface of prison and community is clear.
Leaving prison is a time of vulnerability, associated with high morbidity and mortality [14][15][16]. Health problems at release decrease the likelihood of successful community re-entry [15]. However, the ideal of post-release continuity of care can be disrupted by complex health and social support needs, relapse to substance misuse, poor health information transfer and difficulty in establishing connections with community healthcare providers [17][18][19]. In this study, we examined the ways in which women in contact with the prison system experience access to health care, particularly those with histories of problematic substance misuse. We focused on women who were exiting prison and aiming to re-establish their lives in the community, and explored their experiences of both prison and community health systems. Through understanding their experiences of healthcare access, healthcare providers and health services may be better enabled to provide equitable care for this marginalised group. --- Theoretical framework We used the conceptual framework of candidacy as described by Dixon-Woods and colleagues [20] to examine the women's healthcare access. The framework was first developed to examine equity of access to the United Kingdom National Health Service, thus providing a useful lens on how access is determined and enabled for people in disadvantaged situations. It emphasizes that healthcare access is contingent and subject to constant negotiation. Candidacy has been applied to healthcare access in diverse situations including people with intellectual disability [21], mental health problems [22], multiple sclerosis [23], young people seeking sexual health care [24], women who were sex workers needing primary care [25] and children with asthma [26]. It has not yet been applied to people in contact with the criminal justice system and people with histories of substance misuse in the research literature. 
As explained by the candidacy framework, potential service users identify a health need and seek care (labelled 'identifying' and 'appearing'). After care has been requested, providers are seen as 'adjudicating' the claims, deciding whether and in what way care will be delivered. Providers' judgements can be based on how deserving potential service users are and how well they will do if given treatment, which can disadvantage those in more deprived circumstances [20]. Limited resources, such as in prisons and hospitals, may increase adjudications of ineligibility by raising thresholds for what is thought to be a legitimate need. The candidacy framework also considers the 'navigation' and 'permeability' of services. To navigate services, potential users must be aware of them and have adequate resources such as transport and time. Permeability refers to the ease with which people can use services, including through feeling comfortable and having the capabilities to access the service. For example, services which align with user cultural values are more permeable and services with complex or rigid referral and appointment systems are less permeable. --- Methods Given the ethical and practical challenges of recruiting people in prison as research participants, we report our methods in detail according to the Standards of Reporting Qualitative Research guidelines [27]. The principal researcher (PA), who undertook all interviews, was employed as a part-time general practitioner (GP) in the prison health service and also worked as a GP in the community. --- Setting This study took place in 3 women-only correctional centres in New South Wales (NSW) Australia. Health care for women in NSW prisons is delivered predominantly in state-owned correctional centres through a Board-governed network under the NSW Ministry of Health [28]. Health care is primarily delivered by general and specialist nurses [6,29]. 
Women see GPs and other medical practitioners after being triaged by nurses to waiting lists. This differs from the community model, where GPs provide most primary health care and are directly accessible under universal health insurance. --- Sampling and data collection We invited women who were within 6 weeks of release to participate in two interviews, firstly in prison and then 1-6 months after release. Women were eligible if they had been in prison at least 1 month, could be interviewed in English without an interpreter and if they had not received health care from PA beyond treatment for minor self-limited problems. We identified potential participants through self-response to flyers, custodial lists and nursing and correctional staff knowledge of pending release dates. Women were invited by staff to meet with the researcher. Initially all eligible participants who responded to flyers were recruited. To ensure maximum variation, nursing staff subsequently identified participants who varied in age, ethnicity, custodial history, health status, healthcare utilisation and engagement in transitional support programs [30]. PA undertook the consent process with all participants, emphasising the voluntary and confidential nature of the research and that decisions to participate would have no effect on their health care or relationships with healthcare providers. Interviews in prison were conducted in prison health clinics or general visitor areas under general surveillance of correctional officers outside the interview rooms. Post-release interviews in the community were by telephone. Participants received a payment of $10 AUD into their in-prison account consistent with usual research practice in NSW prisons, or a $50 AUD supermarket voucher, if in the community. Interviews were semi-structured and questions explored needs, expectations and experiences of health care with participant-led content encouraged.
Focused questions were added to explore themes identified in the emerging analysis [31]. Interviews were audiotaped and transcribed verbatim. --- Data analysis Given that many participants spoke of experiences in multiple incarcerations, we analyzed women's pre and post-release interviews together as continuing narratives of their experiences of health care. We used inductive thematic analysis informed by constructivist grounded theory [31]. The constructivist approach was considered appropriate for this research as it encourages recognition of, and ongoing reflection on, how researcher perspectives, position and privilege influence the analysis. PA undertook open coding on all transcripts concurrently with data collection. WH and JD independently coded a third of selected information-rich transcripts to enhance rigor and JD also provided interpretations arising from her Aboriginal cultural expertise. Focused coding and analysis proceeded with repeated reference back to the data, memo-writing, checking of the emerging analysis in new interviews with participants and research team discussions. We further reviewed the findings using the theoretical lens of candidacy to generate additional insights on healthcare access. --- Results We interviewed 40 women prior to release and 29 of these women in a second interview. Their characteristics are described in Table 1. The majority of women had problematic substance misuse (35/40). The average duration of pre-release interviews was 28 min and second interviews, 22 min. The location of interviews and reasons for not participating in a second interview are shown in Fig. 1. Seven women returned to prison within 6 months of release. One woman died of an overdose. Due to commonality of experiences across prison and in the community, findings from both settings are presented together.
Women's experiences pertained largely to primary health care delivered by prison-based nurses and doctors and by community GPs, but also to hospital-based providers including Emergency Departments. The major themes related to the opportunity to access health care in prison and the constraints in that environment; being seen as legitimate seekers of care; the experience and fear of being blocked from care; and the services and personal capabilities which promoted access to care. These are explored below with illustrative quotes. --- Prison as a health care opportunity Despite the many disadvantages of being in prison, women also believed it to be an opportunity to seek overdue care for preventive health and neglected health problems. Although good health was seen as desirable in the community, it could be difficult to achieve. Increased focus on health in prison was possible because of decreased substance misuse, mental health treatment, time on their hands, fewer competing priorities and a desire to make positive life changes. When you come in here, is when you really are straight and you really want to know if you've got anything... your head becomes clearer and then you do think about your health as you're getting older. (Participant 4). Some women moved in and out of prison so frequently that they saw prison health services as their main provider. The only time I -I literally see doctors and that is in gaol... I'm not out long enough to get that appointment. (Participant 30). For some, health care in prison was better aligned with their needs than care they had experienced in the community due to prison clinicians' understanding of addiction and its comorbidities. Women believed community GPs lacked interest and skills in substance misuse management and therefore women were more likely to disclose and seek care for this in prison.
Hepatitis C treatment in prison was often mentioned as a healthcare opportunity and one which could create personal meaning out of being in prison. I wanted to take something positive out of this experience, 'cause it's been an ordeal -... to address whatever I could to make the most of this time rather than to have it dead time. (Participant 17). --- Constraints in prison care -'the waiting game' However, prison could also be experienced as a missed opportunity. The key systemic constraint was long and unpredictable waits for care. Several women referred to this as 'the waiting game'. Preventive health care delivered by nurses was effective and valued, but if women required access to a GP, secondary care or specialized investigations, waiting times could be substantial. Some women saw the waits as acceptable because care was ultimately delivered, particularly during longer sentences. Other women strongly felt waiting put them at risk of health complications, and waiting could be interpreted as a judgement that their problems weren't important, or as withholding of care. Their frustration was magnified by wanting to have care completed while in prison, as they believed they would not follow up in the community. Women with shorter sentences reported deflection of health care requests because investigations or specialist care could not realistically be achieved before release. Some women did not seek care while in prison because of previous experiences of waiting. (Fig. 1 Participant outcomes: in community n=20; in corrective service transitional housing n=2; in drug rehabilitation centre n=1; died post release n=1.)... have been so much easier than out there. Like, my life's full-on out there. (Participant 8). Another constraint was the limited range of care compared to the community. Usual medications, alternative therapies, dietary preferences and preferred healthcare options were not always available. You've got more options out there.
You've got counsellors, um, you've got groups that you can go to. (Participant 32). --- Legitimacy and stigma in prison and the community Women perceived they were frequently judged not to have health problems worthy of receiving care and were denied health care both in prison and in the community. They described this as a battle to be seen as legitimate patients and experienced this as personal rejection, linked to the dual stigmas of substance misuse and imprisonment. The drug user could be having a leg hanging off and [the community GP thinks] 'Oh well. She just got released from gaol. She's - she looks like a user, so couldn't harm her to wait another 10 min, 5 minutes, whatever. I'll just see this family'. (Participant 30). Being refused care at GP practices in the community could be experienced as a profound and traumatizing rejection. This could occur because of past behaviours leading to permanent barring from practices, or when GPs suspected prescription drug misuse. Some women believed that their requests for mental health care were misinterpreted by community GPs as drug seeking due to stigma and lack of GP skills. Waiting room signs aimed at deterring prescription drug misuse could reinforce perceptions of lower status and women reported a heightened sensitivity to the inclusion of past medical opinions in their health records. [Community GPs] treat you like, you know, you're nobody really... It has to be something in my file that someone's put in there that, straightaway, discriminating against me. (Participant 13). Participants who did not have a history of substance misuse perceived prison healthcare providers to be accustomed to managing women with addictions, and the system to be set up accordingly, such that they also experienced lack of credibility in their claims to care. While their access to community providers was satisfactory, in prison they felt a need to differentiate themselves from other prisoners with substance misuse histories.
At times this appeared to relate to their own negative attitudes to addiction. They reported that women with substance misuse problems took excessive healthcare provider attention, with providers disbelieving their own, more legitimate claims to care. The ones that are not druggies, they're the ones that really need help. (Participant 39). Some women felt that healthcare providers both in and outside the prison didn't believe them when they discussed their medical histories, and particularly their reported medications, requiring 'proof' before instituting treatment. They considered this to be emblematic of their ongoing struggle to be seen as 'legit'. One participant expected community GPs to be suspicious of any information she gave them, even official paper-based test results which needed follow up. Maybe they'll think [the test result] it's not legit or something... They would think it was fake... because it's got to do with prisons and criminals. (Participant 7). With such experiences over time, some women chose not to seek care in prison or the community because they assumed providers would not be receptive, or the care they would receive would be substandard. In the community, women could choose not to disclose their incarceration to avoid differential treatment. The doctors outside don't know that you've been to gaol. You don't have to tell them anything, you know what I mean. So there's no real stigma when you're out. (Participant 11). Conversely, access was facilitated by having a health condition which was prioritized by healthcare providers, such as HIV or schizophrenia. When seeking healthcare access these otherwise stigmatising conditions could reinforce women's status as legitimate patients both in prison and the community, increasing their ability to access services and receive continuity of care. 
Some health services were considered inclusive of people with histories of substance misuse or incarceration, such as sexual health services and services which catered for marginalised members of the community. In Aboriginal Medical Services, women reported there was usually no stigma related to their status as ex-prisoners, however substance misuse could still be a source of stigma. [I go to] Aboriginal medical centres 'cause not many discriminate I don't think. I don't know. Well there's some do I reckon and some don't really. When you say you're a drug user and they blurt "huh," you know what I mean? (Participant 33). Despite anger at not being seen as legitimate when they believed care was needed, some women also acknowledged the complexity of prescription drug misuse, the danger this posed to them, and the prescriber's role in accurately judging the legitimacy of requests for medications. You get the doctor to write it for you anyway, which is not the doctor's fault. It's the person's fault for lying. (Participant 4). --- Being let down and blocked from care Women related experiences of feeling uncared for and let down by providers in prison and in the community. Women commonly reported not being called up to the prison clinic or contacted by community providers despite their attempts to seek care, interpreting this as withholding of care and a judgement they were not important. I want to be treated like a normal patient, you know, that wants to get something done... It's just gaol, it makes you feel like a number, you know. But, um, yeah, I guess, when you get out, you just, yeah, no-one reallyno-one cares for when you get out. (Participant 7). Differential treatment was seen to have serious implications. Women feared the possibility of being blocked from care despite a serious health problem, fearing misdiagnosis, uncontrolled pain or life threatening illness. 
This was seen as a risk both when in prison, for accessing hospital emergency departments whilst a prisoner, and when accessing GPs and hospitals in the community. I said, "Oh, no I don't use drugs anymore," but what [the community GP] wrote was reflecting on me as a drug-user, and I was treated differently. Yeah. Especially when I went to hospital for my gallstones, one time, they wouldn't medicate me because they thought I was a morphine seeker... I wouldn't even know how to seek morphine. (Participant 8). --- Capabilities, self-efficacy and supporting access Capabilities for accessing both prison and community-based health systems related to family support, self-efficacy, assertiveness and knowledge of and compliance with the rules of different systems. Those who did not successfully meet formal requirements, for example by carrying their medical benefits cards or attending appointments, were likely to conflict with providers. Some women described being vocal and determined in seeking care, changing providers when necessary until they received the care they needed. Women who lacked confidence in their ability to manage their health often invoked their previous lack of success. Mental health problems, addiction, social isolation and poor life experiences and circumstances decreased their sense of self-efficacy. Self-efficacy was reported to be increased by existing personality traits and resilience, personal growth and overcoming addiction. If I can't look after myself, who's going to look after me?... I've always known how to get help. (Participant 2). Some believed the passive role they assumed in prison decreased their confidence in accessing care after release. Others reported increased self-confidence at release related to overcoming pre-incarceration health problems or to positive healthcare experiences while in prison. Healthcare providers could be important in supporting women's self-confidence.
I've addressed more issues since coming to gaol than I ever did... I've taken a good look at all that has affected me in my life so it's been quite... Transitional programs, care coordinators or mentors were seen to be effective facilitators to care on leaving prison. They were valued for practical and emotional support particularly for women who had little family support. Linkages with community healthcare providers were also enhanced by transitional case managers who also acted as advocates and communication brokers. If you're unsure, and if you're not very good at speaking or whatever, like, to go to the doctors or communicating or anywhere that you need to go, [the care coordinators], you know, they'll help you with that. (Participant 22). --- Discussion Women in our study experienced significant barriers to healthcare access both in prison and in the community, particularly related to their histories of substance misuse. Many sensed that they were not perceived to be legitimate patients with legitimate healthcare needs, which created a fear of being blocked from care when it was urgently needed. --- Candidacy for health care The candidacy framework can be used to uncover vulnerabilities in access [20]. In our study of women in contact with the criminal justice system, concepts related to making claims to care (identifying and appearing) and judging of eligibility by providers (adjudication) were illuminating. --- Claims to care Dixon-Woods and colleagues note that marginalised groups may be more likely to identify themselves as candidates for care through a series of crises rather than planned health care, resulting in high uptake of emergency care compared to preventive care [20]. This accords with findings from a large survey of Australian prisoners in 2009, who reported high uptake of hospital emergency department care in the community [3,4].
The increased help-seeking behaviour seen in prison [10] has been suggested as linked to increased distress caused by incarceration [32]. However, in our study, the main motivator for seeking care was greater self-identification of candidacy due to decreased substance misuse, fewer competing priorities and a desire for positive life change. Women wanted to address overdue healthcare needs. Prison was seen as a healthcare opportunity, albeit one that could be missed due to system constraints. In our research, prison health services were seen to perform well in providing preventive health care but were less able to deliver complete investigation or management of more complex health needs within the confines of a prison sentence. In prison, care is delivered within a correctional system which is ill-designed for healthcare delivery. There are time-limited windows of access within a regulated daily schedule, and a transient prison population serving sentences which may be short or include frequent movements between prisons [6]. After women identified a healthcare need and appeared to the prison health service, the rest of the prison sentence could be spent waiting for the health management plans made in those consultations to be implemented. Waiting had a negative effect on relationships with prison healthcare providers and could be interpreted as providers withholding care or judging women's claims as unimportant. --- Relationships with healthcare providers Women described a struggle to establish their legitimate access to care both in prison and in the community because of negative provider adjudications. Prescription drug misuse affects therapeutic relationships both in prison and the community. Prison doctors perceive one of their key tasks to be judging patient credibility [32], and the challenge of being considered a legitimate patient in prison has been described [33,34].
In the community, stigma is compounded by healthcare provider discomfort and lack of skills in managing ex-prisoners or substance misuse problems [18,35], which the women in our study readily identified. Mental illness is a known source of stigma within primary care which can hinder help seeking [36]. In our study, the stigma of mental illness was not seen to impede healthcare access. Rather, women perceived their mental health care was suboptimal because they were not taken seriously by providers who suspected exaggeration related to their addictions. Provider adjudication had a profound emotional meaning for many women in this study, imbued with expectations and experiences of rejection and withholding of care. In other studies of access using candidacy theory, service users could feel devalued by negative interactions with providers [23] and frustrated by delays in diagnoses [26] or ineligibility for programs [21]. However, the fear of being denied future care for serious illness illustrates the heightened significance of provider adjudications to women with substance misuse and in contact with the criminal justice system. Overcoming stigma may require women to be articulate and persistent both in and out of prison, consistent with the candidacy concept that negotiation between providers and users is a key factor in accessing care. The power imbalance between providers and patients can make negotiations challenging for patients in many healthcare situations, but even more so for prisoners, who have controls and limits on their choices in prison. Although prisons may aim to release more empowered individuals with control over their lives, agency may decrease in prison, a loss which can persist after release as part of the institutionalization fostered by serial incarcerations [37].
--- Experiencing 'medical homelessness' A key aim of primary care is to reduce health inequalities by providing coordinated whole-person care, also an underlying principle behind the recent emergence of patient-centered medical homes [38]. However, in the same way that women's lives are destabilized by lack of accommodation on leaving prison [39], our research also shows that they are destabilized by a lack of access to trusted and reliable medical care. Furthermore, women can be caught in an ongoing state of waiting and exclusion during cycles of prison and community-based health care, leading to a persistent state of transition and 'medical homelessness'. Their medical homelessness is characterized by ineffectual attempts to access care, transient relationships with healthcare providers, disrupted medical management and a profound sense of exclusion from health care. Health system constraints, provider judgements that their claims to care are not legitimate and experiences of poor provider skills in managing addiction and its comorbidities contribute to a sense that they have no place in either prison or community-based health care. Experiences of rejection contributed to an ongoing state of inadequate care by engendering avoidance and helplessness in our participants. At a practical level, women in contact with the prison system are a transient population. Women may frequently move between prison and community on multiple short sentences, a particular problem for Aboriginal and Torres Strait Islander women. Custodial decisions may lead to them being placed in different prisons or in unfamiliar community locations on release. Developing trusting therapeutic relationships with providers when displaced from familiar settings is difficult, and even more so if the basis for trust is eroded by providers who assume drug seeking, regardless of the presenting health problem.
Although control in the prison environment led to some women being more able to seek care, their custodial situation also created barriers which meant women could leave prison feeling their needs were not met. Women on remand are not eligible for all prison-based health programs, and not all services available in the community are accessible in prison. If health care is not completed prior to release, initial efforts may be wasted by failures of continuity due to disconnected systems of care [9,19] or by choices not to disclose incarceration after release [18]. Women who have been in contact with the criminal justice system have often had poor life experiences including trauma, abuse and violence. Our participants' sense of personal rejection and of falling between the cracks of health care is likely to be based both on experienced events and on psychological vulnerability related to life trauma and experiences of being let down throughout their lives. Their deep and often lifelong disadvantage is perpetuated in the personal and structural barriers they face in accessing health care both in prison and the community. --- Overcoming barriers to care Skilled and empathic healthcare providers assist in overcoming barriers to care. Women in contact with the prison system value community GP acknowledgement of, and assistance with, the broad issues that have an impact on their wellbeing, as well as skilled management of substance misuse and a non-judgemental patient-centred approach [18]. Exposure of students and trainees to people in prison or with substance misuse problems may decrease stigma and promote more effective health care for these people [40,41]. This should include training in trauma-informed health care so that healthcare providers are aware of the psychological dynamics that may impact on the development of therapeutic relationships with people in contact with the custodial system [42].
This may assist providers to avoid re-traumatizing vulnerable patients, for example through words and actions which reinforce the sense of withholding care. Family and other advocates can greatly assist access to care [21]. However, women leaving prison often lack social connectedness and support in the community [43]. Access may be facilitated by prison and community providers working together prior to women leaving prison to plan for care following release [44]. Care navigation through re-entry programs can provide instrumental and relational support to promote health care access [9,30,45]. Given the risk of medical homelessness, our study reinforces the importance of resourcing transitional programs to assist women to link with skilled, non-judgmental community care on release. --- Limitations The participants in this study had high reported health problems and needs particularly related to substance misuse. Although our participants also reported mental and physical health problems, their primary focus when reporting barriers to healthcare access revolved around current or past histories of substance misuse. Our findings are likely to be more transferable to other people who struggle with substance misuse, both inside and outside prisons. Although our participants were reflecting on their experiences as women within the Australian prison and community health system, the applicability of candidacy concepts suggests wider relevance for marginalised groups, particularly those caught in a pattern of serial incarcerations or of substance misuse. The roles of the primary researcher and interviewer as a visiting GP within the prison health service and as a community GP were made known to the participants. Although this may have enhanced the research through shared understandings of complex health systems, it may also have inhibited participants from expressing their views completely, and led to lack of identification of findings which may be novel to an outsider. 
However, the fact that the women freely shared their experiences of suboptimal care suggests that they did not feel constrained by a fear of further impacting on their access to care. The inclusion of researchers who are not involved in delivery of prison health services and a cultural adviser assisted in ensuring the analysis was comprehensive and inclusive of multiple perspectives and interpretations. --- Conclusion Women in contact with the criminal justice system, and particularly those with histories of substance misuse, can face difficulties in accessing health care both in prison and in the community. For those women who cycle in and out of prison, healthcare access can be conceived as an ongoing state of 'medical homelessness'. Their experiences of poor community provider skills in managing addiction, and provider judgements, both in prison and in the community, that their claims to care are not credible, may contribute to a persistent state of waiting and exclusion during cycles of prison and community-based care. Consideration of the vulnerabilities and points of exclusion for women caught in this cycle will assist in determining how to ensure healthcare access for this marginalised population. --- Availability of data and materials Data from this project will not be shared. Consent from participants was not sought to share the data more widely than for the purposes of this study. --- Authors' contributions PA led study conceptualization and design, data acquisition and analysis and drafting the manuscript. JD contributed to data analysis and cultural mentorship. PM contributed to study conceptualization, data analysis, and manuscript revision. WH contributed to study conceptualization and design, data analysis and manuscript development. All authors read and approved the final manuscript.
--- Ethics approval and consent to participate Approval was obtained from the ethics committees of Justice Health & Forensic Mental Health Network (G31-13), University of Western Sydney (H10322), Corrective Services NSW (13/259026) and the Aboriginal Health and Medical Research Council of NSW (910-13). Each participant gave written, informed consent to take part in an interview and for the interview transcript to be used in this research. --- Consent for publication Not applicable. --- Competing interests PA is a visiting general practitioner and member of the Board of Justice Health & Forensic Mental Health Network. --- Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Background: Women in contact with the prison system have high health needs. Short periods in prison and serial incarcerations are common. Examination of their experiences of health care both in prison and in the community may assist in better supporting their wellbeing and, ultimately, decrease their risk of returning to prison. Methods: We interviewed women in prisons in Sydney, Australia, using pre-release and post-release interviews. We undertook thematic analysis of the combined interviews, considering them as continuing narratives of their healthcare experiences. We further reviewed the findings using the theoretical lens of candidacy to generate additional insights on healthcare access. Results: Sixty-nine interviews were conducted with 40 women pre-release and 29 of these post-release. Most had histories of substance misuse. Women saw prison as an opportunity to address neglected health problems, but long waiting lists impeded healthcare delivery. Both in prison and in the community, the dual stigmas of substance misuse and being a prisoner could lead to provider judgements that their claims to care were not legitimate. They feared they would be blocked from care even if seriously ill. Family support, self-efficacy, assertiveness, overcoming substance misuse, compliance with health system rules and transitional care programs increased their personal capacity to access health care. Conclusions: For women in transition between prison and community, healthcare access could be experienced as 'medical homelessness' in which women felt caught in a perpetual state of waiting and exclusion during cycles of prison- and community-based care. Their healthcare experiences were characterized by ineffectual attempts to access care, transient relationships with healthcare providers, disrupted medical management and a fear that stigma would prevent candidacy to health care even in the event of serious illness.
Consideration of the vulnerabilities and likely points of exclusion for women in contact with the criminal justice system will assist in increasing healthcare access for this marginalised population.
Introduction The improvement of women's sexual and reproductive health and rights remains important in the fight to reduce child and maternal mortality. Low- and middle-income countries (LMIC) often have high maternal mortality ratios (MMR) due to childbirth-related complications. Estimates show that the Sub-Saharan Africa region has a soaring MMR of 542 per 100000 live births [1]. Three countries made the largest contribution to the MMR in the region: South Sudan (1150 maternal deaths per 100000 live births), Chad (1140 maternal deaths per 100000 live births) and Sierra Leone (1120 maternal deaths per 100000 live births) [1]. These high rates are partially due to various challenges which tend to intensify them, such as lack of access to and provision of healthcare services, lack of or inadequate use of family planning services, malnutrition, and other issues [2][3][4][5]. Many countries have made significant strides toward meeting their Millennium Development Goals (MDG) targets, especially when it comes to the use of maternal health services [6]. However, the progress made at the national level tends to hide the inequalities that still exist at lower levels of geography (i.e., districts and chiefdoms). The level of maternal health service use differs between socioeconomic groups within a country. It remains to be seen whether the Sustainable Development Goals (SDGs) will build on the MDGs in increasing the use of maternal health services while reducing inequalities between socioeconomic groups. Sierra Leone has had challenges trying to reduce the MMR and improve maternal health service provision. After years of struggling with high maternal mortality levels and poor uptake and provision of maternal health services (i.e., home birth deliveries) due to unaffordability, the government introduced the Free Health Care Initiative (FHCI) around 2010 as a way of improving the use of maternal and child healthcare [7,8].
In poor communities, out-of-pocket (OOP) expenditure on health becomes unrealistic. The FHCI removed user fees for women and young children needing to use healthcare services [9]. This led to some improvements in the uptake of life-saving maternal health services. The percentage of births delivered at home decreased over time, from 71.8% in 2008 to 24.4% in 2013 and 16.4% in 2019 [10][11][12]. High financial costs often become a barrier to healthcare use, especially in rural areas, where women are expected to travel long distances and pay more for transport to reach health services [13][14][15]. However, there is evidence of the existence of health inequalities in the country despite many improvements in maternal healthcare use. Studies show the existence of wealth-based health inequalities in some parts of the country [16,17]. To analyse inequalities in maternal healthcare, we adopted the framework developed by the Commission on Social Determinants of Health (CSDH). The CSDH framework argues that social position is an important determinant of health inequities [18]. The framework considers different elements of inequality such as socioeconomic status, education, race and geographic location as well as other elements [18][19][20]. In this study, we included maternal education and household wealth index as the structural determinants of health; we also included the place of residence in the analysis. The structural determinants are part of the social and economic context of individuals and are often regarded as the actual social determinants of health [18]. Inequalities in healthcare, especially in LMIC, have drawn a lot of attention in recent times [21]. Addressing the health needs of the populations in lower socioeconomic positions is crucial in improving the overall health of the entire population.
Although a few studies on healthcare inequalities have been conducted in Sierra Leone, many have focused on different aspects of healthcare inequality, and have used other measures and datasets, than those considered in this study [16,17,22,23]. This study aims to explore the extent of health inequalities in maternal healthcare as well as possible changes in these inequalities in Sierra Leone. --- Methods --- Data sources We used cross-sectional data from the 2008, 2013, and 2019 Demographic and Health Surveys (SLDHS). The DHS collects nationally representative data on various health-related interventions. This data is publicly available for download upon request. The DHS data are among the widely used sources of data for analysis of health-related inequalities. The DHS conducted in Sierra Leone sampled 7 758 households in 2008, 13 006 households in 2013, and 13 793 households in 2019, with a response rate of 97.6%, 99.3%, and 99.5% respectively [10][11][12]. For all data collection points, women of reproductive age (WRA) who were either usual household members or women present in the household on the night before the survey were eligible for interviews. We indicate the study sample in Table 1. --- Maternal health indicators This section presents the maternal health indicators used in the study. We selected the following indicators: (i) four or more antenatal visits, (ii) skilled antenatal care providers, (iii) births assisted by a skilled birth attendant (SBA), and (iv) births delivered in a facility. We defined four or more antenatal care visits as women who had at least four or more antenatal care visits for their most recent pregnancy; this definition has been used elsewhere [24,25]. We defined skilled antenatal care providers as women whose antenatal visits (for the most recent birth) were attended by a skilled provider. 
We defined births delivered in a facility as births that were delivered in a health facility; the health facilities included a government hospital, government health centre, government health post, other public sector, private hospital/clinic, and other private sector. We also defined births assisted by a skilled birth attendant (SBA) as births that were assisted by a skilled birth attendant (i.e. skilled provider). A skilled provider included a doctor, nurse/midwife, or auxiliary midwife. We dichotomised the selected indicator variables and coded them as 0 = no and 1 = yes. --- Inequality stratifiers and measures This study used three stratifiers to measure health-related inequality (maternal education, household wealth index, and place of residence). The household wealth index was computed for each household [using the Principal Components Analysis (PCA) method] to disaggregate the sample into equal-sized quintiles (i.e. poorest to richest) [26]. We measured the prevalence of four maternal health indicators for each of the data points considered in this study. We used rate ratios to measure relative differences in inequalities using the selected inequality stratifiers. The rate ratios provide a general description of the extent of inequalities. The rate ratios for the wealth index were measured as highest versus lowest household wealth quintile (RRhhwi = Rhighest quintile / Rlowest quintile). The rate ratios for maternal education were measured as highest versus none (RReduc = Rhigher / Rnone). The rate ratios for residence were measured as urban versus rural (RRur = Rurban / Rrural). The main limitation of the measures above is that they provide a basic picture of inequalities and ignore the differentials that often exist between all the categories of the inequality stratifier.
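The rate-ratio measures described above reduce to simple divisions of group coverage rates. A minimal sketch, using made-up prevalence figures rather than the study's estimates (the helper name `rate_ratio` is hypothetical):

```python
def rate_ratio(rate_advantaged, rate_disadvantaged):
    """Ratio of coverage in the advantaged group to the disadvantaged
    group; values above 1 indicate inequality favouring the advantaged."""
    return rate_advantaged / rate_disadvantaged

# Hypothetical facility-delivery coverage (proportions), not study data
rr_wealth = rate_ratio(0.80, 0.40)  # richest vs poorest wealth quintile
rr_educ = rate_ratio(0.85, 0.50)    # higher education vs no education
rr_urban = rate_ratio(0.75, 0.50)   # urban vs rural residence
print(rr_wealth, rr_educ, rr_urban)
```

A ratio drifting toward 1 over successive surveys would indicate narrowing inequality, which is how the trend tables below are read.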
For instance, in computing the rate ratios for the household wealth index, only the two extremes are considered (richest quintile to poorest quintile) and not the rest of the quintiles. To remedy this, we used the concentration index. The concentration index is the most widely used measure of health inequalities in public health studies. It shows the magnitude of health-related inequalities and whether these inequalities are concentrated among those with low socioeconomic status or those with high socioeconomic status. The index value becomes negative when the health intervention is concentrated among the poor and positive when it is concentrated among the rich [27]. If the concentration index is negative, then the health indicator is said to be concentrated among individuals with low socioeconomic status, while a positive concentration index shows that the health indicator is concentrated among individuals with a high socioeconomic status [28]. Therefore, to further quantify inequalities in the selected indicators, we employed the concentration index. Specifically, we employed the Erreygers corrected concentration index: E(h) = [4μ / (b_h − a_h)] × C(h), where b_h and a_h refer to the maximum and minimum bounds of the binary health indicator, μ refers to the mean of the health indicator, and C(h) refers to the concentration index [29]. The Erreygers corrected concentration index is recommended for use when the variable is binary [30]. The concentration curve is used to visualise the extent of inequalities: the inequality stratifier is ranked along the x-axis, the cumulative fraction of the health intervention is plotted on the y-axis, and a diagonal line represents the line of equality [27].
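The corrected index can be sketched numerically. This is an illustrative implementation on toy data, not the study's pipeline (the authors used Stata's conindex command); it uses the covariance form of the standard concentration index, C = 2·cov(h, rank)/μ, then applies the Erreygers correction E(h) = 4μ·C(h)/(b_h − a_h):

```python
import numpy as np

def erreygers_index(h, ses, b=1.0, a=0.0):
    """Erreygers-corrected concentration index for a bounded health
    indicator h, with individuals ranked by socioeconomic status `ses`
    (lowest = poorest). Positive values: concentrated among the rich."""
    order = np.argsort(ses, kind="stable")
    h = np.asarray(h, dtype=float)[order]
    n = len(h)
    rank = (2 * np.arange(1, n + 1) - 1) / (2 * n)  # fractional ranks
    mu = h.mean()
    c = 2 * np.mean((h - mu) * (rank - rank.mean())) / mu  # standard CI
    return 4 * mu * c / (b - a)  # Erreygers correction (binary: b-a = 1)

# Toy data: a binary indicator held only by the richer half of the sample
# gives an index near +1 (fully pro-rich concentration)
print(erreygers_index([0, 0, 0, 0, 0, 1, 1, 1, 1, 1], list(range(10))))
```

Reversing the toy indicator (held only by the poorer half) gives a value near −1, matching the sign convention stated in the text.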
Where the health intervention lies below the line of equality, there are pro-rich inequalities in that society, and where it lies above the line, there are pro-poor inequalities. We used the conindex command in Stata to estimate the corrected concentration index [31]. Various studies have used DHS data and applied similar methods to analyse the trends, determinants, and inequalities in maternal, child, and reproductive health interventions as well as service coverage [4,28,32-34]. --- Complex samples For all the data points, the SLDHS used a two-stage cluster sampling approach to select respondents for the surveys [10][11][12]. As such, we needed to adjust for data representation in our analysis; therefore, we used the Stata svyset command to account for the under- and over-sampling of certain enumeration areas. An alpha (α) level of 0.05 was considered statistically significant. We used Stata version 14.2 [35] and Microsoft Excel for all analyses in this study. --- Ethical considerations We conducted all analyses using publicly available data from the SLDHS. The Institutional Review Board of Macro International, Inc. reviewed and approved the collection of data for all periods of the SLDHS data used in this study. Permission was granted to the authors by the DHS program to use this data for this study. For more information on the ethical review processes used by the DHS program, see: http://goo.gl/ny8T6X. --- Results --- Prevalence and rate ratios Table 2 shows the ratios of education-related inequalities among WRA between those with no education and those with higher levels of education. There was an increase in the use of maternal health services over the three periods. The use of delivery care services (facility-based delivery and skilled birth attendance) doubled between 2008 and 2019.
The ratios for the selected maternal health indicators indicate the existence of inequalities that favour women with higher levels of education. Moreover, there was a decrease in inequalities between women with no education and those with higher levels of education from 2008 to 2019, as shown by the decrease in ratios. Table 3 examines the prevalence of maternal healthcare use as well as the wealth-based inequality ratios for the selected maternal health indicators. The use of maternal health services increased with socioeconomic status, with higher use of these services among women from the richest households. In terms of the ratios, the findings showed that inequalities favoured women from the richest households. The ratios declined between 2008 and 2019, indicating a decrease in pro-rich maternal health inequalities. Table 4 shows the prevalence of maternal healthcare use by urban-rural residence and ratios of urban to rural inequalities. The use of maternal health services was higher among women from urban areas than those from rural areas, except for the use of antenatal services in 2019. The ratios for antenatal services in 2019 indicated that inequalities slightly favoured women from rural areas. In general, the ratios in 2008 and 2013 showed that inequalities favoured women from urban areas for all indicators. --- Concentration curves The concentration curves show that there are inequalities in the use of maternal health services favouring those with a higher socioeconomic position (women with higher levels of education and women from the richest households). There is higher use of maternal health interventions by wealthier women than by poorer women (Figs 1-4). The inequalities decreased over time, as portrayed by the narrowing of the curves, particularly in the use of antenatal services. Figs 1 and 2 show that the inequality gap has decreased over time.
Moreover, by 2013 and 2019, the inequality gap had almost closed in terms of the use of antenatal services. Furthermore, the findings show that there is high inequality in the use of health facilities for delivery and skilled birth attendants, as shown by the wide curves. --- Concentration indices The wealth-based and maternal-education-based concentration indices show that there have been improvements (as shown by the decreasing levels over time) in inequality in maternal healthcare use over about eleven years, from 2008 to 2019 (Tables 5 and 6). The biggest decrease in the concentration index was for births assisted by a skilled birth attendant, which decreased from 0.330 to 0.113 for wealth-based inequalities and from 0.230 to 0.095 for education-based inequalities (Tables 5 and 6). Conversely, there was high inequality in the use of delivery care services (births delivered in a facility and births assisted by a skilled birth attendant) in both 2008 and 2013. --- Discussion This study aimed to explore health inequalities in maternal healthcare in Sierra Leone. The findings show that considerable progress has been made in the use of maternal health services; the measures employed in the study show that inequalities in maternal healthcare use have declined since 2008. Our findings suggest that maternal health inequalities favour women from wealthy households, educated women, as well as women from urban areas. This could be because women with better socioeconomic status (wealthy households and higher education) tend to live in urban areas and can better pay for the available health services compared to their counterparts [36,37]. Moreover, improvements in wealth- and education-based inequality were evident in 2013 and 2019 for the use of antenatal services. Although inequality declined over time, the use of delivery care services remained highly unequal.
Our findings are similar to other studies which found substantial inequalities in the use of delivery services; similar studies show that inequalities in delivery care services tend to favour wealthier and more educated women [3,34,37,38]. A study in Sierra Leone also found that maternal education made a considerable contribution to inequalities in institutional delivery [16]. The cultural aspects of the population as well as their perceptions of modern medicine and related health provision are critical in understanding the use of maternal healthcare services [16,39]. Additionally, the findings show that there is some degree of inequality favouring populations in urban areas; this supports the literature which argues that women in urban areas tend to have greater access to maternal health services compared to women in rural areas [40,41]. These findings speak to the rural-urban gap in the provision of healthcare services as well as the related barriers, such as costs and distance, disproportionately faced by women in rural areas [13][14][15][42]. The introduction of the FHCI and the removal of user fees might have contributed to the increase in the use of maternal health services as well as the reduction of inequality in the use of these services. Witter and colleagues have conducted numerous studies monitoring and evaluating the main pillars of the FHCI and how these pillars have been implemented on the ground [43,44]. The authors argue that the use of maternal health services in the country has increased, and this increase could be attributed to the implementation of the FHCI [43,45,46]. It is difficult to pinpoint the exact contribution of the FHCI to increasing the use of health services since some of these health services had high uptake rates before the implementation of the FHCI [46].
Although the FHCI may have contributed to an increase in the use of maternal healthcare services, there is still some level of inequality that exists in the use of maternal healthcare services in the period highlighted in this study. A comprehensive analysis of the impact of this initiative (and other related initiatives) on the use of maternal health services and related inequalities in the country will be important for future research. --- Strengths and limitations The main strength of this study is that we used nationally representative datasets from three collection periods to better estimate inequalities in maternal healthcare use. The study uses cross-sectional data, as such, the data cannot serve as the basis for establishing causality among variables. There may be recall bias because of the longer recall time, where respondents are required to report on past occurrences of the use of certain healthcare services. --- Conclusion Our findings show that despite efforts by the government to increase the use of maternal healthcare services among women with a lower socioeconomic status, the use of these services remains favourable to those with a higher socioeconomic status. To ensure balance among the different socioeconomic groups, policy initiatives need to prioritise women with lower socioeconomic status (those with the most unequal maternal health services) through projects aimed at reducing poverty and increasing their educational levels, especially among women from rural areas. Moreover, further studies are necessary to study the specific impact of the FHCI and similar initiatives on the use of maternal healthcare services in the country, and what impact these initiatives have had on the reduction of health inequalities. --- The dataset is freely available for download and use upon registration on the Demographic and Health Survey Program website (https://dhsprogram.com/data/new-userregistration.cfm). 
--- Author Contributions Conceptualization: Mluleki Tsawe, A Sathiya Susuman. Data curation: Mluleki Tsawe. Formal analysis: Mluleki Tsawe. Investigation: A Sathiya Susuman. Methodology: Mluleki Tsawe. Software: Mluleki Tsawe. Supervision: A Sathiya Susuman. Writing - original draft: Mluleki Tsawe, A Sathiya Susuman. Writing - review & editing: Mluleki Tsawe, A Sathiya Susuman.
Sierra Leone is among the countries with the poorest health outcomes. The country has made some progress in the uptake of maternal health services, but despite improvements in national coverage rates, there is no evidence of how equally these improvements have been distributed.
social networks. Much progress has been made both in obtaining and analyzing empirical data [2]-[5] and in mathematical modeling [6]-[9]. In a more recent set of extensions, scientists have begun studying the simultaneous propagation of multiple memes, in which not only the interaction between nodes (equivalently referred to as individuals) in the network, but also the interplay of the multiple memes, plays an important role in determining the system's dynamical behavior. These two forms of interaction together add complexity and research value to the multi-meme propagation model. This paper proposes a series of mathematical models of the propagation of competing products. Three key elements, namely the interpersonal network, the individuals and the competing products, are modeled respectively as a graph with fixed topology, the nodes of the graph, and the states of the nodes. Our models are based on a characterization of individuals' decision-making behavior under social pressure. Two factors determine an individual's choice of which product to adopt: an endogenous factor and an exogenous factor. The endogenous factor is the social contact between nodes via social links, which creates a tendency toward imitation, referred to in this paper as social pressure. The exogenous factor is whatever is unrelated to the network, e.g., the products' quality. At the microscopic level, we model the endogenous and exogenous factors respectively as two types of product-adoption processes: the social conversion and the self conversion. In social conversion, a node randomly picks one of its neighbors and follows that neighbor's state with a given probability characterizing how open-minded the node is. In self conversion, each node independently converts from one product to another with a given probability depending on the two products involved.
Although individuals exhibit subjective preferences when choosing among products, statistics over a large number of individuals' behaviors often reveal that the relative qualities of the competing products are objective. For example, although some people may have a special affection for feature phones, the fact that more people have converted from feature phones to smart phones, rather than the other way around, indicates that the latter are relatively better. We assume that the transition probabilities between the competing products are determined by their relative qualities and are thus homogeneous among the individuals.

b) Literature review: Various models have been proposed to describe propagation on networks, such as the percolation model on random graphs [10], [11], the independent cascade model [12]-[14], the linear threshold model [15]-[17] and the epidemic-like mean-field model [18]-[20]. As extensions of the propagation of a single meme, some recent papers have discussed the propagation of multiple memes, e.g., see [21]-[32]. Some of these papers adopt a Susceptible-Infected-Susceptible (SIS) epidemic-like model and discuss the long-term coexistence of multiple memes in single/multiple-layer networks, e.g., see [25]-[27]. Other papers focus instead on strategies of initial seeding to maximize or prevent the propagation of one specific meme in the presence of adversaries [29]-[32]. Among all the papers mentioned in this paragraph, our model is most closely related to the work by Stanoev et al. [28], but the social contagion process in [28] differs from ours, and no theoretical analysis of the general model is included there.

c) Contribution: First, we propose a novel and general model for competitive propagation on social networks.
By taking into account both the endogenous and exogenous factors, and by considering individual variance as well as the interplay of the competing products, our model is general enough to describe a large class of multi-meme propagation processes. Moreover, many existing models have difficulty dealing with the simultaneous contagion of multiple memes and avoid the problem by adding the assumption of an infinitesimal step length, which allows only a single contagion to occur at each step. Unlike these models, ours has no problem with multiple contagions, since we model the contagion process as the individual's own choice under social pressure, which is more suitable for the product-adoption setting. In addition, compared with the independent cascade model, in which individuals' choices are irreversible, our models adopt the more realistic assumption that conversions from one product to another are reversible and occur persistently.

Secondly, we propose a new concept, the product-conversion graph, to characterize the interplay between the products. There are two graphs in our model: the social network, describing the interpersonal connections, and the product-conversion graph, defining the transitions between products in self conversion, which in turn reflect the products' relative quality.

Thirdly, starting from the description of individuals' behavior, we develop two Markov-chain competitive propagation models that differ in the chronological order of the social conversion and self conversion processes. Applying the independence approximation, we propose two corresponding network competitive propagation models, which are systems of difference equations, so that the dimension of the problem is reduced and results from the theory of dynamical systems can be applied to the analysis of the approximate models.
Fourthly, both theoretical analysis and simulation results are presented on the dynamical properties of the network competitive propagation models. We discuss the existence, uniqueness and stability of the fixed point, as well as how the system's asymptotic state probability distribution is determined by the social network structure, the individuals' open-mindedness, the initial condition and, most importantly, the structure of the product-conversion graph. We find that, if the product-conversion graph contains only one absorbing strongly connected component, then the self conversion dominates the system's asymptotic behavior; with multiple absorbing strongly connected components in the product-conversion graph, the system's asymptotic state probability distribution also depends on the initial condition, the network topology and the individual open-mindedness. In addition, simulation results are presented that show the high accuracy of the independence approximation and reveal that the original Markov-chain model exhibits the same asymptotic behavior.

Finally, based on the network competitive propagation model, we propose two classes of non-cooperative games. In both games the players are the competing companies, each with a bounded investment budget to allocate between seeding, e.g., advertisement and promotion, and improving its product's quality. The first model is an infinitely repeated one-shot game, in which the players myopically maximize their next-step pay-offs; we investigate the unique Nash equilibrium at each stage. The theoretical analysis also reveals strategic and realistic insights into the seeding-quality trade-off and the allocation of seeding resources among the individuals. The second model is a dynamic game with infinite horizon, in which the players aim to maximize their discounted accumulated pay-offs. The existence of a Nash equilibrium for the two-player case is proved, and a numerical comparison with the one-shot game is given.
d) Organization: The rest of this paper is organized as follows. Section II gives the assumptions for two Markov-chain propagation models. Sections III and IV discuss the approximations of these two models, respectively. In Section V we discuss the two classes of games. Section VI concludes.

--- II. MODEL DESCRIPTION

a) Network: The social network is modeled as a graph $G = (V, E)$ with $n$ nodes and adjacency matrix $A = (a_{ij})_{n\times n}$, where $a_{ij} = 1$ if $(i, j) \in E$ and $a_{ij} = 0$ if $(i, j) \notin E$. The graph $G = (V, E)$ is always assumed connected and without self loops, i.e., $a_{ii} = 0$ for any $i \in V$. The row-normalized adjacency matrix is denoted by $\tilde A = (\tilde a_{ij})_{n\times n}$, where $\tilde a_{ij} = \frac{1}{N_i} a_{ij}$ with $N_i = \sum_{j=1}^{n} a_{ij}$.

b) Competing products and the states of nodes: Suppose there are $R$ competing products, denoted by $H_1, H_2, \dots, H_R$, propagating in the network. We consider a discrete-time model, i.e., $t \in \mathbb{N}$, and assume the products are mutually exclusive. We do not specify a state of adopting no product and collectively refer to all the states as "products". Denote by $D_i(t)$ the state of node $i$ after time step $t$. For any $t \in \mathbb{N}$, $D_i(t) \in \{H_1, H_2, \dots, H_R\}$. For simplicity let $\Theta = \{1, 2, \dots, R\}$, i.e., the set of product indices.

c) Nodes' product-adoption behavior: Two mechanisms define the individuals' behavior: the social conversion and the self conversion. The following two assumptions propose two models differing in the chronological order of the social and self conversions.

Assumption 1 (Social-self conversion model): Consider the competitive propagation of $R$ products in the network $G = (V, E)$. At time step $t+1$ for any $t \in \mathbb{N}$, suppose the previous state of any node $i$ is $D_i(t) = H_r$. Node $i$ first randomly picks one of its neighbors $j$ and follows $j$'s previous state, i.e., $D_i(t+1) = D_j(t)$, with probability $\alpha_i$.
If node $i$ does not follow $j$'s state in the social conversion, which occurs with probability $1-\alpha_i$, then node $i$ converts to product $H_s$ with probability $\delta_{rs}$ for any $s \neq r$, or stays with $H_r$ with probability $\delta_{rr}$.

Assumption 2 (Self-social conversion model): At any time step $t+1$, any node $i$ with $D_i(t) = H_r$ converts to $H_s$ with probability $\delta_{rs}$ for any $s \neq r$, or stays in the state $H_r$ with probability $\delta_{rr}$. If node $i$ stays in $H_r$ in the process above, then node $i$ randomly picks a neighbor $j$ and follows $D_j(t)$ with probability $\alpha_i$, or still stays in $H_r$ with probability $1-\alpha_i$.

Assumptions 1 and 2 are illustrated by Figure 1 and Figure 2, respectively. The parameters $\delta_{rs}$ define a directed and weighted graph with adjacency matrix $\Delta = (\delta_{rs})_{R\times R}$, referred to as the product-conversion graph. Figure 3 gives an example of the product-conversion graph for different smart-phone operating systems. Under either of the two assumptions, $\Delta$ is row-stochastic. In this paper we discuss several types of structure of the product-conversion graph, e.g., the case in which it is strongly connected, or consists of a transient subgraph and some isolated absorbing subgraphs.

The parameter $\alpha_i$ characterizes node $i$'s inclination to be influenced by social pressure. Define $\alpha = (\alpha_1, \alpha_2, \dots, \alpha_n)^\top$ as the individual open-mindedness vector, and assume $0 < \alpha_i < 1$ for any $i \in V$.

d) Problem description: According to either Assumption 1 or Assumption 2, at any time step $t+1$ the probability distribution of a node's state depends on its own state as well as on the states of all its neighbors at time $t$. Therefore, the collective evolution of the nodes' states is an $R^n$-state discrete-time Markov chain. Define $p_{ir}(t)$ as the probability that node $i$ is in state $H_r$ after time step $t$, i.e., $p_{ir}(t) = \mathbb{P}[D_i(t) = H_r]$. We aim to understand the dynamics of $p_{ir}(t)$.
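To make the two-stage mechanism of Assumption 1 concrete, the following sketch simulates one synchronous step of the social-self conversion chain. It is an illustrative implementation, not code from the paper; the cycle graph, the open-mindedness values and the product-conversion matrix are arbitrary example choices.

```python
import numpy as np

def social_self_step(states, neighbors, alpha, Delta, rng):
    """One synchronous step of the social-self conversion chain (Assumption 1).

    states:    current product index D_i(t) for each node
    neighbors: list of neighbor-index arrays (no self loops)
    alpha:     open-mindedness alpha_i of each node
    Delta:     row-stochastic product-conversion matrix (delta_rs)
    """
    R = Delta.shape[0]
    new_states = states.copy()
    for i, nbrs in enumerate(neighbors):
        if rng.random() < alpha[i]:
            # social conversion: copy the previous state of a random neighbor
            j = nbrs[rng.integers(len(nbrs))]
            new_states[i] = states[j]
        else:
            # self conversion: jump according to row delta_{r, .} of Delta
            new_states[i] = rng.choice(R, p=Delta[states[i]])
    return new_states

# Example: 4-node cycle, two products.
rng = np.random.default_rng(7)
neighbors = [np.array([1, 3]), np.array([0, 2]),
             np.array([1, 3]), np.array([0, 2])]
alpha = np.full(4, 0.5)
Delta = np.array([[0.6, 0.4],
                  [0.3, 0.7]])
states = np.array([0, 1, 0, 1])
counts = np.zeros(2)
for _ in range(2000):
    states = social_self_step(states, neighbors, alpha, Delta, rng)
    counts += np.bincount(states, minlength=2)
freq = counts / counts.sum()  # empirical adoption frequencies
```

Averaging indicator vectors over many independent runs of this chain yields Monte Carlo estimates of $p_{ir}(t)$, which is how the simulation section below compares the chain with its approximation.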
Since the Markov-chain models have exponential dimension and are difficult to analyze, we approximate them with lower-dimensional systems of difference equations and analyze instead the dynamical properties of the approximation systems.

e) Notation: Before proceeding to the next section, we introduce some frequently used notation (Table I): $x_r$ denotes the $r$-th column of a matrix $X \in \mathbb{R}^{n\times m}$; $x_{(i)}$ denotes the $i$-th row of $X$; $x_{(-i)}$ denotes the $i$-th row of $\tilde A X$, i.e., $x_{(-i)} = (x_{-i1}, x_{-i2}, \dots, x_{-im})$ where $x_{-ir} = \sum_{j=1}^{n} \tilde a_{ij} x_{jr}$; and $G(A)$ denotes the graph with adjacency matrix $A$.

--- III. NETWORK COMPETITIVE PROPAGATION MODEL WITH SOCIAL-SELF CONVERSION

This section is based on Assumption 1. We first derive an approximation model for the time evolution of $p_{ir}(t)$, referred to as the social-self conversion network competitive propagation model (social-self NCPM), and then analyze the asymptotic behavior of the approximation model and its relation to the social network topology, the product-conversion graph, the initial condition and the individuals' open-mindedness. Simulation results are presented at the end of this section.

A. Derivation of the social-self NCPM

Some notation used in this section is collected below.

Notation 3: For the competitive propagation of products $\{H_1, H_2, \dots, H_R\}$ on the network $G = (V, E)$: (1) define the random variable $X^r_i(t)$ by $X^r_i(t) = 1$ if $D_i(t) = H_r$ and $X^r_i(t) = 0$ if $D_i(t) \neq H_r$; due to the mutual exclusiveness of the products, for any $i \in V$, if $X^r_i(t) = 1$ then $X^s_i(t) = 0$ for any $s \neq r$; (2) define the $(n-1)$-tuple $D_{-i}(t) = (D_1(t), \dots, D_{i-1}(t), D_{i+1}(t), \dots, D_n(t))$, i.e., the states of all the nodes except node $i$ after time step $t$; (3) define, for simplicity, $P^{rs}_{ij}(t) = \mathbb{P}[X^r_i(t) = 1 \mid X^s_j(t) = 1]$, $P^r_i(t;-i) = \mathbb{P}[X^r_i(t) = 1 \mid D_{-i}(t)]$, and $\gamma^r_i(t;s,-i) = \mathbb{P}[X^r_i(t+1) = 1 \mid X^s_i(t) = 1, D_{-i}(t)]$.
In the derivation of the network competitive propagation model, the following approximation is adopted.

Approximation 4 (Independence approximation): For the competitive propagation of $R$ products on the network $G = (V, E)$, approximate the conditional probability $P^{ms}_{ij}(t)$ by the corresponding total probability $p_{im}(t)$, for any $m, s \in \Theta$ and any $i, j \in V$.

With the independence approximation, the social-self NCPM is presented in the theorem below.

Theorem 5 (Social-self NCPM): Consider the competitive propagation based on Assumption 1, with the social network and the product-conversion graph represented by their adjacency matrices $\tilde A = (\tilde a_{ij})_{n\times n}$ and $\Delta = (\delta_{rs})_{R\times R}$, respectively. The probability $p_{ir}(t)$ satisfies

$$p_{ir}(t+1) - p_{ir}(t) = \sum_{s\neq r}\alpha_i\sum_{j=1}^{n}\tilde a_{ij}\big(P^{sr}_{ij}(t)p_{jr}(t) - P^{rs}_{ij}(t)p_{js}(t)\big) + \sum_{s\neq r}(1-\alpha_i)\big(\delta_{sr}p_{is}(t) - \delta_{rs}p_{ir}(t)\big), \quad (1)$$

for any $i \in V$ and $r \in \Theta$. Applying the independence approximation, the approximation model for equation (1), i.e., the social-self NCPM, is

$$p_{ir}(t+1) = \alpha_i\sum_{j=1}^{n}\tilde a_{ij}p_{jr}(t) + (1-\alpha_i)\sum_{s=1}^{R}\delta_{sr}p_{is}(t). \quad (2)$$

Proof: By definition, $p_{ir}(t+1) - p_{ir}(t) = \mathbb{E}\big[\mathbb{E}[X^r_i(t+1) - X^r_i(t) \mid D_{-i}(t)]\big]$, where the conditional expectation is given by

$$\mathbb{E}[X^r_i(t+1) - X^r_i(t) \mid D_{-i}(t)] = \sum_{s\neq r}\big(\gamma^r_i(t;s,-i)P^s_i(t;-i) - \gamma^s_i(t;r,-i)P^r_i(t;-i)\big).$$

According to Assumption 1, $\gamma^r_i(t;s,-i)P^s_i(t;-i) = \alpha_i\sum_j\tilde a_{ij}X^r_j(t)P^s_i(t;-i) + (1-\alpha_i)\delta_{sr}P^s_i(t;-i)$. Therefore,

$$\mathbb{E}[\gamma^r_i(t;s,-i)P^s_i(t;-i)] = \alpha_i\sum_j\tilde a_{ij}\mathbb{E}[X^r_j(t)P^s_i(t;-i)] + (1-\alpha_i)\delta_{sr}\mathbb{E}[P^s_i(t;-i)].$$

On the right-hand side of the equation above, $\mathbb{E}[P^s_i(t;-i)] = p_{is}(t)$. Moreover,

$$\mathbb{E}[X^r_j(t)P^s_i(t;-i)] = \sum_{d_{-i-j}}\mathbb{P}[X^s_i(t)=1,\,X^r_j(t)=1,\,D_{-i-j}(t)=d_{-i-j}] = P^{sr}_{ij}(t)p_{jr}(t).$$

Applying the same computation to $\mathbb{E}[\gamma^s_i(t;r,-i)P^r_i(t;-i)]$, we obtain equation (1).
Replacing $P^{sr}_{ij}(t)$ and $P^{rs}_{ij}(t)$ by $p_{is}(t)$ and $p_{ir}(t)$, respectively, and using the identities $\sum_{s\neq r}p_{is}(t) = 1 - p_{ir}(t)$ and $\sum_{s\neq r}\delta_{rs} = 1 - \delta_{rr}$, we obtain equation (2).

The derivation of Theorem 5 is equivalent to the widely adopted mean-field approximation in the modeling of network epidemic spreading [19], [33], [34]. Notice that the independence approximation neither neglects the correlation between any two nodes' states nor destroys the network topology, since $p_{jr}(t)$, $p_{js}(t)$ and $\tilde a_{ij}$ all appear in the dynamics of $p_{ir}(t)$.

--- B. Asymptotic behavior of the social-self NCPM

Define the map $f:\mathbb{R}^{n\times R}\to\mathbb{R}^{n\times R}$ by

$$f(X) = \operatorname{diag}(\alpha)\tilde A X + (I-\operatorname{diag}(\alpha))X\Delta. \quad (3)$$

According to equation (2), the matrix form of the social-self NCPM is

$$P(t+1) = f\big(P(t)\big), \quad (4)$$

where $P(t) = (p_{ir}(t))_{n\times R}$. We analyze how the asymptotic behavior of system (4), i.e., the existence, uniqueness and stability of the fixed points of the map $f$, is determined by the two graphs introduced in our model: the social network with adjacency matrix $\tilde A$, and the product-conversion graph with adjacency matrix $\Delta$.

1) Structures of the social network and the product-conversion graph: Assume that the social network $G(\tilde A)$ has a globally reachable node. As for the product-conversion graph, we consider the general case: suppose $G(\Delta)$ has $m$ absorbing strongly connected components (absorbing SCCs) and a transient subgraph. Re-index the products such that the product index set of the $l$-th absorbing SCC is given by $\Theta_1 = \{1, 2, \dots, k_1\}$ and $\Theta_l = \{\sum_{u=1}^{l-1}k_u+1, \sum_{u=1}^{l-1}k_u+2, \dots, \sum_{u=1}^{l}k_u\}$ for any $l \in \{2, 3, \dots, m\}$, and the index set of the transient subgraph is $\Theta_0 = \{\sum_{l=1}^{m}k_l+1, \dots, R\}$, with $k_0 = R - \sum_{l=1}^{m}k_l$.
Then the adjacency matrix $\Delta$ of the product-conversion graph takes the form

$$\Delta = \begin{pmatrix}\hat\Delta & 0_{(R-k_0)\times k_0}\\ B & \Delta_0\end{pmatrix}, \quad (5)$$

where $\hat\Delta = \operatorname{diag}[\Delta_1, \Delta_2, \dots, \Delta_m]$ and $B = [B_1, B_2, \dots, B_m] \in \mathbb{R}^{k_0\times(R-k_0)}$, with each $B_l \in \mathbb{R}^{k_0\times k_l}$, is nonzero and entry-wise non-negative. The matrix $\Delta_l = (\delta^{\Theta_l}_{rs})_{k_l\times k_l}$, with $\delta^{\Theta_1}_{rs} = \delta_{rs}$ and $\delta^{\Theta_l}_{rs} = \delta_{\sum_{u=1}^{l-1}k_u+r,\ \sum_{u=1}^{l-1}k_u+s}$ for any $l \in \{2, 3, \dots, m\}$, is the adjacency matrix of the $l$-th absorbing SCC, and is thus irreducible and row-stochastic. The following definition classifies four types of structure of $G(\Delta)$.

Definition 6 (Four classes of product-conversion graphs): Based on whether the product-conversion graph $G(\Delta)$ has a single absorbing SCC ($m=1$) or multiple absorbing SCCs ($m\ge 2$), and whether it contains a transient subgraph, we distinguish: Case 1, a single absorbing SCC and no transient subgraph, i.e., $G(\Delta)$ strongly connected; Case 2, a single absorbing SCC and a transient subgraph; Case 3, multiple absorbing SCCs and no transient subgraph; Case 4, multiple absorbing SCCs and a transient subgraph.

2) Stability analysis of the social-self NCPM: The following theorem states the distinct asymptotic behaviors of the social-self NCPM for the different structures of the product-conversion graph.

Theorem 7 (Asymptotic behavior of the social-self NCPM): Consider the social-self NCPM on a strongly connected social network $G(\tilde A)$, with product-conversion graph $G(\Delta)$. Assume that: (i) each absorbing SCC $G(\Delta_l)$ of $G(\Delta)$ is aperiodic; (ii) for any $\Delta_l$, $l \in \{1, 2, \dots, m\}$, at least one column of $\Delta_l$ is entry-wise strictly positive; (iii) for any $r \in \Theta_0$, $\sum_{s\in\Theta_0}\delta_{rs} < 1$, i.e., $\Delta_0 1_{k_0} \prec 1_{k_0}$. Then, for any $P(0) \in S_{nR}(1_n)$, where $S_{nR}(1_n)$ denotes the set of entry-wise non-negative matrices $X \in \mathbb{R}^{n\times R}$ with $X1_R = 1_n$, the solution $P(t)$ to equation (4) has the following properties, depending on the structure of $\Delta$: (i) in Case 1, $P(t)$ converges to $P^* = 1_n w(\Delta)^\top$ exponentially fast, where $w(\Delta)$ is the normalized dominant left eigenvector of $\Delta$ and $P^*$ is the unique fixed point in $S_{nR}(1_n)$ of the map $f$ defined by equation (3).
Moreover, the convergence rate is $\sigma(\alpha,\Delta) = \alpha_{\max} + (1-\alpha_{\max})\varepsilon(\Delta)$, where $\alpha_{\max} = \max_i\alpha_i$ and $\varepsilon(\Delta) = 1 - \sum_{r=1}^{R}\min_s\delta_{sr}$;

(ii) in Case 2, for any $i \in V$,

$$\lim_{t\to\infty}p_{ir}(t) = \begin{cases}0, & \text{for any } r\in\Theta_0,\\ w_r(\Delta_1), & \text{for any } r\in\Theta_1;\end{cases}$$

(iii) in Case 3, for any $l \in \{1, 2, \dots, m\}$ and $i \in V$,

$$\lim_{t\to\infty}p^{\Theta_l}_{(i)}(t) = \big(w(M)^\top P^{\Theta_l}(0)1_{k_l}\big)\,w(\Delta_l)^\top,$$

where $M = \operatorname{diag}(\alpha)\tilde A + I - \operatorname{diag}(\alpha)$ and $P^{\Theta_l}(t) = \big(p^{\Theta_l}_{ir}(t)\big)_{n\times k_l}$, with $p^{\Theta_l}_{ir}(t) = p_{i,\sum_{u=1}^{l-1}k_u+r}(t)$ and $p^{\Theta_l}_{(i)}(t)$ the $i$-th row of $P^{\Theta_l}(t)$;

(iv) in Case 4, for any $l \in \{1, 2, \dots, m\}$ and $i \in V$,

$$\lim_{t\to\infty}p_{ir}(t) = \begin{cases}0, & \text{for any } r\in\Theta_0,\\ \zeta_l\,w_r(\Delta_l), & \text{for any } r\in\Theta_l,\end{cases}$$

where $\zeta_l$ depends on $\tilde A$, $B_l$, $P^{\Theta_l}(0)$ and $P^{\Theta_0}(0)$, and satisfies $\sum_{l=1}^{m}\zeta_l = 1$.

Before proving the theorem, a useful and well-known lemma is stated without proof.

Lemma 8 (Row-stochastic matrices after a pairwise-difference similarity transform): Let $M \in \mathbb{R}^{n\times n}$ be row-stochastic. Suppose the graph $G(M)$ is aperiodic and has a globally reachable node. Then the nonsingular matrix

$$Q = \begin{pmatrix}-1 & 1 & & \\ & \ddots & \ddots & \\ & & -1 & 1\\ 1/n & \cdots & 1/n & 1/n\end{pmatrix}$$

satisfies

$$QMQ^{-1} = \begin{pmatrix}M_{\mathrm{red}} & 0_{n-1}\\ c^\top & 1\end{pmatrix}$$

for some $c \in \mathbb{R}^{n-1}$ and $M_{\mathrm{red}} \in \mathbb{R}^{(n-1)\times(n-1)}$. Moreover, $M_{\mathrm{red}}$ is discrete-time exponentially stable.

Proof of Theorem 7: (1) Case 1: Since the matrix $\Delta$ is row-stochastic, irreducible and aperiodic, by the Perron-Frobenius theorem $w(\Delta) \in \mathbb{R}^R$ is well-defined. Substituting $P^*$, defined by $p^*_{(i)} = w(\Delta)^\top$ for any $i \in V$, into equation (3), we verify that $P^*$ is a fixed point of $f$. For any $X, Y \in \mathbb{R}^{n\times R}$, define the distance $d(X, Y) = \|X - Y\|_\infty = \max_i \|x_{(i)} - y_{(i)}\|_1$. Then $(S_{nR}(1_n), d)$ is a complete metric space.
For any $X \in S_{nR}(1_n)$, it is easy to check that $f(X) \succeq 0_{n\times R}$ and $f(X)1_R = \operatorname{diag}(\alpha)\tilde A X1_R + (I-\operatorname{diag}(\alpha))X\Delta 1_R = 1_n$. Therefore, $f$ maps $S_{nR}(1_n)$ to $S_{nR}(1_n)$. For any $X \in S_{nR}(1_n)$, according to equation (3),

$$\|f(X)_{(i)} - f(P^*)_{(i)}\|_1 \le \alpha_i\|x_{(-i)} - p^*_{(-i)}\|_1 + (1-\alpha_i)\|(x_{(i)} - p^*_{(i)})\Delta\|_1. \quad (6)$$

The first term on the right-hand side of (6) satisfies $\|x_{(-i)} - p^*_{(-i)}\|_1 \le \sum_{r=1}^{R}\sum_{j=1}^{n}\tilde a_{ij}|x_{jr} - w_r(\Delta)| \le \|X - P^*\|_\infty$. The second term satisfies $\|(x_{(i)} - p^*_{(i)})\Delta\|_1 = \sum_{r=1}^{R}\big|\sum_{s=1}^{R}(x_{is} - w_s(\Delta))\delta_{sr}\big|$. If $x_{(i)} = p^*_{(i)}$, then $\|f(X)_{(i)} - f(P^*)_{(i)}\|_1 \le \alpha_i\|X - P^*\|_\infty$. If $x_{(i)} \neq p^*_{(i)}$, since $x_{(i)}1_R = p^*_{(i)}1_R = 1$, both the set $\Omega_1 = \{s \mid x_{is} > w_s(\Delta)\}$ and the set $\Omega_2 = \{s \mid x_{is} < w_s(\Delta)\}$ are nonempty, and $\sum_{s\in\Omega_1}(x_{is} - w_s(\Delta)) = \sum_{s\in\Omega_2}(w_s(\Delta) - x_{is}) = \frac12\sum_{s=1}^{R}|x_{is} - w_s(\Delta)|$. Therefore,

$$\|(x_{(i)} - p^*_{(i)})\Delta\|_1 = \sum_{r=1}^{R}\sum_{s=1}^{R}|x_{is} - w_s(\Delta)|\delta_{sr} - 2\sum_{r=1}^{R}\min\Big\{\sum_{s\in\Omega_1}(x_{is} - w_s(\Delta))\delta_{sr},\ \sum_{s\in\Omega_2}(w_s(\Delta) - x_{is})\delta_{sr}\Big\}, \quad (7)$$

where

$$\min\Big\{\sum_{s\in\Omega_1}(x_{is} - w_s(\Delta))\delta_{sr},\ \sum_{s\in\Omega_2}(w_s(\Delta) - x_{is})\delta_{sr}\Big\} \ge \frac12\min_s\delta_{sr}\,\|x_{(i)} - p^*_{(i)}\|_1.$$

Substituting the inequality above into (7), we obtain $\|(x_{(i)} - p^*_{(i)})\Delta\|_1 \le \big(1 - \sum_{r=1}^{R}\min_s\delta_{sr}\big)\|x_{(i)} - p^*_{(i)}\|_1$. Since $\sum_{r}\delta_{sr} = 1$ for any $s$, $\sum_{r=1}^{R}\min_s\delta_{sr}$ is no larger than $1$. In addition, since at least one column of $\Delta$ is strictly positive, $\sum_{r=1}^{R}\min_s\delta_{sr} > 0$. Therefore $0 \le \varepsilon(\Delta) = 1 - \sum_{r=1}^{R}\min_s\delta_{sr} < 1$, and $\|f(X)_{(i)} - p^*_{(i)}\|_1 \le \big(\alpha_i + (1-\alpha_i)\varepsilon(\Delta)\big)\|X - P^*\|_\infty$. This leads to $\|f(X) - f(P^*)\|_\infty \le \sigma(\alpha,\Delta)\|X - P^*\|_\infty$ for any $X \in S_{nR}(1_n)$, with $0 < \sigma(\alpha,\Delta) < 1$. This concludes the proof for Case 1.
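The Case 1 behavior just proved is easy to check numerically: iterating the matrix form $P(t+1) = \operatorname{diag}(\alpha)\tilde A P(t) + (I-\operatorname{diag}(\alpha))P(t)\Delta$, every row of $P(t)$ converges to $w(\Delta)^\top$ regardless of the social network, open-mindedness and initial condition. The sketch below uses assumed example values (a 4-node cycle and the $\Delta$ from the simulation section), not data from the paper; for this $\Delta$, $w(\Delta) = (3/7, 4/7)$.

```python
import numpy as np

def ncpm_step(P, A_tilde, alpha, Delta):
    """Social-self NCPM update, equation (2), in matrix form."""
    return alpha[:, None] * (A_tilde @ P) + (1.0 - alpha)[:, None] * (P @ Delta)

# Row-normalized adjacency of a 4-node cycle.
A_tilde = np.array([[0, .5, 0, .5],
                    [.5, 0, .5, 0],
                    [0, .5, 0, .5],
                    [.5, 0, .5, 0]])
alpha = np.full(4, 0.5)
Delta = np.array([[0.6, 0.4],
                  [0.3, 0.7]])   # irreducible and aperiodic: Case 1
P = np.array([[1., 0.], [0., 1.], [1., 0.], [0., 1.]])  # P(0)
for _ in range(200):
    P = ncpm_step(P, A_tilde, alpha, Delta)
```

With $\alpha_{\max} = 0.5$ and $\varepsilon(\Delta) = 1 - (0.3 + 0.4) = 0.3$, the contraction factor is $\sigma = 0.5 + 0.5\cdot 0.3 = 0.65$, so 200 iterations drive the error far below machine precision.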
(2) Case 2: For the transient set $\Theta_0$, define $P^{\Theta_0}(t) = \big(p^{\Theta_0}_{ir}(t)\big)_{n\times k_0}$, with $p^{\Theta_0}_{ir}(t) = p_{i,r+k_1}(t)$ for any $i \in V$ and $r \in \{1, 2, \dots, k_0\}$. Then

$$P^{\Theta_0}(t+1) = \operatorname{diag}(\alpha)\tilde A P^{\Theta_0}(t) + (I-\operatorname{diag}(\alpha))P^{\Theta_0}(t)\Delta_0.$$

By Assumption (iii) of Theorem 7, $c = \max_{r\in\{1,2,\dots,k_0\}}\sum_{s=1}^{k_0}\delta^{\Theta_0}_{rs} < 1$ and $\Delta_0 1_{k_0} \preceq c1_{k_0}$. Therefore,

$$P^{\Theta_0}(t+1)1_{k_0} \preceq \big(\operatorname{diag}(\alpha)\tilde A + c(I-\operatorname{diag}(\alpha))\big)P^{\Theta_0}(t)1_{k_0}.$$

Since $\rho\big(\operatorname{diag}(\alpha)\tilde A + c(I-\operatorname{diag}(\alpha))\big) < 1$, for any $P^{\Theta_0}(0)$, $P^{\Theta_0}(t) \to 0_{n\times k_0}$ exponentially fast. Define $P^{\Theta_1}(t) = (p_{ir}(t))_{n\times k_1}$; then

$$P^{\Theta_1}(t+1) = \operatorname{diag}(\alpha)\tilde A P^{\Theta_1}(t) + (I-\operatorname{diag}(\alpha))P^{\Theta_1}(t)\Delta_1 + (I-\operatorname{diag}(\alpha))P^{\Theta_0}(t)B.$$

Since $P^{\Theta_0}(t)$ converges to $0_{n\times k_0}$ exponentially fast, we have: 1) there exist $C > 0$ and $0 < \xi < 1$ such that $\|(I-\operatorname{diag}(\alpha))P^{\Theta_0}(t)B\|_\infty \le C\xi^t$; 2) $\|P^{\Theta_1}(t)1_{k_1} - 1_n\|_\infty \to 0$ exponentially fast, which implies $d\big(P^{\Theta_1}(t), S_{nk_1}(1_n)\big) \to 0$ exponentially fast. For any $X \in S_{nk_1}(1_n)$, define the map $\bar f$ by $\bar f(X) = \operatorname{diag}(\alpha)\tilde A X + (I-\operatorname{diag}(\alpha))X\Delta_1$. According to the proof for Case 1, there exists a unique fixed point
--- VI. CONCLUSION

In this paper we propose a class of propagation models for multiple competing products over a social network. We consider two propagation mechanisms, social conversion and self conversion, corresponding respectively to endogenous and exogenous factors. A novel concept, the product-conversion graph, is proposed to characterize the interplay among competing products. According to the chronological order of social and self conversions, we develop two Markov-chain models and, based on the independence approximation, approximate them with two corresponding systems of difference equations. Our theoretical analysis of these two approximated models reveals the dependence of their asymptotic behavior on the structures of both the product-conversion graph and the social network, as well as on the initial condition. In addition to the theoretical work, we investigate via numerical analysis the accuracy of the independence approximation and the asymptotic behavior of the Markov-chain model, for the case where social conversion occurs before self conversion. Finally, we propose two classes of games based on the competitive propagation model: the repeated one-shot game and the dynamic infinite-horizon game. We characterize the quality-seeding trade-off in the first game and the Nash equilibrium in both games.
$P^*$ for the map $\bar f$ in $S_{nk_1}(1_n)$, given by $p^*_{ir} = w_r(\Delta_1)$. Moreover, there exists $0 < \sigma < 1$ such that, for any $X \in S_{nk_1}(1_n)$, $\|\bar f(X) - P^*\|_\infty \le \sigma\|X - P^*\|_\infty$. Since the function $\|\bar f(X) - P^*\|_\infty/\|X - P^*\|_\infty$ is continuous on $\bar S_{nk_1}(1_n)\setminus\{P^*\}$, where $\bar S_{nk_1}(1_n)$ denotes the set of entry-wise non-negative matrices with row sums at most one, and $d\big(P^{\Theta_1}(t), S_{nk_1}(1_n)\big) \to 0$, there exist $T > 0$ and $0 < \hat\sigma < 1$ such that, for any $t > T$, $\|\bar f\big(P^{\Theta_1}(t)\big) - P^*\|_\infty \le \hat\sigma\|P^{\Theta_1}(t) - P^*\|_\infty$. For $t \in \mathbb{N}$ much larger than $T$,

$$\|P^{\Theta_1}(t) - P^*\|_\infty \le \hat\sigma^{\,t-T}\|P^{\Theta_1}(T) - P^*\|_\infty + C\,\frac{\xi^{t} - \hat\sigma^{\,t-T}\xi^{T}}{\xi - \hat\sigma}.$$

Since $0 < \hat\sigma < 1$ and $0 < \xi < 1$, as $t \to \infty$, $\|P^{\Theta_1}(t) - P^*\|_\infty \to 0$. This concludes the proof for Case 2.

(3) Case 3: For any $l \in \{1, 2, \dots, m\}$,

$$P^{\Theta_l}(t+1) = f\big(P^{\Theta_l}(t)\big) = (I-\operatorname{diag}(\alpha))P^{\Theta_l}(t)\Delta_l + \operatorname{diag}(\alpha)\tilde A P^{\Theta_l}(t),$$

where $\Delta_l 1_{k_l} = 1_{k_l}$ since $\Delta_l$ is absorbing and strongly connected. Therefore, $P^{\Theta_l}(t+1)1_{k_l} = MP^{\Theta_l}(t)1_{k_l}$, where $M = I - \operatorname{diag}(\alpha) + \operatorname{diag}(\alpha)\tilde A$ is row-stochastic and aperiodic. Moreover, the graph $G(M)$ has a globally reachable node and therefore the matrix $M$ has a normalized dominant left eigenvector $w(M)$. Applying the Perron-Frobenius theorem,

$$\lim_{t\to\infty}P^{\Theta_l}(t)1_{k_l} = \big(w(M)^\top P^{\Theta_l}(0)1_{k_l}\big)1_n.$$

Let $c_l = w(M)^\top P^{\Theta_l}(0)1_{k_l}$.
Following the same line of argument as in the proof for Case 2, $f$ maps $S_{nk_l}(c_l1_n)$ to $S_{nk_l}(c_l1_n)$, and maps $\bar S_{nk_l}(c_l1_n)$ to $\bar S_{nk_l}(c_l1_n)$. Moreover, $P^* \in \mathbb{R}^{n\times k_l}$ with $p^*_{(i)} = c_l\,w(\Delta_l)^\top$, for any $i \in V$, is the unique fixed point of the map $f$ in $S_{nk_l}(c_l1_n)$. In addition, there exists $0 < \sigma < 1$ such that for any $X \in S_{nk_l}(c_l1_n)$, $\|f(X) - P^*\|_\infty \le \sigma\|X - P^*\|_\infty$. The function $\phi(X) = \|f(X) - P^*\|_\infty/\|X - P^*\|_\infty$ is continuous on $\bar S_{nk_l}(c_l1_n)\setminus\{P^*\}$. Since for any $P^{\Theta_l}(0) \in \bar S_{nk_l}(c_l1_n)\setminus\{P^*\}$ we have $P^{\Theta_l}(t)1_{k_l} \to c_l1_n$, which implies $d\big(P^{\Theta_l}(t), S_{nk_l}(c_l1_n)\big) \to 0$ as $t \to \infty$, there exist $0 < \hat\sigma < 1$ and $T > 0$ such that for any $t > T$, $\|f\big(P^{\Theta_l}(t)\big) - P^*\|_\infty \le \hat\sigma\|P^{\Theta_l}(t) - P^*\|_\infty$. Therefore, $P^{\Theta_l}(t) \to P^*$ as $t \to \infty$.

(4) Case 4: For any $l \in \{1, 2, \dots, m\}$,

$$P^{\Theta_l}(t+1) = \operatorname{diag}(\alpha)\tilde A P^{\Theta_l}(t) + (I-\operatorname{diag}(\alpha))P^{\Theta_l}(t)\Delta_l + (I-\operatorname{diag}(\alpha))P^{\Theta_0}(t)B_l.$$

Therefore,

$$P^{\Theta_l}(t+1)1_{k_l} = MP^{\Theta_l}(t)1_{k_l} + \epsilon(t), \quad (8)$$

where $M = \operatorname{diag}(\alpha)\tilde A + I - \operatorname{diag}(\alpha)$ is row-stochastic and primitive, and the vector $\epsilon(t)$ is a vanishing perturbation according to the proof for Case 2. Let $x(t) = P^{\Theta_l}(t)1_{k_l}$ and $y(t) = Qx(t)$, with $Q$ defined in Lemma 8. Let $y_{\mathrm{err}}(t) = (y_1(t), y_2(t), \dots, y_{n-1}(t))^\top$, where $y_i(t) = x_{i+1}(t) - x_i(t)$ for any $i = 1, 2, \dots, n-1$. Then $y(t+1) = QMQ^{-1}y(t) + Q\epsilon(t)$. Let $\tilde\epsilon(t) = (\tilde\epsilon_1(t), \tilde\epsilon_2(t), \dots, \tilde\epsilon_{n-1}(t))^\top$ with $\tilde\epsilon_i(t) = \sum_j Q_{ij}\epsilon_j(t)$. Then $\tilde\epsilon(t)$ is also a vanishing perturbation, and $y_{\mathrm{err}}(t+1) = M_{\mathrm{red}}\,y_{\mathrm{err}}(t) + \tilde\epsilon(t)$. This is an exponentially stable linear system with a vanishing perturbation. Since $\rho(M_{\mathrm{red}}) < 1$, $y_{\mathrm{err}}(t) \to 0_{n-1}$ as $t \to \infty$, which implies $P^{\Theta_l}(t)1_{k_l} \to \zeta_l 1_n$, where $\zeta_l$ depends on $M$, $B_l$, $P^{\Theta_l}(0)$ and $P^{\Theta_0}(0)$.
Moreover, $\sum_{l}\zeta_l = 1$ since $P(t)1_R = 1_n$. Following the same argument as in the proof for Case 3, we obtain $\lim_{t\to\infty}p^{\Theta_l}_{(i)}(t) = \zeta_l\,w(\Delta_l)^\top$.

3) Interpretations of Theorem 7: The analysis of Cases 1 to 4 leads to the following conclusions: 1) the probability of adopting any product in the transient subgraph eventually decays to zero; 2) for a product-conversion graph with only one absorbing SCC $G(\Delta_1)$, the system's asymptotic product-adoption probability distribution depends only on $w(\Delta_1)$; in this case, the self conversion dominates the competitive propagation.

In the simulations, the product-conversion graph is given by

$$\Delta = \begin{pmatrix}\Delta_1 & 0 & 0\\ 0 & \Delta_2 & 0\\ B_1 & B_2 & \Delta_0\end{pmatrix} = \begin{pmatrix}0.6 & 0.4 & 0 & 0\\ 0.3 & 0.7 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0.8 & 0 & 0.2\end{pmatrix}. \quad (9)$$

The Markov-chain solution is computed by the Monte Carlo method. In each sample, $\tilde A$, $\alpha$ and $P(0)$ are randomly generated and set identical for the Markov chain and the NCPM. The probability $p_{12}(t)$ is plotted for both models on different types of social networks, such as the complete graph, the Erdős-Rényi graph, the power-law graph and the star graph. As shown in Figures 4 and 5, the solution to the social-self NCPM nearly overlaps with the Markov-chain solution in every plot, due to the i.i.d. self-conversion process.

b) Asymptotic behavior of the Markov-chain model: In Figures 6 and 7, all the trajectories $p_{ir}(t)$ for the Markov-chain model on an Erdős-Rényi graph with $n = 5$, $p = 0.4$ and randomly generated $\alpha$ are computed by the Monte Carlo method. Figure 6(a) corresponds to the product-conversion graph defined by

$$\Delta = \begin{pmatrix}\Delta_1 & 0\\ 0 & \Delta_2\end{pmatrix},\quad \Delta_1 = \begin{pmatrix}0.6 & 0.4\\ 0.3 & 0.7\end{pmatrix},\quad \Delta_2 = \begin{pmatrix}0.5 & 0.5\\ 0.1 & 0.9\end{pmatrix}.$$

The simulation results show that, in these two cases, the Markov-chain solutions converge exactly to the values indicated by the social-self NCPM, regardless of the initial condition. The matrix $\Delta$ used in Figure 7 is given by equation (9).
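The accuracy comparison described above can be reproduced in a few lines: run many independent copies of the Markov chain of Assumption 1, estimate $p_{12}(t)$ by averaging, and compare with the NCPM trajectory. This is a hedged sketch with assumed example parameters (a small cycle graph rather than the Erdős-Rényi graphs of the figures, and a two-product $\Delta$).

```python
import numpy as np

rng = np.random.default_rng(1)
A_tilde = np.array([[0, .5, 0, .5],
                    [.5, 0, .5, 0],
                    [0, .5, 0, .5],
                    [.5, 0, .5, 0]])
neighbors = [np.flatnonzero(row) for row in A_tilde]
alpha = np.full(4, 0.4)
Delta = np.array([[0.6, 0.4],
                  [0.3, 0.7]])
T, n_runs = 15, 3000
states0 = np.array([0, 1, 0, 1])

# Monte Carlo estimate of p_12(t) = P[node 1 adopts product H_2 at time t].
counts = np.zeros(T + 1)
for _ in range(n_runs):
    s = states0.copy()
    counts[0] += (s[0] == 1)
    for t in range(1, T + 1):
        new = s.copy()
        for i, nbrs in enumerate(neighbors):
            if rng.random() < alpha[i]:
                # social conversion: imitate a random neighbor
                new[i] = s[nbrs[rng.integers(len(nbrs))]]
            else:
                # self conversion via the Delta row of the current product
                new[i] = rng.choice(2, p=Delta[s[i]])
        s = new
        counts[t] += (s[0] == 1)
mc = counts / n_runs

# NCPM trajectory of the same probability.
P = np.array([[1., 0.], [0., 1.], [1., 0.], [0., 1.]])
ncpm = [P[0, 1]]
for _ in range(T):
    P = alpha[:, None] * (A_tilde @ P) + (1 - alpha)[:, None] * (P @ Delta)
    ncpm.append(P[0, 1])
ncpm = np.array(ncpm)
```

On small graphs such as this one, the two trajectories stay close, in line with the near overlap reported in Figures 4 and 5.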
As illustrated by Figure 7, the asymptotic adoption probabilities in the Markov-chain model vary with the initial condition, consistent with the results of Theorem 7.

--- IV. ANALYSIS OF THE SELF-SOCIAL NETWORK COMPETITIVE PROPAGATION MODEL

In this section we discuss the network competitive propagation model based on Assumption 2, i.e., the case in which self conversion occurs before social conversion at each time step. We first propose an approximation model, referred to as the self-social network competitive propagation model (self-social NCPM), and then analyze its dynamical properties.

Theorem 9 (Self-social NCPM): Consider the competitive propagation model based on Assumption 2, with the social network and the product-conversion graph represented by their adjacency matrices $\tilde A$ and $\Delta$, respectively. The probability $p_{ir}(t)$ satisfies

$$p_{ir}(t+1) - p_{ir}(t) = \sum_{s\neq r}\big(\delta_{sr}p_{is}(t) - \delta_{rs}p_{ir}(t)\big) + \sum_{s\neq r}\delta_{ss}\alpha_i\sum_{j=1}^{n}\tilde a_{ij}p_{is}(t)P^{rs}_{ji}(t) - \sum_{s\neq r}\delta_{rr}\alpha_i\sum_{j=1}^{n}\tilde a_{ij}p_{ir}(t)P^{sr}_{ji}(t),$$

for any $i \in V$ and $r \in \Theta$. Applying the independence approximation, the matrix form of the self-social NCPM is

$$P(t+1) = P(t)\Delta + \operatorname{diag}(\alpha)\operatorname{diag}\big(P(t)d\big)\tilde A P(t) - \operatorname{diag}(\alpha)P(t)\operatorname{diag}(d), \quad (10)$$

with $P(t) = (p_{ir}(t))_{n\times R}$ and $d = (\delta_{11}, \delta_{22}, \dots, \delta_{RR})^\top$. It is straightforward to check that, for any $P(t) \in S_{nR}(1_n)$, $P(t+1)$ is still in $S_{nR}(1_n)$. By the Brouwer fixed-point theorem, there exists at least one fixed point of system (10) in $S_{nR}(1_n)$. Since the nonlinearity of equation (10) adds much difficulty to its analysis, in the remainder of this section we discuss the special case $R = 2$. For simplicity, in this section let $p(t) = p_2(t) = (p_{12}(t), p_{22}(t), \dots, p_{n2}(t))^\top$. Without loss of generality, assume $\delta_{22} \ge \delta_{11}$.
Define the map h : R^n → R^n by

h(x) = γ_12 1_n + (1 − γ_12 − γ_21)x + γ_11 diag(α)Ãx − γ_22 diag(α)x + (γ_22 − γ_11) diag(α) diag(x)Ãx.   (11)

Then the self-social NCPM for R = 2 is written as

p(t+1) = h(p(t)),   (12)

and p_1(t) is computed by p_1(t) = 1_n − p(t). We present below the main theorem of this section. Theorem 10 (Dynamical behavior of the self-social NCPM with R = 2): Consider the two-product self-social NCPM, given by equations (11) and (12), with the parameters γ_11, γ_12, γ_21, γ_22, α_1, …, α_n all in the interval (0, 1), and γ_22 ≥ γ_11. We conclude that: (i) system (12) has a unique fixed point p* in [0, 1]^n; (ii) the fixed point satisfies

(1/2) 1_n ≤ p* ≤ (γ_12/(γ_12 + γ_21)) 1_n,   (13)

and, for any i ∈ V,

p*_i − p*_{−i} ≤ ((1 − α_i/2)/α_i) ((γ_22 − γ_11)/(γ_22 + γ_11));   (14)

(iii) if γ_11 = γ_22, then p* = (1/2)1_n is globally exponentially stable; (iv) if inequality (15) holds, then p* is locally stable; (v) if

α_i ≤ (γ_22 + γ_11)/(3γ_22 − γ_11) for any i ∈ V,   (16)

then p* is globally exponentially stable. Moreover, the convergence rate is upper bounded by max_i max(ℓ_i, K_i ℓ_i + K_i − 1), where ℓ_i and K_i are defined as ℓ_i = (2γ_22 − γ_11)α_i/K_i and K_i = γ_12 + γ_21 + γ_22 α_i, respectively. Proof: We start the proof by establishing that h is a continuous map from [0, 1]^n to [0, 1]^n itself. Firstly, since h(x) = γ_12(1_n − x) + γ_11 diag(α)Ãx + (1 − γ_21)x − γ_22 diag(α)x + (γ_22 − γ_11) diag(α) diag(x)Ãx, and (1 − γ_21)x − γ_22 diag(α)x ≥ (1 − γ_21 − γ_22)x = 0_n, the right-hand side of the expression of h is non-negative. Therefore, for any x ∈ [0, 1]^n, h(x) ≥ 0_n. Secondly, recall that x_{−i} = (Ãx)_i = Σ_j ã_ij x_j. That is, x_{−i} is the weighted average of all the x_j's except x_i, and the value of x_{−i} does not depend on x_i since ã_ii = 0. Moreover, since Σ_j ã_ij = 1 for any i ∈ V, x_{−i} is also in the interval [0, 1].
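A minimal numerical illustration of Theorem 10, with all parameter values chosen by us (a small 3-node Ã and γ's satisfying γ_22 ≥ γ_11, not anything from the paper): iterating h drives an arbitrary initial condition to a fixed point lying in the box of equation (13).

```python
import numpy as np

# Illustrative instance of the R = 2 self-social NCPM map h of eq. (11).
# All numeric values are our own choices, not values from the paper.
n = 3
A_t = np.array([[0.0, 0.5, 0.5],      # A~: row-stochastic, zero diagonal
                [0.5, 0.0, 0.5],
                [0.5, 0.5, 0.0]])
alpha = np.array([0.3, 0.4, 0.5])      # open-mindedness alpha_i
g11, g22 = 0.6, 0.7                    # gamma_11 <= gamma_22
g12, g21 = 1 - g11, 1 - g22            # rows of Gamma sum to one

def h(x):
    ax = A_t @ x                       # x_{-i} = (A~ x)_i
    return (g12 + (1 - g12 - g21) * x + g11 * alpha * ax
            - g22 * alpha * x + (g22 - g11) * alpha * x * ax)

p = np.zeros(n)                        # arbitrary initial condition
for _ in range(500):
    p = h(p)

print(p)
# p* is a fixed point and lies in the box of equation (13).
assert np.max(np.abs(h(p) - p)) < 1e-10
assert np.all(p >= 0.5 - 1e-9) and np.all(p <= g12 / (g12 + g21) + 1e-9)
```

For these parameters condition (16) holds (α_i ≤ 1.3/1.5), so the iteration is a contraction and converges from any start.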
According to equation (11), rewrite the i-th entry of h(x) as h(x)_i = γ_12 + γ_11 α_i x_{−i} + θ_i x_i, where θ_i = 1 − γ_12 − γ_21 − γ_22 α_i + (γ_22 − γ_11) α_i x_{−i}. The maximum value of θ_i is 1 − γ_12 − γ_21 − γ_11 α_i, obtained when x_{−i} = 1. Therefore, θ_i x_i ≤ max(1 − γ_12 − γ_21 − γ_11 α_i, 0). --- Then we have h(x)_i ≤ γ_12 + γ_11 α_i + max(1 − γ_12 − γ_21 − γ_11 α_i, 0) = max(γ_22, γ_12 + γ_11 α_i) ≤ 1. --- The inequality above leads to h(x) ≤ 1_n for any x ∈ [0, 1]^n. Since h maps [0, 1]^n to [0, 1]^n itself, according to the Brouwer fixed-point theorem, there exists p* such that h(p*) = p*. This concludes the proof of the existence of a fixed point. Any fixed point of h satisfies h(p*) = p*, i.e.,

0_n = γ_12 1_n + γ_11 diag(α)Ãp* + (γ_22 − γ_11) diag(α) diag(p*)Ãp* − (γ_12 + γ_21)p* − γ_22 diag(α)p*.   (17)

Therefore, p* = γ_12 K^{−1} 1_n + γ_11 K^{−1} diag(α)Ãp* + (γ_22 − γ_11)K^{−1} diag(α) diag(p*)Ãp*, where K = (γ_12 + γ_21)I + γ_22 diag(α) is a positive diagonal matrix. Define the map T : R^n → R^n by

T(x) = γ_12 K^{−1} 1_n + γ_11 K^{−1} diag(α)Ãx + (γ_22 − γ_11)K^{−1} diag(α) diag(x)Ãx.   (18)

The map h has a unique fixed point if and only if the map T has a unique fixed point. For any x, y ∈ [0, 1]^n, define the distance d(x, y) = ‖x − y‖_∞. Then ([0, 1]^n, d) is a complete metric space. According to equation (18), since K^{−1}, diag(α), Ã, γ_22 − γ_11 and diag(x) are all nonnegative, for any x, y ∈ [0, 1]^n with x ≤ y, we have T(x) ≤ T(y). Moreover, T(0_n) = γ_12 K^{−1} 1_n ≥ 0_n, and T(1_n) = γ_12 K^{−1} 1_n + γ_11 K^{−1} α + (γ_22 − γ_11) K^{−1} α = γ_12 K^{−1} 1_n + γ_22 K^{−1} α.
Since T(1_n)_i = (γ_12 + γ_22 α_i)/(γ_12 + γ_21 + γ_22 α_i) ≤ 1, we have T(1_n) ≤ 1_n. Therefore, T maps [0, 1]^n to [0, 1]^n. For any x, y ∈ [0, 1]^n,

T(x)_i − T(y)_i = (γ_11 α_i/K_i)(x_{−i} − y_{−i}) + ((γ_22 − γ_11) α_i/K_i)(x_i x_{−i} − y_i y_{−i}).

Moreover, |x_{−i} − y_{−i}| ≤ (Σ_{j=1}^n ã_ij) max_j |x_j − y_j| = ‖x − y‖_∞, and |x_i x_{−i} − y_i y_{−i}| ≤ max( max_i y_i² − min_i x_i², max_i x_i² − min_i y_i² ) ≤ 2‖x − y‖_∞. Therefore, |T(x)_i − T(y)_i| ≤ ℓ_i ‖x − y‖_∞, where ℓ_i = (2γ_22 − γ_11)α_i/(γ_12 + γ_21 + γ_22 α_i). One can check that ℓ_i < 1 for any i ∈ V and that ℓ_i does not depend on x and y. Let ℓ = max_i ℓ_i. Then for any x, y ∈ [0, 1]^n, ‖T(x) − T(y)‖_∞ ≤ ℓ‖x − y‖_∞ with ℓ < 1. Applying the Banach fixed-point theorem, we know that the map T possesses a unique fixed point p* in [0, 1]^n. In addition, for any p(0), the sequence {p(t)}_{t∈N} defined by p(t+1) = T(p(t)) satisfies lim_{t→∞} p(t) = p*. This concludes the proof of statement (i). For statement (ii), one can check that T maps S = { x ∈ R^n | (1/2)1_n ≤ x ≤ (γ_12/(γ_12 + γ_21))1_n } to S itself. Since T is a contraction map, the unique fixed point p* is in S. This concludes the proof of equation (13). According to equation (17), we have C_i p*_i − C_{−i} p*_{−i} = γ_12 − γ_12 p*_i, where C_i = γ_21 + γ_22 α_i and C_{−i} = γ_11 α_i + (γ_22 − γ_11)α_i p*_i. Firstly we point out that C_i > C_{−i}, since C_i − C_{−i} = γ_21 + α_i(γ_22 − γ_11)(1 − p*_i) > 0. Moreover,

p*_i − p*_{−i} = ( γ_12 − (γ_12 + γ_21 + α_i(γ_22 − γ_11)(1 − p*_i)) p*_i ) / ( γ_11 α_i + (γ_22 − γ_11)α_i p*_i ).

The right-hand side of the equation above, with 1/2 ≤ p*_i ≤ γ_12/(γ_12 + γ_21), achieves its maximum value ((1 − α_i/2)/α_i)((γ_22 − γ_11)/(γ_22 + γ_11)) at p*_i = 1/2.
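The claim ℓ_i < 1 can be confirmed by a brute-force sweep over admissible parameters (γ_22 ≥ γ_11, both rows of Γ summing to one, α_i ∈ (0, 1)); symbols follow the reconstruction used here, and the sampling ranges are our own choices.

```python
import numpy as np

# Brute-force sweep: the contraction factor
#   ell_i = (2*g22 - g11) * a / (g12 + g21 + g22 * a)
# stays below 1 for all admissible parameters (g11 <= g22, rows of Gamma
# summing to one, alpha_i in (0, 1)).
rng = np.random.default_rng(0)
for _ in range(10_000):
    g11, g22 = np.sort(rng.uniform(0.01, 0.99, 2))   # g11 <= g22
    g12, g21 = 1 - g11, 1 - g22
    a = rng.uniform(0.01, 0.99)                      # alpha_i
    ell = (2 * g22 - g11) * a / (g12 + g21 + g22 * a)
    assert ell < 1
print("ell_i < 1 on all sampled parameters")
```

Analytically, ℓ_i < 1 is equivalent to (γ_22 − γ_11)α_i < 2 − γ_11 − γ_22, which holds strictly whenever γ_22 < 1.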
This concludes the proof of equation (14). Now we prove statement (iii). With γ_11 = γ_22,

h(x) = x + γ_12 1_n − 2γ_12 x + γ_11 diag(α)(Ãx − x).

One can check that p* = (1/2)1_n is a fixed point. According to statement (i), the fixed point is unique. Let p(t) = y(t) + (1/2)1_n. Then the two-product self-social NCPM becomes y(t+1) = M y(t), where M = (1 − 2γ_12)I + γ_11 diag(α)Ã − γ_11 diag(α). For any i ∈ V, if 1 − 2γ_12 − γ_11 α_i ≥ 0, then Σ_{j=1}^n |M_ij| = 1 − 2γ_12 − γ_11 α_i + γ_11 α_i = 1 − 2γ_12 < 1; and, if 1 − 2γ_12 − γ_11 α_i < 0, then Σ_{j=1}^n |M_ij| = 2γ_12 + γ_11 α_i + γ_11 α_i − 1 < 1. Since ρ(M) ≤ ‖M‖_∞ = max_i Σ_{j=1}^n |M_ij|, the spectral radius of M is strictly less than 1. Therefore, the fixed point p* = (1/2)1_n is globally exponentially stable. Now consider the case γ_22 > γ_11. Let p(t) = y(t) + p*. Then system (12) becomes

y(t+1) = M y(t) + (γ_22 − γ_11) diag(α) diag(y(t)) Ã y(t).

The right-hand side of the equation above is a linear term M y(t) with a constant matrix M, plus a quadratic term. The matrix M can be decomposed as M = M̃ − γ_12 I, and M̃ = M̃⁽¹⁾ + M̃⁽²⁾ is further decomposed as a diagonal matrix M̃⁽¹⁾ plus a matrix M̃⁽²⁾ in which all the diagonal entries are 0. Since M̃⁽¹⁾ = (1 − γ_21)I − γ_22 diag(α) + (γ_22 − γ_11) diag(α) diag(Ãp*) is a positive diagonal matrix, and M̃⁽²⁾ = γ_11 diag(α)Ã + (γ_22 − γ_11) diag(α) diag(p*)Ã is a matrix with all the diagonal entries being zero and all the off-diagonal entries being nonnegative, the matrix M̃ = M̃⁽¹⁾ + M̃⁽²⁾ is nonnegative. Since Ã = diag(1/N_1, 1/N_2, …, 1/N_n)A, the matrix M̃ can be written in the form DA + E, where A is symmetric and D, E are positive diagonal matrices.
One can easily prove that all the eigenvalues of any matrix of the form M̃ = DA + E are real, since M̃ is similar to the symmetric matrix D^{1/2}(A + D^{−1}E)D^{1/2}. The local stability of p* is equivalent to the inequality ρ(M) < 1, which is in turn equivalent to the conjunction of the following two conditions: λ_max(M̃) < 1 + γ_12 and λ_min(M̃) > −1 + γ_12. First we prove λ_max(M̃) < 1 + γ_12. Since A is irreducible and α ≫ 0_n, p* ≫ 0_n, we have M̃_ij > 0 if and only if a_ij > 0 for any i ≠ j. In addition, M̃_ii > 0 for any i ∈ V. Therefore, M̃ is irreducible, aperiodic and thus primitive. According to the Perron–Frobenius theorem, λ_max(M̃) = ρ(M̃). We have ρ(M̃) ≤ ‖M̃‖_∞ and, for any i ∈ V,

Σ_j |M̃_ij| = 1 − γ_21 + (γ_22 − γ_11)(α_i(p*_{−i} + p*_i) − α_i).

According to equation (13), for any i ∈ V,

1 − γ_21 ≤ Σ_j |M̃_ij| ≤ 1 − γ_21 + ((γ_12 − γ_21)²/(γ_12 + γ_21)) α_i < 1 + γ_12.

Therefore, λ_max(M̃) ≤ 1 − γ_21 + ((γ_12 − γ_21)²/(γ_12 + γ_21)) max_i α_i < 1 + γ_12. Now we prove λ_min(M̃) > −1 + γ_12. According to the Gershgorin circle theorem, λ_min(M̃) ≥ min_i ( M̃_ii − Σ_{j≠i} |M̃_ij| ). For any i ∈ V,

M̃_ii − Σ_{j≠i} |M̃_ij| = 1 − γ_21 − α_i(γ_22 + γ_11) − α_i(γ_22 − γ_11)(p*_i − p*_{−i}).

According to equation (14), p*_i − p*_{−i} ≤ ((1 − α_i/2)/α_i)((γ_22 − γ_11)/(γ_22 + γ_11)). Moreover, inequality (15) is necessary and sufficient for

((1 − α_i/2)/α_i)((γ_22 − γ_11)/(γ_22 + γ_11)) ≤ ((1 − α_i)/α_i)((γ_22 + γ_11)/(γ_22 − γ_11)).

Therefore, M̃_ii − Σ_{j≠i} |M̃_ij| > 1 − γ_21 − α_i(γ_22 + γ_11) − (1 − α_i)(γ_22 + γ_11) = −1 + γ_12 for any i ∈ V. That is to say, inequality (15) is sufficient for ρ(M) < 1, i.e., for the local stability of p*.
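A numerical spot-check of the local-stability claim ρ(M) < 1, under the same illustrative parameters as before (our own choices, satisfying γ_22 > γ_11 and condition (16)): compute p* by iterating h, assemble the Jacobian of h at p* term by term from equation (11), and verify its spectral radius.

```python
import numpy as np

# Local stability check for gamma_22 > gamma_11: find p* by iterating h,
# build the Jacobian M = Dh(p*) from equation (11), verify rho(M) < 1.
n = 3
A_t = np.array([[0.0, 0.5, 0.5],
                [0.5, 0.0, 0.5],
                [0.5, 0.5, 0.0]])
alpha = np.array([0.3, 0.4, 0.5])
g11, g22 = 0.6, 0.7
g12, g21 = 1 - g11, 1 - g22

def h(x):
    ax = A_t @ x
    return (g12 + (1 - g12 - g21) * x + g11 * alpha * ax
            - g22 * alpha * x + (g22 - g11) * alpha * x * ax)

p = np.full(n, 0.5)
for _ in range(500):
    p = h(p)

# Jacobian of h at p*: differentiate each term of equation (11); the
# quadratic term contributes diag(A~ p*) + diag(p*) A~.
D = np.diag
M = ((1 - g12 - g21) * np.eye(n) + g11 * D(alpha) @ A_t - g22 * D(alpha)
     + (g22 - g11) * D(alpha) @ (D(A_t @ p) + D(p) @ A_t))

rho = max(abs(np.linalg.eigvals(M)))
print(rho)
assert rho < 1    # p* is locally (here, in fact globally) stable
```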
This concludes the proof of statement (iv). For statement (v), observe that the maps h and T satisfy the following relation: h(x) = KT(x) + (I − K)x for any x ∈ [0, 1]^n, where K = (γ_12 + γ_21)I + γ_22 diag(α). For any x, y ∈ [0, 1]^n, |h(x)_i − h(y)_i| = |K_i(T(x)_i − T(y)_i) + (1 − K_i)(x_i − y_i)|. We estimate the upper bound of |h(x)_i − h(y)_i| in terms of ‖x − y‖_∞ in two cases. Case 1: K_i = γ_12 + γ_21 + γ_22 α_i ≤ 1. Firstly, γ_11/γ_22 + 1 − 1/γ_22 ≤ (γ_11 + γ_22)/(3γ_22 − γ_11) always holds as long as γ_11 ≤ γ_22. Then recall that, for any x, y ∈ [0, 1]^n, |T(x)_i − T(y)_i| ≤ ℓ_i ‖x − y‖_∞, where ℓ_i = (2γ_22 − γ_11)α_i/K_i < 1. Therefore, |h(x)_i − h(y)_i| ≤ (K_i ℓ_i + 1 − K_i)‖x − y‖_∞ for any such i ∈ V. The coefficient K_i ℓ_i + 1 − K_i is always strictly less than 1 because it is a convex combination of ℓ_i < 1 and 1. Therefore, h is a contraction map. Case 2: There exists some i such that K_i = γ_12 + γ_21 + γ_22 α_i > 1. In this case, for any such i, |h(x)_i − h(y)_i| ≤ (K_i ℓ_i + K_i − 1)‖x − y‖_∞. If α_i ≤ (γ_11 + γ_22)/(3γ_22 − γ_11), then we have K_i ℓ_i + K_i − 1 = (3γ_22 − γ_11)α_i + γ_12 + γ_21 − 1 ≤ γ_11 + γ_22 + γ_12 + γ_21 − 1 = 1. Therefore, h is also a contraction map. Combining Cases 1 and 2, we conclude that if α_i ≤ (γ_11 + γ_22)/(3γ_22 − γ_11) for any i ∈ V, then h is a contraction map. According to the proof of statement (i), h maps [0, 1]^n to [0, 1]^n. Therefore, according to the Banach fixed-point theorem, for any initial condition p(0) ∈ [0, 1]^n, the solution p(t) converges to p* exponentially fast, and the convergence rate is upper bounded by max_i max(ℓ_i, K_i ℓ_i + K_i − 1). The remainder of this section consists of some remarks on Theorem 10.
Firstly, equation (13) has a meaningful interpretation: the condition γ_22 ≥ γ_11 implies that product H_2 is advantageous over H_1, in the sense that nodes in state H_1 have a higher or equal tendency of converting to H_2 than the other way around. As a result, the fixed point is in favor of H_2, i.e., p* ≥ (1/2)1_n. From the proof of statement (iv), we know that, around the unique fixed point, the linearized system is y(t+1) = M y(t), where M is a stable Metzler matrix. Usually, Metzler matrices appear in continuous-time network dynamics models, e.g., the epidemic spreading models [35], [36]. In the proof of Theorem 10 (iv), we provide an example of a Metzler matrix in a stable discrete-time system. Figure 8 plots the right-hand sides of inequalities (15) and (16), respectively, as functions of the ratio γ_11/γ_22, for the case 0 < γ_11/γ_22 < 1. One can observe that, for a large range of γ_11/γ_22, the sufficient condition we propose for the global
stability is more conservative than the sufficient condition for the local stability. One major difference in asymptotic behavior between the self-social and the social-self NCPM is that, in the self-social NCPM, the individuals' state probability distributions are not necessarily identical. Moreover, distinct from the social-self NCPM, for any of the four cases of G(Γ) defined in Definition 6, the asymptotic behavior of the self-social NCPM depends not only on the structure of G(Γ), but also on the structure of the social network G(Ã) and the individual open-mindedness α. --- V. NON-COOPERATIVE QUALITY-SEEDING GAMES Based on the social-self NCPM given by equation (4), we propose two non-cooperative multi-player games, distinct in their pay-off functions, and analyze their Nash equilibria. These two games share the common idea that companies benefit from the adoption of their products, and thereby invest both in improving their products' quality and in seeding, e.g., advertisement and promotion, to maximize their products' adoption probabilities.
All the notation in Table I and the previous sections still applies and, in Table II, we introduce some additional notation and functions used exclusively in this section: φ_r(x^(i); ω) — φ_r : R^R_{≥0} → R_{≥0}, defined by φ_r(x^(i); ω) = x_ir/(x^(i) 1_R + ω), with model parameter ω > 0; g_r(w; β) — g_r : R^{R×1}_{≥0} → R_{≥0}, defined by g_r(w; β) = (w_r + β_r)/(1_R^T(w + β)), where β ∈ R^R_{>0}; σ_r(t) — σ_r(t) = (σ_1r(t), …, σ_nr(t))^T = Ãp_r(t); u_r(P) — the single-stage reward for player r with system state P: u_r(P) = 1_n^T p_r. A. Repeated one-shot quality-seeding game 1) Game set-up and analysis: In this subsection we consider the scenario in which the companies allocate their investments to maximize their instant pay-offs. The game is referred to as the repeated one-shot quality-seeding game and is formalized as follows. (a) Players: the players are the R companies; each company r has a product H_r competing on the network. (b) Players' actions: at each stage (or, equivalently, time step) t, each company r makes two types of investments: the investment in seeding, x_r(t), and the investment in quality, w_r(t). The total investment is bounded by a fixed budget c_r, i.e., 1_n^T x_r(t) + w_r(t) ≤ c_r. (c) Rules: the investment in seeding changes the individuals' product-adoption probabilities in the social conversion process. For any individual i ∈ V, each company r's investment x_ir(t) creates a "virtual node" in the network, which always adopts product H_r. In the social conversion process, the probability that individual i picks company r's virtual node is φ_r(x^(i)(t); ω), for any i ∈ V and r ∈ Θ. The probability that individual i picks individual j in the social conversion process is then given by (1 − Σ_{s=1}^R φ_s(x^(i); ω)) ã_ij. The investment in quality, w_r(t), influences the product-conversion graph.
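The conversion rules just described can be sketched as one probability update (Python/NumPy; every numeric value below is an illustrative assumption): with probability α_i an individual does social conversion, splitting attention between real neighbors and the companies' virtual nodes, and with probability 1 − α_i it self-converts according to the quality shares g_r.

```python
import numpy as np

# One stage of the seeding/quality conversion rules; every numeric value
# is an illustrative assumption.  phi and g follow the Table II forms.
rng = np.random.default_rng(1)
n, R, omega = 4, 2, 2.0
alpha = rng.uniform(0.2, 0.9, n)            # open-mindedness alpha_i
beta = np.array([1.0, 1.5])                 # preset qualities beta_r
A_t = rng.uniform(size=(n, n))
np.fill_diagonal(A_t, 0.0)
A_t /= A_t.sum(axis=1, keepdims=True)       # row-stochastic, zero diagonal

X = rng.uniform(0.0, 3.0, size=(n, R))      # seeding investments x_ir
w = np.array([2.0, 4.0])                    # quality investments w_r

P = rng.uniform(size=(n, R))
P /= P.sum(axis=1, keepdims=True)           # each row of P(t) is a distribution

S = X.sum(axis=1, keepdims=True)            # x^(i) 1_R
phi = X / (S + omega)                       # prob. of picking r's virtual node
g = (w + beta) / (w + beta).sum()           # quality shares g_r, sum to one

# Social conversion (weight alpha_i) over real neighbors and virtual
# nodes, plus self conversion (weight 1 - alpha_i) via quality shares.
P_next = (alpha[:, None] * (omega / (S + omega)) * (A_t @ P)
          + alpha[:, None] * phi
          + (1 - alpha)[:, None] * g)

print(P_next.round(4))
assert np.allclose(P_next.sum(axis=1), 1.0)  # rows stay distributions
assert np.all(P_next >= 0)
```

The row-sum assertion reflects that seeding and quality investments reallocate probability mass among products but keep each row of P(t) a probability distribution.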
We assume that the product-conversion graph is associated with the rank-one adjacency matrix [δ_1 1_R, δ_2 1_R, …, δ_R 1_R], where each δ_r = g_r(w(t); β) is determined by all the companies' investments in product quality and by the products' preset qualities β = (β_1, …, β_R) ≫ 0_R. With each company r's action y_r(t) = (x_r(t), w_r(t)) at time t, the dynamics of the product-adoption probabilities P(t) ∈ R^{n×R}_{≥0} is given by

P(t+1) = H(P(t), y_1(t), …, y_R(t)),   (19)

where the map H is defined by

H(P, y_1(t), …, y_R(t))_ir = α_i (ω/(x^(i)(t) 1_R + ω)) Σ_{k=1}^n ã_ik p_kr + α_i φ_r(x^(i)(t); ω) + (1 − α_i) g_r(w(t); β),

for any P ∈ S_nR(1_n), i ∈ V, and r ∈ Θ. (d) Pay-offs and goals: at each stage t, each player r chooses its action y_r(t) in order to maximize the pay-off u_r(P(t+1)) = 1_n^T p_r(t+1), i.e., the total adoption probability of product H_r at the next stage. The following theorem gives a closed-form expression for the Nash equilibrium at each stage and characterizes the system's asymptotic behavior when every player adopts the equilibrium policy. Theorem 11 (Repeated one-shot quality-seeding game): Consider the R-player quality-seeding game described in this subsection. Further assume that the budget limit c_r for every company r satisfies

c_r ≥ max{ (n/min_i α_i − 1)ω − β_r, (1_n^T α/(n − 1_n^T α)) β_r }.
(20) Then we have the following conclusions: (i) for each t, there exists a unique pure-strategy Nash equilibrium Y*(t) = (X*(t), w*(t)), given by

x*_ir(t) = (α_i/n) c_r + (α_i ω/n) 1_n^T σ_r(t) + (α_i/n) β_r − σ_ir(t) ω,   (21)

w*_r(t) = (1 − 1_n^T α/n)(c_r + 1_n^T σ_r(t) ω) − (1_n^T α/n) β_r,   (22)

and x*_ir(t) ≥ 0, w*_r(t) ≥ 0 for any i ∈ V, r ∈ Θ; (ii) if (X(t), w(t)) = (X*(t), w*(t)) for any t ∈ N and P(0) ∈ S_nR(1_n), then P(t) obeys the iteration

p_r(t+1) = ((c_r + β_r + 1_n^T Ã p_r(t) ω)/(1_R^T c + 1_R^T β + nω)) 1_n,   (23)

for any r ∈ Θ and t ∈ N. As a result, p_r(t) converges to ((c_r + β_r)/(1_R^T(c + β))) 1_n exponentially fast, with rate nω/(1_R^T(c + β) + nω). Proof: Since we only discuss the actions at stage t in this proof, for simplicity of notation and without causing any confusion, we write x_ir (w_r, x*_ir, w*_r resp.) for x_ir(t) (w_r(t), x*_ir(t), w*_r(t) resp.). If company r knows the actions of all the other companies at time step t, i.e., y_s for any s ≠ r, the optimal response for company r is the solution to the following optimization problem:

minimize_{(x,w) ∈ Ω_r} −1_n^T p_r(t+1) subject to 1_n^T x + w − c_r ≤ 0.   (24)

Let x̃_ir = x_ir + σ_ir(t)ω, w̃_r = w_r + β_r, and L_r(x_r, w_r, λ_r) = −1_n^T p_r(t+1) + λ_r(1_n^T x_r + w_r − c_r), for any i ∈ V and r ∈ Θ. The solution to the optimization problem (24) satisfies

∂L_r/∂x_ir = −α_i (Σ_{s≠r} x̃_is)/(Σ_{s=1}^R x̃_is)² + λ_r = 0,   (25)

∂L_r/∂w_r = −1_n^T(1_n − α) (Σ_{s≠r} w̃_s)/(1_R^T w̃)² + λ_r = 0,   (26)

∂L_r/∂λ_r = 1_n^T x_r + w_r − c_r = 0.   (27)

According to the definition of Nash equilibrium, (x*_r, w*_r) solves the optimization problem (24) with (x_s, w_s) = (x*_s, w*_s) for any s ≠ r. One immediate result is that 1_n^T x*_r + w*_r − c_r = 0 for any r ∈ Θ.
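Conclusion (ii) can be checked with a few lines: once every player invests at the stage-Nash equilibrium, each column of P(t) becomes uniform after one step, so the iteration reduces to a scalar linear recursion whose limit is the budget-plus-quality share (c_r + β_r)/1_R^T(c + β). Budgets, β and ω below are our illustrative picks (β written as reconstructed here).

```python
import numpy as np

# Closed-loop check of conclusion (ii): with p_r(t) = q_r(t) 1_n and A~
# row-stochastic, 1_n' A~ p_r(t) = n q_r(t), so (23) becomes scalar.
# Budgets c, qualities beta and omega are illustrative choices.
n, omega = 5, 2.0
c = np.array([30.0, 60.0])
beta = np.array([1.0, 1.5])

q = np.array([0.9, 0.1])                    # arbitrary initial shares
denom = c.sum() + beta.sum() + n * omega
for _ in range(200):
    q = (c + beta + n * omega * q) / denom  # scalar form of (23)

share = (c + beta) / (c + beta).sum()       # predicted limit in Theorem 11
print(q, share)
assert np.allclose(q, share)
# The per-step contraction factor is n*omega/denom, matching the stated
# convergence rate n*omega / (1_R'(c + beta) + n*omega).
```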
Moreover, summing the optimality condition (25) over i ∈ V determines the multiplier λ_r and leads to

(α_i Σ_{s≠r} x̃*_is)/(Σ_{k=1}^n α_k Σ_{s≠r} x̃*_ks) = (Σ_{s=1}^R x̃*_is)/(Σ_{s=1}^R (c_s − w*_s + 1_n^T σ_s(t) ω)).   (28)

The right-hand side of the equation above does not depend on the product index r. Therefore, for any r ∈ Θ,

x̃*_ir = (α_i/1_n^T α)(c_r − w*_r + 1_n^T σ_r(t) ω).   (29)

Combining equations (29) and (26), we obtain (c_r − w*_r + 1_n^T σ_r(t)ω)/w̃*_r = (c_ρ − w*_ρ + 1_n^T σ_ρ(t)ω)/w̃*_ρ = ν, for any r, ρ ∈ Θ and some constant ν. Substituting the equation above into equation (26), we solve ν = 1_n^T α/1_n^T(1_n − α). Therefore we obtain equation (22), and by substituting equation (22) into equation (29) we obtain equation (21). The uniqueness of the pure-strategy Nash equilibrium (X*, w*) follows from the computation. Moreover, condition (20) guarantees x*_ir ≥ 0 and w*_r ≥ 0 for any i ∈ V and r ∈ Θ. Substituting equations (21) and (22) into the dynamical system (19) and simplifying, we obtain equation (23) and thereby all the results in conclusion (ii). 2) Interpretations and remarks: The basic idea of the seeding-quality trade-off in the competitive quality-seeding game is similar to the work by Fazeli et al. [32], but in our model the players take actions at every time step, instead of only at the beginning of the game; moreover, our model is built on a different propagation model. Theorem 11 reveals the behavior of the competitive propagation dynamics under the players' rational but myopic actions, and provides strategic insights on the investment decisions and the seeding-quality trade-off for short-term reward maximization. (a) Interpretation of σ_ir(t): by definition, σ_ir(t) is the average probability, among all the neighbors of individual i, of adopting product H_r at time step t.
The larger σ_ir(t) is, the more individual i is inclined to adopt H_r via social conversion. Therefore, σ_ir(t) characterizes the current "social attraction" of H_r for individual i, and 1_n^T σ_r(t)/n characterizes the current overall social attraction of product H_r in the network. (b) Seeding-quality trade-off: according to equation (22), at the Nash equilibrium, the investment in H_r's product quality monotonically decreases with 1_n^T α/n and increases with 1_n^T σ_r. This observation implies that: 1) in a society with relatively low open-mindedness, the competing companies should place relatively more emphasis on improving their products' quality than on seeding, and vice versa; 2) for products that do not have much social attraction, seeding is more efficient than improving product quality. (c) Allocation of seeding resources among the individuals: according to equation (21), (d) Nash equilibrium on the boundary: without condition (20), the right-hand sides of equations (21) and (22) could be non-positive. In this case, the Nash equilibrium would be on the boundary of the feasible action set, i.e., some of the x*_ir(t) or w*_r(t) might be 0. --- B. Dynamic quality-seeding game with infinite horizon In this subsection we introduce a multi-stage game among players more farsighted than those in the previous subsection: the players aim to maximize the accumulated pay-offs over all stages. We refer to this game as the dynamic quality-seeding game. The model set-up is the same as that of the game defined in the previous subsection, except for the following two modifications: (a) Players' policies: denote by Y_r the set of functions mapping S_nR(1_n) to Ω_r. Each player r's policy is a sequence of maps, denoted Y_r = {Y_{r,t}}_{t∈N}, where Y_{r,t} ∈ Y_r for any t. Player r's action at each stage t is thus given by y_r(t) = Y_{r,t}(P(t)).
We refer to Y_r = {Y_{r,t}}_{t∈N} as a stationary policy if Y_{r,t} = Y_{r,τ} for any t ≠ τ, and in that case we simply write Y_r for the map used at every stage. (b) Pay-offs and goals: denote by v_r(P; Y_1, …, Y_R) the pay-off of player r, with initial condition P(0) = P and each player s adopting the policy Y_s. The pay-off v_r(P; Y_1, …, Y_R) is given by the accumulated stage pay-offs with discount, that is,

v_r(P; Y_1, …, Y_R) = Σ_{t=0}^∞ λ^t u_r(P(t)),

where P(0) = P and P(t+1) = H(P(t); Y_1(P(t)), …, Y_R(P(t))) for any t ∈ N. This model set-up defines a multi-stage non-cooperative dynamic game with infinite horizon. One interpretation of the discounted accumulated pay-off is that people tend to value immediate profit more than future profit. An alternative interpretation is that the discount factor λ characterizes the interest rate 1/λ − 1 when the players deposit their current pay-offs in a bank or use them for some other investment. The R-tuple (Y*_1, …, Y*_R) is a Nash equilibrium if, for any P ∈ S_nR(1_n) and r ∈ Θ, v_r(P; Y*_1, …, Y*_R) ≥ v_r(P; Y*_1, …, Y*_{r−1}, Y_r, Y*_{r+1}, …, Y*_R) for any Y_r ∈ Y_r^∞ = Y_r × Y_r × ⋯. In this subsection, we limit our discussion to the case of two players. The following theorem presents some results on the stationary Nash equilibrium and the equilibrium pay-off function for this dynamic quality-seeding game. Theorem 12 (Two-player infinite-horizon dynamic game): Consider the dynamic quality-seeding game defined in this subsection, with R = 2. Define the subset of continuously differentiable functions V = { v : [0, 1]^n → R | v satisfies properties P1 and P2 }, where P1: p ≤ p̃ implies v(p) ≤ v(p̃) for any p, p̃ ∈ [0, 1]^n; P2: v(p) is convex in p.
We conclude that: (i) there exists a Nash equilibrium (Y*_1, Y*_2), where Y*_1 and Y*_2 are both stationary policies; (ii) the total pay-off for Player 2 at this Nash equilibrium is given by v_2(P; Y*_1, Y*_2) = v*(Pe_2), where e_2 is the second standard basis vector of R² and v* is the unique fixed point of the map T : V → V, defined by

T v(p) = 1_n^T p + λ sup_{y_2 ∈ Ω_2} inf_{y_1 ∈ Ω_1} v(H(P; y_1, y_2)e_2),

where P = [1_n − p, p] ∈ R^{n×2}. As a result, v_1(P; Y*_1, Y*_2) = n/(1 − λ) − v_2(P; Y*_1, Y*_2). Before proving the theorem above, we summarize Theorem 4.4 and Property 4.1 in [37], on two-player zero-sum continuous games, in the following lemma. Lemma 13 (Pure-strategy Nash equilibrium): Consider the two-player zero-sum continuous game with Player 1 as the minimizer and Player 2 as the maximizer. Suppose the action sets of Players 1 and 2, denoted by Ω_1 and Ω_2 respectively, are both compact and convex subsets of finite-dimensional Euclidean spaces. If the cost function v(y_1, y_2) : Ω_1 × Ω_2 → R is continuously differentiable, convex in y_1, and concave in y_2, then: (1) the game admits at least one saddle-point Nash equilibrium in pure strategies; (2) if there are multiple saddle points, they satisfy the ordered interchangeability property; that is, if (y*_1, y*_2) and (ỹ_1, ỹ_2) are saddle points, so are (y*_1, ỹ_2) and (ỹ_1, y*_2). Proof of Theorem 12: In this proof, for simplicity, denote by p the second column of the matrix P, i.e., P = [1_n − p, p], and correspondingly P̃ = [1_n − p̃, p̃]. Since Ω_1 and Ω_2 are compact subsets of R^{n+1}, for any v ∈ V there exists (y_1, y_2) such that T v(p) = 1_n^T p + λ v(H(P; y_1, y_2)e_2). Moreover, from the expression of the map H, one can deduce that H(P; y_1, y_2) satisfies: p ≤ p̃ implies H(P; y_1, y_2)e_2 ≤ H(P̃; y_1, y_2)e_2, for any (y_1, y_2) ∈ Ω_1 × Ω_2 and p, p̃ ∈ [0, 1]^n.
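The operator T in Theorem 12 is a discounted sup-inf (max-min) Bellman operator, hence a λ-contraction in the sup norm. The toy value iteration below illustrates this on a one-dimensional stand-in for the game: the state grid, action sets and transition rule are invented for illustration; only the contraction structure mirrors the theorem.

```python
import numpy as np

# Toy value iteration for a discounted sup-inf Bellman operator
#   T v(p) = u(p) + lam * max_{y2} min_{y1} v(f(p, y1, y2)).
# State grid, action sets and transition f are invented; only the
# lam-contraction structure mirrors the operator of Theorem 12.
lam = 0.8
grid = np.round(np.arange(0.0, 1.0001, 0.1), 10)   # discretized state p
acts = np.array([-0.1, 0.0, 0.1])                  # each player nudges p

def T(v):
    Tv = np.empty_like(v)
    for k, p in enumerate(grid):
        best = -np.inf                             # Player 2 maximizes ...
        for a2 in acts:
            worst = min(                           # ... Player 1 minimizes
                v[np.argmin(np.abs(grid - np.clip(p + a2 - a1, 0.0, 1.0)))]
                for a1 in acts)
            best = max(best, worst)
        Tv[k] = p + lam * best                     # stage reward u(p) = p
    return Tv

v = np.zeros_like(grid)
diffs = []
for _ in range(60):
    v_new = T(v)
    diffs.append(np.max(np.abs(v_new - v)))
    v = v_new

print(diffs[-1])
# Successive sup-norm differences shrink by at least the factor lam, so T
# has a unique fixed point (Banach) and the iteration converges to it.
assert all(d2 <= lam * d1 + 1e-12 for d1, d2 in zip(diffs, diffs[1:]))
assert diffs[-1] < 1e-5
```

The same argument is what makes the fixed-point characterization of v* in statement (ii) computable by iteration, as exploited in the simulations below.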
This leads to the conclusion that T v also satisfies property P1. Moreover, by definition, H(P; y_1, y_2) is linear in P. Since v(p) is convex in p, one can check that T v(p) is also convex in p. Therefore, T v satisfies property P2. According to the expression of the map H(P; y_1, y_2), the resulting pair (Y*_1, Y*_2) is a Nash equilibrium of the dynamic game. This concludes the proof. Theorem 12 provides an iterative algorithm to compute the stationary Nash policy (Y*_1, Y*_2) and the players' respective pay-offs at the Nash equilibrium. Figure 9 gives a simulation-based comparison between the Nash policies for the dynamic game discussed in this subsection and for the repeated one-shot game of the previous subsection. The model parameters are set as n = 3, α = (0.51, 0.87, 0.77), ω = 5, β_1 = β_2 = 1, c_1 = 30, c_2 = 60, λ = 0.8, and Ã such that ã_13 = ã_23 = 1, ã_31 = ã_32 = 0.5, and ã_ij = 0 otherwise. Simulation results show that, with the same initial condition, the players' total pay-offs at the respective Nash equilibria of the two games are very close to each other. Moreover, from Figure 9 we observe that, for each of the two games, the players' pay-offs are almost linear in the initial average probability of adopting H_2. --- VI. CONCLUSION This paper discusses a class of competitive propagation models based on two product-adoption mechanisms: social conversion and self conversion. Applying the independence approximation, we propose two difference-equation systems, referred to as the social-self NCPM and the self-social NCPM respectively. Theoretical analysis reveals that the structure of the product-conversion graph plays an important role in determining the nodes' asymptotic state probability distributions. Simulation results show the high accuracy of the independence approximation and illustrate the asymptotic behavior of the original social-self Markov-chain model.
Based on the social-self NCPM, we propose two types of competitive propagation games and discuss their Nash equilibria, as well as the trade-off between seeding and quality in the repeated one-shot game. One possible direction for future work is a detailed investigation of the Nash equilibrium on the boundary. It would also be of value to extend the analysis in Section V.B to the case of multi-player dynamic games. Another open problem is the stability analysis of the self-social NCPM with R > 2. Simulation results support the claim that, for R > 2, there also exists a unique fixed point P* and, for any initial condition P(0) ∈ S_nR(1_n), the solution P(t) to equation (10) converges to P*. We leave this statement as a conjecture.
In this paper we propose a class of propagation models for multiple competing products over a social network. We consider two propagation mechanisms: social conversion and self conversion, corresponding, respectively, to endogenous and exogenous factors. A novel concept, the product-conversion graph, is proposed to characterize the interplay among competing products. According to the chronological order of social and self conversions, we develop two Markov-chain models and, based on the independence approximation, approximate them with two corresponding difference-equation systems. Our theoretical analysis of these two approximate models reveals the dependence of their asymptotic behavior on the structures of both the product-conversion graph and the social network, as well as on the initial condition. In addition to the theoretical work, we investigate via numerical analysis the accuracy of the independence approximation and the asymptotic behavior of the Markov-chain model for the case where social conversion occurs before self conversion. Finally, we propose two classes of games based on the competitive propagation model: the repeated one-shot game and the dynamic infinite-horizon game. We characterize the quality-seeding trade-off for the first game and the Nash equilibrium in both games.